<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>http://wiki.ciscolinux.co.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pio2pio</id>
	<title>Ever changing code - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.ciscolinux.co.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pio2pio"/>
	<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php/Special:Contributions/Pio2pio"/>
	<updated>2026-04-05T18:35:27Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.37.2</generator>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7072</id>
		<title>Kubernetes/Progressive Delivery Flux and Flagger</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7072"/>
		<updated>2026-01-15T10:53:34Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install FluxCD Cli flux */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://github.com/fluxcd/flux2 Flux v2] =&lt;br /&gt;
&lt;br /&gt;
[https://fluxcd.io/flux/ Flux v2 Documentation]&lt;br /&gt;
&lt;br /&gt;
Flux v2 architecture&lt;br /&gt;
:[[File:ClipCapIt-210524-232835.PNG]]&lt;br /&gt;
&lt;br /&gt;
Flux v2 - Webhooks and notifications&lt;br /&gt;
:[[File:ClipCapIt-210524-233028.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Install FluxCD Cli [https://fluxcd.io/flux/installation/#install-the-flux-cli &amp;lt;code&amp;gt;flux&amp;lt;/code&amp;gt;] =&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;fluxctl&amp;lt;/code&amp;gt; was the CLI client for Flux v1.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install or upgrade using official install.sh (option-1)&lt;br /&gt;
export FLUX_VERSION=2.7.5; curl -s https://fluxcd.io/install.sh | sudo -E bash&lt;br /&gt;
curl -s https://fluxcd.io/install.sh | sudo bash # latest&lt;br /&gt;
&lt;br /&gt;
# Version check&lt;br /&gt;
flux version &lt;br /&gt;
flux: v2.7.5&lt;br /&gt;
distribution: flux-v2.7.5&lt;br /&gt;
helm-controller: v1.4.5&lt;br /&gt;
kustomize-controller: v1.7.3&lt;br /&gt;
notification-controller: v1.7.5&lt;br /&gt;
source-controller: v1.7.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# enable completions in ~/.bash_profile&lt;br /&gt;
. &amp;lt;(flux completion bash)&lt;br /&gt;
&lt;br /&gt;
# Pre check&lt;br /&gt;
flux check --pre&lt;br /&gt;
► checking prerequisites&lt;br /&gt;
✗ Kubernetes version v1.27.16 does not match &amp;gt;=1.32.0-0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Cluster bootstrap =&lt;br /&gt;
The Flux v2 bootstrap process installs Flux onto a cluster and stores (commits) its own manifests in a Git repository.&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#generic-git-server Generic Git Server], including GCP [https://cloud.google.com/source-repositories/docs Cloud Source Repositories]&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#bootstrap-with-terraform Bootstrap with Terraform]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
FLUX_GIT_USERNAME=my-git-username&lt;br /&gt;
FLUX_GIT_EMAIL=my-git-email@example.com&lt;br /&gt;
flux bootstrap git \&lt;br /&gt;
  --author-email=$FLUX_GIT_EMAIL \&lt;br /&gt;
  --url=ssh://git@github.com/$FLUX_GIT_USERNAME/gitops-istio \&lt;br /&gt;
  --branch=main \&lt;br /&gt;
  --path=clusters/my-cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
At bootstrap, Flux generates an SSH key pair and prints the public key. To sync your cluster state with Git, copy the public key and create a deploy key with write access on your GitHub repository: on GitHub, go to Settings &amp;gt; Deploy keys, click Add deploy key, tick Allow write access, paste the Flux public key and click Add key.&lt;br /&gt;
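&lt;br /&gt;
The deploy key can also be added non-interactively with the GitHub CLI. This is only a sketch: it assumes &amp;lt;code&amp;gt;gh&amp;lt;/code&amp;gt; is installed and authenticated, and that the public key printed by &amp;lt;code&amp;gt;flux bootstrap&amp;lt;/code&amp;gt; was saved to a hypothetical file &amp;lt;code&amp;gt;flux.pub&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the Flux public key as a deploy key with write access&lt;br /&gt;
# (flux.pub is a hypothetical path - paste the key flux printed)&lt;br /&gt;
gh repo deploy-key add flux.pub \&lt;br /&gt;
  --repo $FLUX_GIT_USERNAME/gitops-istio \&lt;br /&gt;
  --allow-write \&lt;br /&gt;
  --title flux&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;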
&lt;br /&gt;
&lt;br /&gt;
;[https://fluxcd.io/docs/installation/#dev-install Dev installation] does not store its own configuration state in a Git repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# option 1&lt;br /&gt;
flux install # install and upgrade&lt;br /&gt;
flux install \&lt;br /&gt;
--namespace=flux-system \&lt;br /&gt;
--network-policy=false \&lt;br /&gt;
--components=source-controller&lt;br /&gt;
&lt;br /&gt;
# option 2&lt;br /&gt;
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml&lt;br /&gt;
kustomize build https://github.com/fluxcd/flux2/manifests/install?ref=main | kubectl apply -f- # Upgrade&lt;br /&gt;
&lt;br /&gt;
# Register Git repositories and reconcile them on your cluster:&lt;br /&gt;
flux create source git podinfo \&lt;br /&gt;
  --url=https://github.com/stefanprodan/podinfo \&lt;br /&gt;
  --tag-semver=&amp;quot;&amp;gt;=4.0.0&amp;quot; \&lt;br /&gt;
  --interval=1m&lt;br /&gt;
&lt;br /&gt;
flux create kustomization podinfo-default \&lt;br /&gt;
  --source=podinfo \&lt;br /&gt;
  --path=&amp;quot;./kustomize&amp;quot; \&lt;br /&gt;
  --prune=true \&lt;br /&gt;
  --validation=client \&lt;br /&gt;
  --interval=10m \&lt;br /&gt;
  --health-check=&amp;quot;Deployment/podinfo.default&amp;quot; \&lt;br /&gt;
  --health-check-timeout=2m&lt;br /&gt;
&lt;br /&gt;
# Register Helm repositories and create Helm releases:&lt;br /&gt;
flux create source helm bitnami \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --url=https://charts.bitnami.com/bitnami&lt;br /&gt;
&lt;br /&gt;
flux create helmrelease nginx \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --release-name=nginx-ingress-controller \&lt;br /&gt;
  --target-namespace=kube-system \&lt;br /&gt;
  --source=HelmRepository/bitnami \&lt;br /&gt;
  --chart=nginx-ingress-controller \&lt;br /&gt;
  --chart-version=&amp;quot;5.x.x&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
flux uninstall --namespace=flux-system&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* [https://github.com/fluxcd/terraform-provider-flux terraform-provider-flux]&lt;br /&gt;
*[https://github.com/pio2pio/gitops-istio gitops-istio] Tutorial&lt;br /&gt;
*[https://www.youtube.com/watch?v=nGLpUCPX8JE Flux v2 Everything that you wanted to know but were afraid to ask (Stefan Prodan)] December 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bundle&lt;br /&gt;
*[https://blog.sldk.de/2021/02/introduction-to-gitops-on-kubernetes-with-flux-v2/ Introduction to GitOps on Kubernetes with Flux v2]&lt;br /&gt;
*[https://blog.sldk.de/2021/03/handling-secrets-in-flux-v2-repositories-with-sops/ Handling secrets in Flux v2 repositories with SOPS]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7071</id>
		<title>Kubernetes/Progressive Delivery Flux and Flagger</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7071"/>
		<updated>2026-01-15T10:53:09Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install Flux v2 flux command line */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://github.com/fluxcd/flux2 Flux v2] =&lt;br /&gt;
&lt;br /&gt;
[https://fluxcd.io/flux/ Flux v2 Documentation]&lt;br /&gt;
&lt;br /&gt;
Flux v2 architecture&lt;br /&gt;
:[[File:ClipCapIt-210524-232835.PNG]]&lt;br /&gt;
&lt;br /&gt;
Flux v2 - Webhooks and notifications&lt;br /&gt;
:[[File:ClipCapIt-210524-233028.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Install FluxCD Cli [https://fluxcd.io/flux/installation/#install-the-flux-cli &amp;lt;code&amp;gt;flux&amp;lt;/code&amp;gt;] =&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;fluxctl&amp;lt;/code&amp;gt; was the CLI client for Flux v1.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install or upgrade using official install.sh (option-1)&lt;br /&gt;
export FLUX_VERSION=2.7.5; curl -s https://fluxcd.io/install.sh | sudo -E bash&lt;br /&gt;
curl -s https://fluxcd.io/install.sh | sudo bash # latest&lt;br /&gt;
&lt;br /&gt;
# Version check&lt;br /&gt;
flux version &lt;br /&gt;
flux: v2.7.5&lt;br /&gt;
distribution: flux-v2.7.5&lt;br /&gt;
helm-controller: v1.4.5&lt;br /&gt;
kustomize-controller: v1.7.3&lt;br /&gt;
notification-controller: v1.7.5&lt;br /&gt;
source-controller: v1.7.4&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# enable completions in ~/.bash_profile&lt;br /&gt;
. &amp;lt;(flux completion bash)&lt;br /&gt;
&lt;br /&gt;
# Pre check&lt;br /&gt;
flux check --pre&lt;br /&gt;
► checking prerequisites&lt;br /&gt;
✗ Kubernetes version v1.27.16 does not match &amp;gt;=1.32.0-0&lt;br /&gt;
&lt;br /&gt;
# Docker images&lt;br /&gt;
docker pull fluxcd/fluxctl:1.24.3&lt;br /&gt;
docker pull ghcr.io/fluxcd/flux-cli:1.24.3 # does not work&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Cluster bootstrap =&lt;br /&gt;
The Flux v2 bootstrap process installs Flux onto a cluster and stores (commits) its own manifests in a Git repository.&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#generic-git-server Generic Git Server], including GCP [https://cloud.google.com/source-repositories/docs Cloud Source Repositories]&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#bootstrap-with-terraform Bootstrap with Terraform]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
FLUX_GIT_USERNAME=my-git-username&lt;br /&gt;
FLUX_GIT_EMAIL=my-git-email@example.com&lt;br /&gt;
flux bootstrap git \&lt;br /&gt;
  --author-email=$FLUX_GIT_EMAIL \&lt;br /&gt;
  --url=ssh://git@github.com/$FLUX_GIT_USERNAME/gitops-istio \&lt;br /&gt;
  --branch=main \&lt;br /&gt;
  --path=clusters/my-cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
At bootstrap, Flux generates an SSH key pair and prints the public key. To sync your cluster state with Git, copy the public key and create a deploy key with write access on your GitHub repository: on GitHub, go to Settings &amp;gt; Deploy keys, click Add deploy key, tick Allow write access, paste the Flux public key and click Add key.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;[https://fluxcd.io/docs/installation/#dev-install Dev installation] does not store its own configuration state in a Git repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# option 1&lt;br /&gt;
flux install # install and upgrade&lt;br /&gt;
flux install \&lt;br /&gt;
--namespace=flux-system \&lt;br /&gt;
--network-policy=false \&lt;br /&gt;
--components=source-controller&lt;br /&gt;
&lt;br /&gt;
# option 2&lt;br /&gt;
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml&lt;br /&gt;
kustomize build https://github.com/fluxcd/flux2/manifests/install?ref=main | kubectl apply -f- # Upgrade&lt;br /&gt;
&lt;br /&gt;
# Register Git repositories and reconcile them on your cluster:&lt;br /&gt;
flux create source git podinfo \&lt;br /&gt;
  --url=https://github.com/stefanprodan/podinfo \&lt;br /&gt;
  --tag-semver=&amp;quot;&amp;gt;=4.0.0&amp;quot; \&lt;br /&gt;
  --interval=1m&lt;br /&gt;
&lt;br /&gt;
flux create kustomization podinfo-default \&lt;br /&gt;
  --source=podinfo \&lt;br /&gt;
  --path=&amp;quot;./kustomize&amp;quot; \&lt;br /&gt;
  --prune=true \&lt;br /&gt;
  --validation=client \&lt;br /&gt;
  --interval=10m \&lt;br /&gt;
  --health-check=&amp;quot;Deployment/podinfo.default&amp;quot; \&lt;br /&gt;
  --health-check-timeout=2m&lt;br /&gt;
&lt;br /&gt;
# Register Helm repositories and create Helm releases:&lt;br /&gt;
flux create source helm bitnami \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --url=https://charts.bitnami.com/bitnami&lt;br /&gt;
&lt;br /&gt;
flux create helmrelease nginx \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --release-name=nginx-ingress-controller \&lt;br /&gt;
  --target-namespace=kube-system \&lt;br /&gt;
  --source=HelmRepository/bitnami \&lt;br /&gt;
  --chart=nginx-ingress-controller \&lt;br /&gt;
  --chart-version=&amp;quot;5.x.x&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
flux uninstall --namespace=flux-system&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* [https://github.com/fluxcd/terraform-provider-flux terraform-provider-flux]&lt;br /&gt;
*[https://github.com/pio2pio/gitops-istio gitops-istio] Tutorial&lt;br /&gt;
*[https://www.youtube.com/watch?v=nGLpUCPX8JE Flux v2 Everything that you wanted to know but were afraid to ask (Stefan Prodan)] December 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bundle&lt;br /&gt;
*[https://blog.sldk.de/2021/02/introduction-to-gitops-on-kubernetes-with-flux-v2/ Introduction to GitOps on Kubernetes with Flux v2]&lt;br /&gt;
*[https://blog.sldk.de/2021/03/handling-secrets-in-flux-v2-repositories-with-sops/ Handling secrets in Flux v2 repositories with SOPS]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7070</id>
		<title>Kubernetes/Progressive Delivery Flux and Flagger</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Progressive_Delivery_Flux_and_Flagger&amp;diff=7070"/>
		<updated>2026-01-15T10:47:59Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Flux v2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://github.com/fluxcd/flux2 Flux v2] =&lt;br /&gt;
&lt;br /&gt;
[https://fluxcd.io/flux/ Flux v2 Documentation]&lt;br /&gt;
&lt;br /&gt;
Flux v2 architecture&lt;br /&gt;
:[[File:ClipCapIt-210524-232835.PNG]]&lt;br /&gt;
&lt;br /&gt;
Flux v2 - Webhooks and notifications&lt;br /&gt;
:[[File:ClipCapIt-210524-233028.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Install Flux v2 [https://fluxcd.io/docs/cmd/ &amp;lt;code&amp;gt;flux&amp;lt;/code&amp;gt;] command line =&lt;br /&gt;
* [https://fluxcd.io/docs/get-started/ Fluxv2 Get Started]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|&amp;lt;code&amp;gt;fluxctl&amp;lt;/code&amp;gt; is the command-line tool of the previous major version, Flux v1.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install or upgrade using official install.sh (option-1)&lt;br /&gt;
export FLUX_VERSION=0.37.0; curl -s https://fluxcd.io/install.sh | sudo -E bash&lt;br /&gt;
curl -s https://fluxcd.io/install.sh | sudo bash # latest&lt;br /&gt;
&lt;br /&gt;
# Install from GitHub releases (option-2)&lt;br /&gt;
REPO=fluxcd/flux2&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=$LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=flux_${VERSION}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${VERSION}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/$FILE.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/flux /usr/local/bin/flux&lt;br /&gt;
sudo install $TEMPDIR/flux /usr/local/bin/flux_${VERSION}&lt;br /&gt;
&lt;br /&gt;
# enable completions in ~/.bash_profile&lt;br /&gt;
. &amp;lt;(flux completion bash)&lt;br /&gt;
&lt;br /&gt;
# TODO: Via release binaries&lt;br /&gt;
# https://github.com/fluxcd/flux/releases&lt;br /&gt;
&lt;br /&gt;
# Pre check&lt;br /&gt;
flux check --pre&lt;br /&gt;
► checking prerequisites&lt;br /&gt;
✗ flux 0.25.1 &amp;lt;0.25.2 (new version is available, please upgrade)&lt;br /&gt;
✔ Kubernetes 1.21.5-gke.1302 &amp;gt;=1.19.0-0&lt;br /&gt;
✔ prerequisites checks passed&lt;br /&gt;
&lt;br /&gt;
# Docker images&lt;br /&gt;
docker pull fluxcd/fluxctl:1.24.3&lt;br /&gt;
docker pull ghcr.io/fluxcd/flux-cli:1.24.3 # does not work&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Cluster bootstrap =&lt;br /&gt;
The Flux v2 bootstrap process installs Flux onto a cluster and stores (commits) its own manifests in a Git repository.&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#generic-git-server Generic Git Server], including GCP [https://cloud.google.com/source-repositories/docs Cloud Source Repositories]&lt;br /&gt;
* [https://fluxcd.io/docs/installation/#bootstrap-with-terraform Bootstrap with Terraform]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
FLUX_GIT_USERNAME=my-git-username&lt;br /&gt;
FLUX_GIT_EMAIL=my-git-email@example.com&lt;br /&gt;
flux bootstrap git \&lt;br /&gt;
  --author-email=$FLUX_GIT_EMAIL \&lt;br /&gt;
  --url=ssh://git@github.com/$FLUX_GIT_USERNAME/gitops-istio \&lt;br /&gt;
  --branch=main \&lt;br /&gt;
  --path=clusters/my-cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
At bootstrap, Flux generates an SSH key pair and prints the public key. To sync your cluster state with Git, copy the public key and create a deploy key with write access on your GitHub repository: on GitHub, go to Settings &amp;gt; Deploy keys, click Add deploy key, tick Allow write access, paste the Flux public key and click Add key.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;[https://fluxcd.io/docs/installation/#dev-install Dev installation] does not store its own configuration state in a Git repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# option 1&lt;br /&gt;
flux install # install and upgrade&lt;br /&gt;
flux install \&lt;br /&gt;
--namespace=flux-system \&lt;br /&gt;
--network-policy=false \&lt;br /&gt;
--components=source-controller&lt;br /&gt;
&lt;br /&gt;
# option 2&lt;br /&gt;
kubectl apply -f https://github.com/fluxcd/flux2/releases/latest/download/install.yaml&lt;br /&gt;
kustomize build https://github.com/fluxcd/flux2/manifests/install?ref=main | kubectl apply -f- # Upgrade&lt;br /&gt;
&lt;br /&gt;
# Register Git repositories and reconcile them on your cluster:&lt;br /&gt;
flux create source git podinfo \&lt;br /&gt;
  --url=https://github.com/stefanprodan/podinfo \&lt;br /&gt;
  --tag-semver=&amp;quot;&amp;gt;=4.0.0&amp;quot; \&lt;br /&gt;
  --interval=1m&lt;br /&gt;
&lt;br /&gt;
flux create kustomization podinfo-default \&lt;br /&gt;
  --source=podinfo \&lt;br /&gt;
  --path=&amp;quot;./kustomize&amp;quot; \&lt;br /&gt;
  --prune=true \&lt;br /&gt;
  --validation=client \&lt;br /&gt;
  --interval=10m \&lt;br /&gt;
  --health-check=&amp;quot;Deployment/podinfo.default&amp;quot; \&lt;br /&gt;
  --health-check-timeout=2m&lt;br /&gt;
&lt;br /&gt;
# Register Helm repositories and create Helm releases:&lt;br /&gt;
flux create source helm bitnami \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --url=https://charts.bitnami.com/bitnami&lt;br /&gt;
&lt;br /&gt;
flux create helmrelease nginx \&lt;br /&gt;
  --interval=1h \&lt;br /&gt;
  --release-name=nginx-ingress-controller \&lt;br /&gt;
  --target-namespace=kube-system \&lt;br /&gt;
  --source=HelmRepository/bitnami \&lt;br /&gt;
  --chart=nginx-ingress-controller \&lt;br /&gt;
  --chart-version=&amp;quot;5.x.x&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
flux uninstall --namespace=flux-system&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* [https://github.com/fluxcd/terraform-provider-flux terraform-provider-flux]&lt;br /&gt;
*[https://github.com/pio2pio/gitops-istio gitops-istio] Tutorial&lt;br /&gt;
*[https://www.youtube.com/watch?v=nGLpUCPX8JE Flux v2 Everything that you wanted to know but were afraid to ask (Stefan Prodan)] December 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bundle&lt;br /&gt;
*[https://blog.sldk.de/2021/02/introduction-to-gitops-on-kubernetes-with-flux-v2/ Introduction to GitOps on Kubernetes with Flux v2]&lt;br /&gt;
*[https://blog.sldk.de/2021/03/handling-secrets-in-flux-v2-repositories-with-sops/ Handling secrets in Flux v2 repositories with SOPS]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7069</id>
		<title>Ubuntu Setup</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7069"/>
		<updated>2026-01-13T07:25:50Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Image converter */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you use Ubuntu for various Linux projects, you will find that it comes pre-installed with many packages. On the other hand, installing just the minimal version seems too extreme. Therefore I started maintaining a list of unnecessary packages and a one-liner that removes them all. Please feel free to modify it for your needs.&lt;br /&gt;
&lt;br /&gt;
= Default partitioning =&lt;br /&gt;
On virtual systems the schema below will be applied, e.g. on laptops:&lt;br /&gt;
:[[File:ClipCapIt-200620-131502.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Eg. for 4G memory and 50G storage system&lt;br /&gt;
&lt;br /&gt;
/dev/mapper/ubuntu--vg-root        mount_point: /&lt;br /&gt;
/dev/mapper/ubuntu--vg-swap_1&lt;br /&gt;
/dev/sda&lt;br /&gt;
 /dev/sda1 (50G)&lt;br /&gt;
&lt;br /&gt;
LVM VG ubuntu-vg, LV root    as ext4&lt;br /&gt;
LVM VG ubuntu-vg, LV swap_1 as swap&lt;br /&gt;
&lt;br /&gt;
#Boot device:&lt;br /&gt;
/dev/mapper/ubuntu--vg-root&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As a good, handy practice you may create a 100G virtual disk that is thin-provisioned, then create two LVs for the root and swap partitions. Don't utilise all the space at once; extend the partitions when needed. This method eliminates adding new disks to VMs, saving time and effort.&lt;br /&gt;
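&lt;br /&gt;
Extending a thin-provisioned volume later might look like the sketch below; the VG/LV names follow the layout above and the sizes are examples only.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Grow the root LV by 10G from the VG's free space...&lt;br /&gt;
sudo lvextend -L +10G /dev/ubuntu-vg/root&lt;br /&gt;
# ...then grow the ext4 filesystem online to fill the LV&lt;br /&gt;
sudo resize2fs /dev/mapper/ubuntu--vg-root&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;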
&lt;br /&gt;
&lt;br /&gt;
Example LVM setup, here using 30G Physical Volume(99.9% used), 1 Volume Group and 2 Logical Volumes (root and swap). &lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo pvs&lt;br /&gt;
  PV         VG        Fmt  Attr PSize   PFree &lt;br /&gt;
  /dev/sda1  ubuntu-vg lvm2 a--  &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo vgs&lt;br /&gt;
  VG        #PV #LV #SN Attr   VSize   VFree &lt;br /&gt;
  ubuntu-vg   1   2   0 wz--n- &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo lvs&lt;br /&gt;
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert&lt;br /&gt;
  root   ubuntu-vg -wi-ao----  28.94g                                                    &lt;br /&gt;
  swap_1 ubuntu-vg -wi-ao---- 976.00m                                                    &lt;br /&gt;
piotr@u18:~$&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ lsblk /dev/sda --fs&lt;br /&gt;
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT&lt;br /&gt;
sda                                                                            &lt;br /&gt;
└─sda1                LVM2_member       rP18Kb-Q12j-wjVf-C1iV-uy42-BUJD-aWFuO7 &lt;br /&gt;
  ├─ubuntu--vg-root   ext4              fad04a3b-5fa3-4a03-bbd6-24a93cda1eb3   /&lt;br /&gt;
  └─ubuntu--vg-swap_1 swap              47cd084b-89b0-4cd5-bdb8-367238842ba1   [SWAP]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= List of unnecessary packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove libreoffice-* #Remove LibreOffice&lt;br /&gt;
sudo apt-get remove unity-lens-* #This package contains photos scopes which allow Unity to search for local and online photos.&lt;br /&gt;
sudo apt-get remove shotwell* #Photo organizer&lt;br /&gt;
sudo apt-get remove simple-scan #Scanner software&lt;br /&gt;
sudo apt-get remove empathy* #Internet messaging ~13M&lt;br /&gt;
sudo apt-get remove thunderbird* #Email client ~61M&lt;br /&gt;
sudo apt-get remove unity-scope-gdrive #Google Drive scope for Unity ~116KB&lt;br /&gt;
sudo apt-get remove cheese* #Cheese Webcam Booth - webcam software&lt;br /&gt;
sudo apt-get remove brasero* #Brasero Disc Burner ~6.5MB&lt;br /&gt;
sudo apt-get remove gnome-bluetooth #Package to manipulate Bluetooth devices using the Gnome desktop ~2MB&lt;br /&gt;
sudo apt-get remove gnome-orca #Orca Screen Reader - provides access to graphical desktop environments via synthesised speech and/or refreshable braille&lt;br /&gt;
sudo apt-get remove unity-webapps-common #Amazon Unity WebApp integration scripts ~133KB&lt;br /&gt;
sudo apt-get remove ibus-pinyin #IBus Bopomofo Preferences - ibus-pinyin is a IBus based IM engine for Chinese ~1.4MB&lt;br /&gt;
sudo apt-get remove printer-driver-foo2zjs* #Reactivate HP LaserJet 1018/1020 after reloading paper ~3.2MB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remove unnecessary packages - one liner =&lt;br /&gt;
;Ubuntu 12, 14, 16&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get remove libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* unity-scope-gdrive cheese*\&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca unity-webapps-common ibus-pinyin printer-driver-foo2zjs*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 18. It's recommended to choose ''Minimal Install'', so most of the packages below won't get installed.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get purge libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca ibus-pinyin printer-driver-foo2zjs* xul-ext-ubufox speech-dispatcher* \&lt;br /&gt;
rhythmbox* printer-driver-* mythes-en-us mobile-broadband-provider-inf* \&lt;br /&gt;
evolution-data-server* espeak-ng-data:amd64 bluez* ubuntu-web-launchers \&lt;br /&gt;
transmission-*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get purge xul-ext-ubufox                           # Canonical FF customizations for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-mahjongg gnome-mines gnome-sudoku # games, works for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-video-effects gstreamer1.0-* &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; XTREME&lt;br /&gt;
Uninstall the Ubuntu software update notifier&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove update-notifier&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Uninstall locales - unused languages etc =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install localepurge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Set apt-get to not install recommended and suggested packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo bash -c 'cat &amp;gt; /etc/apt/apt.conf.d/01no-recommend &amp;lt;&amp;lt; EOF&lt;br /&gt;
APT::Install-Recommends &amp;quot;0&amp;quot;;&lt;br /&gt;
APT::Install-Suggests &amp;quot;0&amp;quot;;&lt;br /&gt;
EOF'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see if apt reads this, run the following on the command line (as root or a regular user):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt-config dump | grep -e Recommends -e Suggests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Install necessary packages =&lt;br /&gt;
&lt;br /&gt;
Adobe Flash Player&lt;br /&gt;
 sudo apt-get install flashplugin-installer&lt;br /&gt;
&lt;br /&gt;
Java JRE&lt;br /&gt;
This will install the default Java version for your distro plus the IcedTea plugin for using Firefox with Java&lt;br /&gt;
 sudo apt-get install default-jre icedtea-plugin&lt;br /&gt;
&lt;br /&gt;
Unity Settings&lt;br /&gt;
 sudo apt-get install unity-control-center&lt;br /&gt;
&lt;br /&gt;
Opera&lt;br /&gt;
&lt;br /&gt;
Add Opera repository &amp;lt;code&amp;gt;'''deb &amp;lt;nowiki&amp;gt;http://deb.opera.com/opera/&amp;lt;/nowiki&amp;gt; stable non-free'''&amp;lt;/code&amp;gt; to the apt-get source list in &amp;lt;code&amp;gt;/etc/apt/sources.list&amp;lt;/code&amp;gt;. Then import a public PGP repository key.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;deb http://deb.opera.com/opera/ stable non-free&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -qO - http://deb.opera.com/archive.key | sudo apt-key add -&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install opera&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Silverlight&lt;br /&gt;
&lt;br /&gt;
Pipelight has been released and we can use it for Silverlight as the best alternative to Moonlight.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-add-repository ppa:ehoover/compholio&lt;br /&gt;
sudo apt-add-repository ppa:mqchael/pipelight&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install pipelight&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= GUI tools =&lt;br /&gt;
* [https://github.com/hluk/CopyQ/releases copyQ] clipboard manager&lt;br /&gt;
* VisualVM&lt;br /&gt;
&lt;br /&gt;
= Customise Ubuntu =&lt;br /&gt;
==Fix Ubuntu Unity Dash Search for Applications and Files==&lt;br /&gt;
 sudo apt-get install unity-lens-files unity-lens-applications #log out and log back in required&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;lt;17.10 missing Control Center==&lt;br /&gt;
 sudo apt-get install unity-control-center --no-install-recommends&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;gt;18.04 missing System Settings==&lt;br /&gt;
 sudo apt install gnome-control-center&lt;br /&gt;
&lt;br /&gt;
==Remove background wallpaper ==&lt;br /&gt;
Tested on Ubuntu 14,16,18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.background active true&lt;br /&gt;
gsettings set org.gnome.desktop.background draw-background false        #disable &lt;br /&gt;
gsettings set org.gnome.desktop.background primary-color &amp;quot;#000000&amp;quot;      #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background secondary-color &amp;quot;#000000&amp;quot;    #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background color-shading-type &amp;quot;solid&amp;quot;   #set solid colour&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///dev/null #remove wallpaper; not perfect, but nothing else worked in U15.10&lt;br /&gt;
gsettings set com.canonical.unity-greeter draw-user-backgrounds false   #disable; did not work&lt;br /&gt;
&lt;br /&gt;
# Reset background picture to origin, U15.10&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///usr/share/backgrounds/warty-final-ubuntu.png &lt;br /&gt;
&lt;br /&gt;
# Sets Unity greeter background, &amp;lt;17.04&lt;br /&gt;
gsettings set com.canonical.unity-greeter background /usr/share/backgrounds/warty-final-ubuntu.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Disable screen lock out==&lt;br /&gt;
&amp;lt;code&amp;gt;dconf&amp;lt;/code&amp;gt; is a legacy tool to configure &amp;lt;tt&amp;gt;gnome&amp;lt;/tt&amp;gt;; nowadays the more modern way is to use &amp;lt;code&amp;gt;gsettings&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/idle-activation-enabled false  #gnome&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/lock-enabled            false&lt;br /&gt;
&lt;br /&gt;
# Unity - Ubuntu 14.04, 16.04&lt;br /&gt;
gsettings set org.gnome.desktop.session     idle-delay   0      #disable the screen blackout (0 to disable)&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false  #disable the screen lock&lt;br /&gt;
&lt;br /&gt;
# VirtualBox &amp;gt; Ubuntu 18.04 Disabling Xserver screen timeouts&lt;br /&gt;
xset s off     # Xserver s parameter sets screensaver to off&lt;br /&gt;
xset s noblank # prevent the display from blanking &lt;br /&gt;
xset -dpms     # prevent the monitor's DPMS energy saver from kicking in&lt;br /&gt;
&lt;br /&gt;
# Gnome - Ubuntu 18.04 LTS, Settings &amp;gt; Power &amp;gt; Blank screen &amp;gt; set to: Never&lt;br /&gt;
gsettings get org.gnome.desktop.lockdown    disable-lock-screen      # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.lockdown    disable-lock-screen true # set disabled&lt;br /&gt;
gsettings get org.gnome.desktop.screensaver lock-enabled             # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false       # set disabled&lt;br /&gt;
dconf write  /org/gnome/desktop/screensaver/lock-enabled false       # set disabled using dconf&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false # some say it's last resort :)&lt;br /&gt;
&lt;br /&gt;
# Power management&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active true  #set gnome to be the default power management run&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false #turn off power management&lt;br /&gt;
&lt;br /&gt;
# last resort as it was a bug in Ubuntu 11.10 with DPMS&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false&lt;br /&gt;
gsettings set org.gnome.desktop.session idle-delay 2400&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Verify by navigating in &amp;lt;tt&amp;gt;dconf-editor&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/org/gnome/desktop/screensaver/&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Change number of workspaces==&lt;br /&gt;
To get the current values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/hsize&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/vsize&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To set new values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/compiz/profiles/unity/plugins/core/hsize 2&lt;br /&gt;
# or&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ hsize 4&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ vsize 4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Clean up motd messages ==&lt;br /&gt;
At login, Ubuntu displays a number of standard messages that take up terminal space and can make you lose the context of previous operations.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-134-generic x86_64)&lt;br /&gt;
&lt;br /&gt;
 * Documentation:  https://help.ubuntu.com&lt;br /&gt;
 * Management:     https://landscape.canonical.com&lt;br /&gt;
 * Support:        https://ubuntu.com/advantage&lt;br /&gt;
&lt;br /&gt;
  Get cloud support with Ubuntu Advantage Cloud Guest:&lt;br /&gt;
    http://www.ubuntu.com/business/services/cloud&lt;br /&gt;
&lt;br /&gt;
1 package can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
New release '18.04.1 LTS' available.&lt;br /&gt;
Run 'do-release-upgrade' to upgrade to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Fri Aug 31 12:11:28 2018 from 10.0.2.2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is managed by files in &amp;lt;code&amp;gt;/etc/update-motd.d/&amp;lt;/code&amp;gt;, so deleting them removes the clutter from the screen.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /etc/update-motd.d/&lt;br /&gt;
00-header             51-cloudguest         91-release-upgrade    98-fsck-at-reboot     &lt;br /&gt;
10-help-text          90-updates-available  97-overlayroot        98-reboot-required &lt;br /&gt;
&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1022-azure x86_64)&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
sudo rm /etc/update-motd.d/{10-help-text,50-landscape-sysinfo,50-motd-news,51-cloudguest,80-livepatch,95-hwe-eol}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
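A reversible alternative (my addition, not from the original notes): the motd scripts only run when they are executable, so clearing the execute bit silences them without deleting anything. The sketch below demonstrates this on a scratch directory; on the real system the command would be &amp;lt;code&amp;gt;sudo chmod -x /etc/update-motd.d/*&amp;lt;/code&amp;gt;.&lt;br /&gt;

```shell
# run-parts skips non-executable files, so clearing the execute bit
# disables a motd script without deleting it (easy to undo with chmod +x).
# Demonstrated on a scratch directory instead of /etc/update-motd.d/
mkdir -p /tmp/motd-demo
touch /tmp/motd-demo/10-help-text /tmp/motd-demo/50-motd-news
chmod +x /tmp/motd-demo/*
chmod -x /tmp/motd-demo/*
ls -l /tmp/motd-demo
```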
&lt;br /&gt;
&lt;br /&gt;
This cuts the output down to the message below (Ubuntu 18.04 on AWS):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
&lt;br /&gt;
0 packages can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Thu Jan 31 17:09:38 2019 from 10.10.11.11&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Useful setups =&lt;br /&gt;
== Terminator Grid ==&lt;br /&gt;
&lt;br /&gt;
Edit your Terminator config file&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=toml&amp;gt;&lt;br /&gt;
vim ~/.config/terminator/config&lt;br /&gt;
[global_config]&lt;br /&gt;
  enabled_plugins = LaunchpadBugURLHandler, LaunchpadCodeURLHandler, APTURLHandler, InsertTermName, CurrDirOpen&lt;br /&gt;
[keybindings]&lt;br /&gt;
[profiles]&lt;br /&gt;
  [[default]]&lt;br /&gt;
    scrollback_lines = 10000&lt;br /&gt;
[layouts]&lt;br /&gt;
  [[default]]&lt;br /&gt;
    [[[window0]]]&lt;br /&gt;
      type = Window&lt;br /&gt;
      parent = &amp;quot;&amp;quot;&lt;br /&gt;
    [[[child1]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = window0&lt;br /&gt;
      profile = default&lt;br /&gt;
  [[2x3-grid]]&lt;br /&gt;
    [[[child0]]]&lt;br /&gt;
      type = Window&lt;br /&gt;
      parent = &amp;quot;&amp;quot;&lt;br /&gt;
      order = 0&lt;br /&gt;
      position = 0:0&lt;br /&gt;
      maximised = True&lt;br /&gt;
    [[[child1]]]&lt;br /&gt;
      type = VPaned&lt;br /&gt;
      parent = child0&lt;br /&gt;
      order = 0&lt;br /&gt;
      position = 400&lt;br /&gt;
    [[[child2]]]&lt;br /&gt;
      type = HPaned&lt;br /&gt;
      parent = child1&lt;br /&gt;
      order = 0&lt;br /&gt;
      position = 50%&lt;br /&gt;
    [[[terminal3]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child2&lt;br /&gt;
      order = 0&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
    [[[terminal4]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child2&lt;br /&gt;
      order = 1&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
    [[[child5]]]&lt;br /&gt;
      type = VPaned&lt;br /&gt;
      parent = child1&lt;br /&gt;
      order = 1&lt;br /&gt;
      position = 400&lt;br /&gt;
    [[[child6]]]&lt;br /&gt;
      type = HPaned&lt;br /&gt;
      parent = child5&lt;br /&gt;
      order = 0&lt;br /&gt;
      position = 50%&lt;br /&gt;
    [[[terminal7]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child6&lt;br /&gt;
      order = 0&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
    [[[terminal8]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child6&lt;br /&gt;
      order = 1&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
    [[[child9]]]&lt;br /&gt;
      type = HPaned&lt;br /&gt;
      parent = child5&lt;br /&gt;
      order = 1&lt;br /&gt;
      position = 50%&lt;br /&gt;
    [[[terminal10]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child9&lt;br /&gt;
      order = 0&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
    [[[terminal11]]]&lt;br /&gt;
      type = Terminal&lt;br /&gt;
      parent = child9&lt;br /&gt;
      order = 1&lt;br /&gt;
      profile = default&lt;br /&gt;
      command = cd /home/piotr; bash&lt;br /&gt;
[plugins]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now create the new launcher and update the database.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=toml&amp;gt;&lt;br /&gt;
vim ~/.local/share/applications/terminator-grid.desktop&lt;br /&gt;
&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=Terminator-Grid&lt;br /&gt;
Comment=Multiple terminals in one window&lt;br /&gt;
TryExec=terminator&lt;br /&gt;
# DEFAULT ACTION: Now opens the grid&lt;br /&gt;
Exec=terminator --layout 2x3-grid&lt;br /&gt;
Icon=terminator&lt;br /&gt;
Type=Application&lt;br /&gt;
Categories=GNOME;GTK;Utility;TerminalEmulator;System;&lt;br /&gt;
StartupNotify=true&lt;br /&gt;
X-Ubuntu-Gettext-Domain=terminator&lt;br /&gt;
Keywords=terminal;shell;prompt;command;commandline;&lt;br /&gt;
MimeType=x-scheme-handler/terminal;&lt;br /&gt;
# Define the right-click options&lt;br /&gt;
Actions=NewWindow;&lt;br /&gt;
&lt;br /&gt;
[Desktop Action NewWindow]&lt;br /&gt;
Name=Open a Single Window&lt;br /&gt;
# RIGHT-CLICK ACTION: Opens standard terminator&lt;br /&gt;
Exec=terminator&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now update the database, then restart GNOME Shell if the icon doesn't appear in search: press Alt + F2, type r, and hit Enter. (Note: this only works on X11, not Wayland; on Wayland, logging out and back in works.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
update-desktop-database ~/.local/share/applications/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Search and Pin:&lt;br /&gt;
* Open your app launcher (Super key).&lt;br /&gt;
* Search for &amp;quot;Terminator&amp;quot;.&lt;br /&gt;
* Right-click it and select Add to Favorites.&lt;br /&gt;
&lt;br /&gt;
== Image converter ==&lt;br /&gt;
nautilus-image-converter is a Nautilus extension to mass resize or rotate images. It adds two context menu items in Nautilus so you can right-click and choose &amp;quot;Resize Image&amp;quot; or &amp;quot;Rotate Image&amp;quot;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 24.04 with Gnome&lt;br /&gt;
sudo apt-get install nautilus-image-converter&lt;br /&gt;
&lt;br /&gt;
# Restart to see the new context menu&lt;br /&gt;
nautilus -q&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Call screen saver from a terminal to blank all screens ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 18.04 with Gnome&lt;br /&gt;
sudo apt-get install gnome-screensaver&lt;br /&gt;
gnome-screensaver-command -a #controls GNOME screensaver, -a activate (blank the screen)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create application launcher ==&lt;br /&gt;
;Ubuntu 18.04&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the GNOME-panel toolset&lt;br /&gt;
sudo apt-get install --no-install-recommends gnome-panel&lt;br /&gt;
&lt;br /&gt;
# Every user launcher&lt;br /&gt;
sudo gnome-desktop-item-edit /usr/share/applications/VisualVM.desktop --create-new&lt;br /&gt;
&lt;br /&gt;
# Local user only; the filename by default is Name-of-application.desktop&lt;br /&gt;
gnome-desktop-item-edit ~/.local/share/applications --create-new &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190807-080016.PNG]]&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 19.10, 20.04&lt;br /&gt;
In the above releases &amp;lt;code&amp;gt;gnome-desktop-item-edit&amp;lt;/code&amp;gt; has been removed from the &amp;lt;code&amp;gt;gnome-panel&amp;lt;/code&amp;gt; package; as an alternative, &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files can be created manually.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /usr/share/applications/APPNAME.desktop&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=&amp;lt;NAME OF THE APPLICATION&amp;gt;&lt;br /&gt;
Comment=&amp;lt;A SHORT DESCRIPTION&amp;gt;&lt;br /&gt;
Exec=&amp;lt;COMMAND-OR-FULL-PATH-TO-LAUNCH-THE-APPLICATION&amp;gt;&lt;br /&gt;
Type=Application&lt;br /&gt;
Terminal=false&lt;br /&gt;
Icon=&amp;lt;ICON NAME OR PATH TO ICON&amp;gt;&lt;br /&gt;
NoDisplay=false&lt;br /&gt;
Keywords=&amp;lt;eg. sql&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It's optional, but you may need to right-click the file and select 'Allow Launching', in addition to setting executable permissions. Usual locations of &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/share/applications/&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/var/lib/snapd/desktop/applications/&amp;lt;/code&amp;gt; for snap applications&lt;br /&gt;
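The executable bit and the 'Allow Launching' toggle can also be set from a terminal. A minimal sketch using a hypothetical launcher path; &amp;lt;code&amp;gt;gio&amp;lt;/code&amp;gt; ships with GLib, and the &amp;lt;code&amp;gt;metadata::trusted&amp;lt;/code&amp;gt; attribute is, to my understanding, what the Files toggle sets:&lt;br /&gt;

```shell
# Hypothetical launcher path, for illustration only
DESKTOP_FILE=/tmp/example.desktop
printf '[Desktop Entry]\nName=Example\nExec=true\nType=Application\n' > "$DESKTOP_FILE"
# Executable bit, needed before GNOME honours the launcher
chmod +x "$DESKTOP_FILE"
# 'gio set' writes the metadata::trusted flag ("Allow Launching" in Files);
# it can fail on filesystems without user-xattr support, hence the guard
gio set "$DESKTOP_FILE" metadata::trusted true 2>/dev/null || true
```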
&lt;br /&gt;
== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet gnome-shell-system-monitor-applet] - cpu, memory indicators ==&lt;br /&gt;
System information such as memory usage, cpu usage, network rates and more can be displayed in the notification area in GNOME Shell.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System-monitor extensions:&lt;br /&gt;
* [https://extensions.gnome.org/extension/120/system-monitor/ system-monitor] by paradoxxxzero on [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet github] supports Gnome-shell up to v40. It appears to be an abandoned project.&lt;br /&gt;
* [https://extensions.gnome.org/extension/3010/system-monitor-next/ system-monitor-next] by mgalgs on [https://github.com/mgalgs/gnome-shell-system-monitor-applet github] supports Gnome-shell v40+; it's a fork of the above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All extensions:&lt;br /&gt;
* https://extensions.gnome.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The current version of the Firefox browser is packaged as a snap. One of the issues with this is that it cannot work with the Gnome Extensions website.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu 24.04 (June 2024)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ubuntu version tested: v20/22/24 LTS&lt;br /&gt;
lsb_release -d&lt;br /&gt;
Description:	Ubuntu 24.04.3 LTS&lt;br /&gt;
&lt;br /&gt;
gnome-shell --version&lt;br /&gt;
GNOME Shell 46.0&lt;br /&gt;
&lt;br /&gt;
# Install the Gnome-Shell-Extension &amp;amp; Manager&lt;br /&gt;
sudo apt install gnome-shell-extensions               # Ubuntu 20.04 LTS already has this package, 24.04 needs installing it&lt;br /&gt;
sudo apt install gnome-shell-extension-manager        # Ubuntu 22.04|24.04 LTS&lt;br /&gt;
&lt;br /&gt;
# 1. Open `Extensions` app, turn on &amp;quot;Use Extensions&amp;quot;. It is already turned on in Ubuntu 24.04.3 LTS.&lt;br /&gt;
# 2. Open Browse tab &amp;gt; search for 'system-monitor-next' by mgalgs, click &amp;quot;Install&amp;quot;.&lt;br /&gt;
# 3. &amp;quot;cpu/mem/net&amp;quot; indicators will appear in the system tray.&lt;br /&gt;
&lt;br /&gt;
# Additional steps for Ubuntu &amp;lt; 24.04&lt;br /&gt;
sudo apt install gnome-tweaks                         # GUI to manage gnome-extensions&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
sudo apt install gnome-shell-extension-system-monitor # requires logging out afterwards&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Download the extension from&lt;br /&gt;
## https://extensions.gnome.org/extension/120/system-monitor/&lt;br /&gt;
&lt;br /&gt;
# Never worked out how to use this direct download and install via 'gnome-extensions install &amp;lt;extension_name&amp;gt;'&lt;br /&gt;
## wget https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/archive/v38.zip&lt;br /&gt;
## gnome-extensions install &amp;lt;system-monitor@paradoxxx.zero.gmail.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Enable extension using cli&lt;br /&gt;
gnome-extensions enable system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
gnome-extensions list --user&lt;br /&gt;
clipboard-indicator@tudmotu.com&lt;br /&gt;
system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-210105-084527.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/issues/737#issuecomment-1230654455 Ubuntu 22.04 workaround for the OUTDATED extension] ===&lt;br /&gt;
{{Note|Workaround still needed in August 2022}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
git clone https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet.git&lt;br /&gt;
cd gnome-shell-system-monitor-applet # commit b359d88 verified&lt;br /&gt;
vi system-monitor@paradoxxx.zero.gmail.com/metadata.json &lt;br /&gt;
# | change &amp;quot;version&amp;quot;: -1 to &amp;quot;version&amp;quot;: 42&lt;br /&gt;
make install&lt;br /&gt;
# log out and back in (required)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Snapd - Chromium =&lt;br /&gt;
Since U19+, Chromium is installed as a snap package. This is a confined installation with access to only certain directories. When working with AWS we may need access to the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder to retrieve an EC2 machine password. Access to this folder is denied, but we can bind-mount the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder into the snap container directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ snap list chromium &lt;br /&gt;
Name      Version        Rev   Tracking       Publisher   Notes&lt;br /&gt;
chromium  86.0.4240.111  1373  latest/stable  canonical✓  -&lt;br /&gt;
&lt;br /&gt;
# create a mount point in chromium's snap $HOME dir&lt;br /&gt;
mkdir ~/snap/chromium/current/.ssh&lt;br /&gt;
sudo mount --bind ~/.ssh/ ~/snap/chromium/current/.ssh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
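The bind mount above does not survive a reboot. A sketch (my assumption, not part of the original notes) of persisting it via an &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt; entry; the line is written to a scratch file first so it can be reviewed before appending:&lt;br /&gt;

```shell
# Compose the fstab bind-mount entry (same paths as in the example above)
FSTAB_LINE="$HOME/.ssh $HOME/snap/chromium/current/.ssh none bind 0 0"
echo "$FSTAB_LINE" > /tmp/fstab.append
cat /tmp/fstab.append
# Review the entry, then apply and mount it:
#   sudo tee -a /etc/fstab < /tmp/fstab.append
#   sudo mount -a && findmnt ~/snap/chromium/current/.ssh
```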
&lt;br /&gt;
= Taking screenshots =&lt;br /&gt;
In Ubuntu 20.04, Shutter is not part of the default repositories. It can be added via a PPA:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo add-apt-repository -y ppa:linuxuprising/shutter&lt;br /&gt;
sudo apt-get install shutter&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Audio - [https://rastating.github.io/setting-default-audio-device-in-ubuntu-18-04/ set defaults] =&lt;br /&gt;
To manage settings via a GUI you can install [https://freedesktop.org/software/pulseaudio/pavucontrol/ PulseAudio Volume Control] &amp;lt;code&amp;gt;pavucontrol&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# install&lt;br /&gt;
sudo apt install pavucontrol # Ubuntu 20.04&lt;br /&gt;
# run&lt;br /&gt;
pavucontrol&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default output/input device. In Ubuntu, PulseAudio is used to control audio devices. It uses the following configuration files:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/etc/pulse/default.pa # system wide&lt;br /&gt;
~/.config/pulse       # user configuration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set defaults&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List devices: modules, sinks, sources, sink-inputs, source-outputs, clients, samples, cards&lt;br /&gt;
# sinks - outputs, sink-inputs, sources - all input/output including RUNNING and SUSPENDED devices&lt;br /&gt;
$ pactl list short sources | column -t&lt;br /&gt;
5   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_5__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
6   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_4__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
7   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_3__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
8   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__sink.monitor    module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
9   alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__source           module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
10  alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_6__source         module-alsa-card.c  s16le  4ch  48000Hz  SUSPENDED&lt;br /&gt;
15  alsa_output.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.analog-stereo.monitor     module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  SUSPENDED&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  SUSPENDED&lt;br /&gt;
20  alsa_input.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.iec958-stereo              module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
&lt;br /&gt;
# Set default output device. Tab autocompletion should work (U20.04)&lt;br /&gt;
pactl set-default-sink alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
# Set default input device&lt;br /&gt;
pactl set-default-source alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Test: play some audio, then run the command below. IDLE means in use&lt;br /&gt;
pactl list short sources | column -t | grep -e RUNNING -e IDLE&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  IDLE&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  RUNNING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make it permanent by setting the default devices in the PulseAudio system configuration file:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Output device (the sink name from the listing above)&lt;br /&gt;
OUTPUT_DEVICE=alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-sink\) output/\1 ${OUTPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa # remove '-i' to test before applying&lt;br /&gt;
# Input device&lt;br /&gt;
INPUT_DEVICE=alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-source\) input/\1 ${INPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa&lt;br /&gt;
&lt;br /&gt;
vi /etc/pulse/default.pa # make sure lines below are in place&lt;br /&gt;
### Make some devices default&lt;br /&gt;
set-default-sink   alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
set-default-source  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Delete local user profile and restart system, after boot new defaults should be set&lt;br /&gt;
rm -r ~/.config/pulse&lt;br /&gt;
&lt;br /&gt;
# After reboot, defaults should be set&lt;br /&gt;
cat ~/.config/pulse/*default*&lt;br /&gt;
alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
PulseAudio cli&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pacmd&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; help # lists all available commands&lt;br /&gt;
&lt;br /&gt;
pulseaudio --check # Check if any pulseaudio instance is running. It normally prints no output, just exit code. 0 means running&lt;br /&gt;
pulseaudio --kill  # kill, then --start&lt;br /&gt;
pulseaudio -D      # start pulseaudio as a daemon&lt;br /&gt;
# | using /etc/pulse/daemon.conf&lt;br /&gt;
&lt;br /&gt;
# Pulseaudio is a user service&lt;br /&gt;
systemctl --user restart pulseaudio.service&lt;br /&gt;
systemctl --user restart pulseaudio.socket&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have a Dell D6000 port replicator that randomly disconnects, which causes audio to switch to the newly connected device, i.e. itself. As a workaround, commenting out the lines below stops this behaviour.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /etc/pulse/default.pa&lt;br /&gt;
### Use hot-plugged devices like Bluetooth or USB automatically (LP: #1702794)&lt;br /&gt;
# .ifexists module-switch-on-connect.so&lt;br /&gt;
# load-module module-switch-on-connect&lt;br /&gt;
# .endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Input devices =&lt;br /&gt;
The motivation is to enable horizontal scrolling in Ubuntu 20.04 using a Perixx Gaming Mouse Mx2000.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
xinput list&lt;br /&gt;
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]&lt;br /&gt;
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ Holtek USB Gaming Mouse                 	id=11	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Mouse             	id=14	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Touchpad          	id=15	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ TPPS/2 Elan TrackPoint                  	id=19	[slave  pointer  (2)]&lt;br /&gt;
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]&lt;br /&gt;
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Power Button                            	id=6	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]&lt;br /&gt;
    ↳ CHICONY HP Basic USB Keyboard           	id=9	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=10	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated C         	id=12	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated I         	id=13	[slave  keyboard (3)]&lt;br /&gt;
    ↳ sof-hda-dsp Headset Jack                	id=16	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Intel HID events                        	id=17	[slave  keyboard (3)]&lt;br /&gt;
    ↳ AT Translated Set 2 keyboard            	id=18	[slave  keyboard (3)]&lt;br /&gt;
    ↳ ThinkPad Extra Buttons                  	id=20	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=21	[slave  keyboard (3)]&lt;br /&gt;
&lt;br /&gt;
# test mouse aka Virtual core pointer&lt;br /&gt;
xinput test 11&lt;br /&gt;
motion a[0]=2023  # &amp;lt;- cursor moving&lt;br /&gt;
motion a[0]=2024 a[1]=1411 &lt;br /&gt;
motion a[3]=19545 # &amp;lt;- scroll down &lt;br /&gt;
button press   5 &lt;br /&gt;
button release 5 &lt;br /&gt;
&lt;br /&gt;
# test 'virtual core keyboard' aka additional programmable buttons&lt;br /&gt;
## '10' - this virtual keyboard for all buttons except the scrolling wheel&lt;br /&gt;
xinput test 10&lt;br /&gt;
key press   37&lt;br /&gt;
key press   38&lt;br /&gt;
&lt;br /&gt;
## '21' - this is scrolling wheel buttons left/right, not scrolling itself&lt;br /&gt;
xinput test 21&lt;br /&gt;
key press   248 &lt;br /&gt;
key release 248 &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List the properties of a device; we want to see the horizontal scroll wheel buttons:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ xinput list-props  21&lt;br /&gt;
Device 'Holtek USB Gaming Mouse':&lt;br /&gt;
	Device Enabled (169):	1&lt;br /&gt;
	Coordinate Transformation Matrix (171):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000&lt;br /&gt;
	libinput Send Events Modes Available (291):	1, 0&lt;br /&gt;
	libinput Send Events Mode Enabled (292):	0, 0&lt;br /&gt;
	libinput Send Events Mode Enabled Default (293):	0, 0&lt;br /&gt;
	Device Node (294):	&amp;quot;/dev/input/event10&amp;quot;&lt;br /&gt;
	Device Product ID (295):	1241, 41063&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
[[Category:linux]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7068</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7068"/>
		<updated>2025-12-03T06:38:48Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure you shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # Jump To A Directory That Contains foo. It takes multiple arguments to do fuzzy search&lt;br /&gt;
jc bar     # jump to a child directory (sub-directory of current directory) rather than typing out the full name&lt;br /&gt;
jo music   # Open File Manager To Directories (instead of jumping)&lt;br /&gt;
jco images # Opening a file manager to a child directory is also supported&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
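The sourcing step from the README above can be scripted; a minimal idempotent sketch for Bash:&lt;br /&gt;

```shell
# Append the autojump hook to ~/.bashrc only if it is not already there
# (the sourced path is taken from the Debian README above)
grep -q 'autojump.sh' ~/.bashrc 2>/dev/null || \
  echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
grep 'autojump.sh' ~/.bashrc
```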
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into Your Shell to load direnv every time it starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
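A worked example of the &amp;lt;code&amp;gt;.envrc&amp;lt;/code&amp;gt; flow described in the comments above; the project path and variables are illustrative:&lt;br /&gt;

```shell
# Create a throwaway project with an .envrc
mkdir -p /tmp/demo-project && cd /tmp/demo-project
cat > .envrc <<'EOF'
export DB_HOST=localhost
export DB_PORT=5432
EOF
# Authorise it (direnv refuses to load unreviewed files);
# guarded in case direnv is not installed
command -v direnv >/dev/null && direnv allow . || true
cat .envrc
```

Entering the directory afterwards (with the shell hook in place) exports the variables; editing &lt;code&gt;.envrc&lt;/code&gt; requires running &lt;code&gt;direnv allow&lt;/code&gt; again.&lt;br /&gt;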
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Official install via bash pipe&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs in ~/.opencode and adds 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode via GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public) - don't worry. Follow the process and it will get you to log in to the account covered by your paid subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/GLOBAL_RULES.md (quoted delimiter: the file content is written literally)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with &amp;lt;REDACTED&amp;gt;.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Note: after this, `opencode models` lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&lt;br /&gt;
# Verify auth credentials file&lt;br /&gt;
cat ~/.local/share/opencode/auth.json&lt;br /&gt;
&lt;br /&gt;
# List models&lt;br /&gt;
opencode models&lt;br /&gt;
github-copilot/claude-haiku-4.5&lt;br /&gt;
github-copilot/claude-opus-4.5&lt;br /&gt;
github-copilot/claude-opus-41&lt;br /&gt;
github-copilot/claude-sonnet-4&lt;br /&gt;
github-copilot/claude-sonnet-4.5&lt;br /&gt;
github-copilot/gemini-2.5-pro&lt;br /&gt;
github-copilot/gemini-3-pro-preview&lt;br /&gt;
github-copilot/gpt-4.1&lt;br /&gt;
github-copilot/gpt-4o&lt;br /&gt;
github-copilot/gpt-5&lt;br /&gt;
github-copilot/gpt-5-codex&lt;br /&gt;
github-copilot/gpt-5-mini&lt;br /&gt;
github-copilot/gpt-5.1&lt;br /&gt;
github-copilot/gpt-5.1-codex&lt;br /&gt;
github-copilot/gpt-5.1-codex-mini&lt;br /&gt;
github-copilot/grok-code-fast-1&lt;br /&gt;
github-copilot/oswe-vscode-prime&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7067</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7067"/>
		<updated>2025-12-03T06:38:08Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure your shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # jump to a directory whose path contains foo; multiple arguments do a fuzzy search&lt;br /&gt;
jc bar     # jump to a child (sub-directory) of the current directory without typing its full name&lt;br /&gt;
jo music   # open the file manager at the matched directory instead of jumping there&lt;br /&gt;
jco images # open the file manager at a matched child directory&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into your shell so it is loaded every time the shell starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
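The comments above can be turned into a concrete first run; a minimal sketch using a throwaway directory (the paths are illustrative), with the direnv authorization step shown but the load simulated by sourcing the file directly:

```shell
# Create a project directory with an .envrc (path is illustrative)
mkdir -p /tmp/direnv-demo && cd /tmp/direnv-demo
echo 'export DB_HOST=localhost' > .envrc

# With direnv installed and hooked, you would authorize it:
#   direnv allow .
# and direnv would load it on cd; here we simulate that by sourcing it:
. ./.envrc
echo "$DB_HOST"   # prints: localhost
```

Once allowed, direnv loads and unloads the variables automatically as you enter and leave the directory.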
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install via the official install script (piped to bash)&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs into ~/.opencode and appends 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall: remove the install directory and its config file&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode from GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public). Follow the prompts and it will have you log in to the account covered by your subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/GLOBAL_RULES.md (quoted delimiter: the file content is written literally)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with &amp;lt;REDACTED&amp;gt;.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Note: after this, `opencode models` lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&lt;br /&gt;
# Verify auth credentials file&lt;br /&gt;
cat ~/.local/share/opencode/auth.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7066</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7066"/>
		<updated>2025-12-03T06:33:27Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure your shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # jump to a directory whose path contains foo; multiple arguments do a fuzzy search&lt;br /&gt;
jc bar     # jump to a child (sub-directory) of the current directory without typing its full name&lt;br /&gt;
jo music   # open the file manager at the matched directory instead of jumping there&lt;br /&gt;
jco images # open the file manager at a matched child directory&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into your shell so it is loaded every time the shell starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install via the official install script (piped to bash)&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs into ~/.opencode and appends 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall: remove the install directory and its config file&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode from GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public). Follow the prompts and it will have you log in to the account covered by your subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL' # quote the delimiter so &amp;quot;$schema&amp;quot; is written literally&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/GLOBAL_RULES.md (quoted delimiter: the file content is written literally)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with &amp;lt;REDACTED&amp;gt;.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Note: after this, `opencode models` lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&lt;br /&gt;
# Verify auth credentials file&lt;br /&gt;
cat ~/.local/share/opencode/auth.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7065</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7065"/>
		<updated>2025-12-03T06:30:03Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure your shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # jump to a directory whose path contains foo; multiple arguments do a fuzzy search&lt;br /&gt;
jc bar     # jump to a child (sub-directory) of the current directory without typing its full name&lt;br /&gt;
jo music   # open the file manager at the matched directory instead of jumping there&lt;br /&gt;
jco images # open the file manager at a matched child directory&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into your shell so it is loaded every time the shell starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install via the official install script (piped to bash)&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs into ~/.opencode and appends 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall: remove the install directory and its config file&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode from GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public). Follow the prompts and it will have you log in to the account covered by your subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL' # quote the delimiter so &amp;quot;$schema&amp;quot; is written literally&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/GLOBAL_RULES.md (quoted delimiter: backticks in the content are not executed)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with `&amp;lt;REDACTED&amp;gt;`.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Note: after this, `opencode models` lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&lt;br /&gt;
# Verify auth credentials file&lt;br /&gt;
cat ~/.local/share/opencode/auth.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7064</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7064"/>
		<updated>2025-12-03T06:28:51Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure your shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # jump to a directory whose path contains foo; multiple arguments do a fuzzy search&lt;br /&gt;
jc bar     # jump to a child (sub-directory) of the current directory without typing its full name&lt;br /&gt;
jo music   # open the file manager at the matched directory instead of jumping there&lt;br /&gt;
jco images # open the file manager at a matched child directory&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into your shell so it is loaded every time the shell starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install via the official install script (piped to bash)&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs into ~/.opencode and appends 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall: remove the install directory and its config file&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode from GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public). Follow the prompts and it will have you log in to the account covered by your subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL' # quote the delimiter so &amp;quot;$schema&amp;quot; is written literally&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/GLOBAL_RULES.md (quoted delimiter: backticks in the content are not executed)&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;'EOL'&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with `&amp;lt;REDACTED&amp;gt;`.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# Note: after this, `opencode models` lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7063</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7063"/>
		<updated>2025-12-03T06:27:46Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure your shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # jump to a directory whose path contains foo; multiple arguments do a fuzzy search&lt;br /&gt;
jc bar     # jump to a child (sub-directory) of the current directory without typing its full name&lt;br /&gt;
jo music   # open the file manager at the matched directory instead of jumping there&lt;br /&gt;
jco images # open the file manager at a matched child directory&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into your shell so it is loaded every time the shell starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install via the official install script (piped to bash)&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs into ~/.opencode and appends 'export PATH=/home/vagrant/.opencode/bin:$PATH' to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall: remove the install directory and its config file&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json&lt;br /&gt;
&lt;br /&gt;
# Install opencode from GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To connect a paid GitHub subscription, run &amp;lt;code&amp;gt;opencode auth login&amp;lt;/code&amp;gt; and choose GitHub Copilot (public). Follow the prompts and it will have you log in to the account covered by your subscription.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
opencode auth login&lt;br /&gt;
&lt;br /&gt;
# Create ~/.config/opencode/opencode.json&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/opencode.json &amp;lt;&amp;lt;'EOL' # quote the delimiter so &amp;quot;$schema&amp;quot; is written literally&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;disabled_providers&amp;quot;: [&lt;br /&gt;
    &amp;quot;openai&amp;quot;,&lt;br /&gt;
    &amp;quot;opencode&amp;quot;,&lt;br /&gt;
    &amp;quot;anthropic&amp;quot;,&lt;br /&gt;
    &amp;quot;google&amp;quot;,&lt;br /&gt;
    &amp;quot;groq&amp;quot;,&lt;br /&gt;
    &amp;quot;mistral&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;model&amp;quot;: &amp;quot;github-copilot/claude-opus-4.5&amp;quot;,&lt;br /&gt;
  &amp;quot;permission&amp;quot;: {&lt;br /&gt;
    &amp;quot;webfetch&amp;quot;: &amp;quot;ask&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;instructions&amp;quot;: [&lt;br /&gt;
      &amp;quot;~/.config/opencode/GLOBAL_RULES.md&amp;quot;&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;$schema&amp;quot;: &amp;quot;https://opencode.ai/config.json&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# and ~/.config/opencode/GLOBAL_RULES.md&lt;br /&gt;
&lt;br /&gt;
cat &amp;gt; ~/.config/opencode/GLOBAL_RULES.md &amp;lt;&amp;lt;EOL&lt;br /&gt;
# SECURITY &amp;amp; PRIVACY PROTOCOL&lt;br /&gt;
1. **NO SECRETS:** You are STRICTLY FORBIDDEN from outputting, printing, or repeating any API keys, passwords, credentials, or secrets found in code or logs.&lt;br /&gt;
2. **REDACTION:** If you must reference a line of code containing a secret, you must replace the secret with `&amp;lt;REDACTED&amp;gt;`.&lt;br /&gt;
3. **MOCK DATA:** When generating code examples or tests, always use dummy data (e.g., &amp;quot;example_key_123&amp;quot;), never real data from the context.&lt;br /&gt;
EOL&lt;br /&gt;
&lt;br /&gt;
# After this, &amp;quot;opencode models&amp;quot; lists only the Copilot models, none of the open-source ones.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7062</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7062"/>
		<updated>2025-12-03T06:19:58Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* direnv */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure you shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # Jump To A Directory That Contains foo. It takes multiple arguments to do fuzzy search&lt;br /&gt;
jc bar     # jump to a child directory (sub-directory of current directory) rather than typing out the full name&lt;br /&gt;
jo music   # Open File Manager To Directories (instead of jumping)&lt;br /&gt;
jco images # Opening a file manager to a child directory is also supported&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into Your Shell to load direnv every time it starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
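A minimal example of what the &amp;lt;code&amp;gt;.envrc&amp;lt;/code&amp;gt; mentioned above might contain (all values are hypothetical); direnv exports these when you enter the directory and unloads them when you leave:

```shell
# Example .envrc - hypothetical project settings, loaded/unloaded by direnv
export DB_HOST=localhost
export DB_PORT=5432
export APP_ENV=dev
```

Remember to run `direnv allow .` again after every edit to the file, otherwise direnv refuses to load it.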
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install Official via bash pipe&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs in ~/.opencode and adds &amp;lt;code&amp;gt;export PATH=/home/vagrant/.opencode/bin:$PATH&amp;lt;/code&amp;gt; to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json  # and remove the PATH line from ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# Install opencode via GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7061</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7061"/>
		<updated>2025-12-03T06:19:07Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* direnv */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure you shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # Jump To A Directory That Contains foo. It takes multiple arguments to do fuzzy search&lt;br /&gt;
jc bar     # jump to a child directory (sub-directory of current directory) rather than typing out the full name&lt;br /&gt;
jo music   # Open File Manager To Directories (instead of jumping)&lt;br /&gt;
jco images # Opening a file manager to a child directory is also supported&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install direnv Package&lt;br /&gt;
sudo apt install direnv&lt;br /&gt;
&lt;br /&gt;
# Hook direnv into Your Shell to load direnv every time it starts&lt;br /&gt;
echo 'eval &amp;quot;$(direnv hook bash)&amp;quot;' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# To start using it in a project, navigate to a directory, create a file named .envrc with your environment variables&lt;br /&gt;
# (e.g., export DB_HOST=localhost), and then run the following command to authorize it:&lt;br /&gt;
direnv allow .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install Official via bash pipe&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs in ~/.opencode and adds &amp;lt;code&amp;gt;export PATH=/home/vagrant/.opencode/bin:$PATH&amp;lt;/code&amp;gt; to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json  # and remove the PATH line from ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# Install opencode via GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7060</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7060"/>
		<updated>2025-12-02T08:46:14Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Opencode */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure you shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # Jump To A Directory That Contains foo. It takes multiple arguments to do fuzzy search&lt;br /&gt;
jc bar     # jump to a child directory (sub-directory of current directory) rather than typing out the full name&lt;br /&gt;
jo music   # Open File Manager To Directories (instead of jumping)&lt;br /&gt;
jco images # Opening a file manager to a child directory is also supported&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
TODO:&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install Official via bash pipe&lt;br /&gt;
curl -fsSL https://opencode.ai/install | bash&lt;br /&gt;
&lt;br /&gt;
# Installs in ~/.opencode and adds &amp;lt;code&amp;gt;export PATH=/home/vagrant/.opencode/bin:$PATH&amp;lt;/code&amp;gt; to the bottom of ~/.bashrc&lt;br /&gt;
tree ~/.opencode/&lt;br /&gt;
/home/vagrant/.opencode/&lt;br /&gt;
└── bin&lt;br /&gt;
    └── opencode&lt;br /&gt;
&lt;br /&gt;
# Upgrade&lt;br /&gt;
opencode upgrade&lt;br /&gt;
&lt;br /&gt;
# Uninstall&lt;br /&gt;
rm -rf ~/.opencode ~/.opencode.json  # and remove the PATH line from ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
# Install opencode via GitHub Releases&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.tar.gz -o $TEMPDIR/opencode-linux-x64.tar.gz&lt;br /&gt;
tar xzf $TEMPDIR/opencode-linux-x64.tar.gz -C $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/ArgoCD&amp;diff=7059</id>
		<title>Kubernetes/ArgoCD</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/ArgoCD&amp;diff=7059"/>
		<updated>2025-11-10T10:03:23Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Install cli =&lt;br /&gt;
{{Note|Requires &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt;}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=argoproj/argo-cd&lt;br /&gt;
REPO_FILE=argocd-linux-amd64&lt;br /&gt;
BINARY=argocd&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo ${LATEST}&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/${REPO_FILE} -o ${TEMPDIR}/${BINARY}&lt;br /&gt;
sudo install ${TEMPDIR}/${BINARY} /usr/local/bin/${BINARY}&lt;br /&gt;
&lt;br /&gt;
# Version&lt;br /&gt;
argocd version&lt;br /&gt;
argocd: v3.1.8+becb020&lt;br /&gt;
  BuildDate: 2025-09-30T16:04:21Z&lt;br /&gt;
  GitCommit: becb020064fe9be5381bf6e5818ff8587ca8f377&lt;br /&gt;
  GitTreeState: clean&lt;br /&gt;
  GoVersion: go1.24.6&lt;br /&gt;
  Compiler: gc&lt;br /&gt;
  Platform: linux/amd64&lt;br /&gt;
argocd-server: v3.1.8+becb020&lt;br /&gt;
  BuildDate: 2025-09-30T15:33:46Z&lt;br /&gt;
  GitCommit: becb020064fe9be5381bf6e5818ff8587ca8f377&lt;br /&gt;
  GitTreeState: clean&lt;br /&gt;
  GoVersion: go1.24.6&lt;br /&gt;
  Compiler: gc&lt;br /&gt;
  Platform: linux/amd64&lt;br /&gt;
  Kustomize Version: v5.7.0 2025-06-28T07:00:07Z&lt;br /&gt;
  Helm Version: v3.18.4+gd80839c&lt;br /&gt;
  Kubectl Version: v0.33.1&lt;br /&gt;
  Jsonnet Version: v0.21.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Login =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ARGOCD_SERVER=argocd.acme.com&lt;br /&gt;
ARGOCD_ADMINPASSWORD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&amp;quot;{.data.password}&amp;quot; | base64 -d)&lt;br /&gt;
&lt;br /&gt;
# Usual login&lt;br /&gt;
argocd login $ARGOCD_SERVER --username admin --password $ARGOCD_ADMINPASSWORD --grpc-web&lt;br /&gt;
&lt;br /&gt;
# Behind a proxy, or when ArgoCD is exposed only on port 80 (never worked)&lt;br /&gt;
argocd login argocd.acme.com --username admin --password $ARGOCD_ADMINPASSWORD --plaintext --port-forward --port-forward-namespace argocd&lt;br /&gt;
'admin:login' logged in successfully&lt;br /&gt;
Context 'port-forward' updated&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7058</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7058"/>
		<updated>2025-09-30T15:42:11Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* tfautomv - Terraform refactor */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about utilising HashiCorp's Terraform to build infrastructure as code (IaC).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| most of the paragraphs have examples using syntax from Terraform versions prior to 0.12, which used HCLv1. HCLv2, introduced with v0.12+, contains significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv --depth=1&lt;br /&gt;
echo &amp;quot;[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/&amp;quot; &amp;gt;&amp;gt; ~/.bashrc # or ~/.bash_profile&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install v1.12.1&lt;br /&gt;
tfenv use v1.12.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
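Besides &amp;lt;code&amp;gt;tfenv use&amp;lt;/code&amp;gt;, tfenv can pin a version per project via a &amp;lt;code&amp;gt;.terraform-version&amp;lt;/code&amp;gt; file checked in alongside the code (standard tfenv behaviour, but verify with your tfenv version):

```shell
# Pin the Terraform version for this project in a version file.
# tfenv picks it up automatically when run from this directory.
cd "$(mktemp -d)"                  # stand-in for your project directory
echo "1.12.1" > .terraform-version
# tfenv install   # with no argument, installs the pinned version
# tfenv use       # switches to it
cat .terraform-version
```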
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode with 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with enabled Language Server; open the command pallet with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs, it looks for .tf files where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory it runs from. Therefore, if you wish to reference a common file, create a symbolic link to it inside the directory containing your .tf files.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x, major changes and features have been introduced, including splitting providers out of the core binary: each provider is now a separate binary. See below for an example with the Azure provider and other HashiCorp-developed providers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example, how to source credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many combinations for setting up the backend and AWS credentials. It is important to understand that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it only uses the backend's own settings.&lt;br /&gt;
* exporting credentials allows working with assume roles that differ between the backend and provider blocks&lt;br /&gt;
* alternatively, specify a different &amp;lt;code&amp;gt;profile = &amp;lt;/code&amp;gt; in each block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## profile allows assumes roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume roles (eg. ) in other accounts:&lt;br /&gt;
#          | * arn:aws:iam::111111111111:role/terraform-s3state              - save state in s3 bucket&lt;br /&gt;
#          | * arn:aws:iam::222222222222:role/terraform-crossaccount-admin   - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# Unset all of them if needed&lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
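The &amp;lt;code&amp;gt;unset ${!AWS@}&amp;lt;/code&amp;gt; one-liner above relies on bash's &amp;lt;code&amp;gt;${!PREFIX@}&amp;lt;/code&amp;gt; expansion, which expands to the names of all shell variables starting with PREFIX. A quick demonstration with dummy values:

```shell
# ${!AWS@} expands to the names of all variables whose names start with "AWS"
AWS_ACCESS_KEY_ID=dummy
AWS_SECRET_ACCESS_KEY=dummy
AWS_DEFAULT_REGION=us-east-1
echo "AWS vars: ${!AWS@}"        # lists the AWS* variable names
unset ${!AWS@}                   # unsets them all in one go
echo "AWS vars after unset: ${!AWS@}"
```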
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
# profile &amp;quot;dev-us&amp;quot; # we use 'role_arn' but could specify aws profile instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; { # NB: backend blocks do not support variable interpolation; the ${var.*} values below are illustrative only&lt;br /&gt;
    bucket  = &amp;quot;tfstate-${var.project}-${var.account-id}&amp;quot; # must exist beforehand&lt;br /&gt;
    key     = &amp;quot;terraform/aws/${var.project}/tfstate&amp;quot;     # this could be much simpler when working with terraform workspaces&lt;br /&gt;
    region  = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in an infra account that the s3 state exists&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use profiles but instead we use 'assume_role' option. Also on your laptop &lt;br /&gt;
## it should be your creds profile eg. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role = {&lt;br /&gt;
#   role_arn  = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot; # assume role in target account (hardcoded)&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # or use variables&lt;br /&gt;
  }&lt;br /&gt;
  region  = &amp;quot;${var.aws_region}&amp;quot;&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
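The select-then-apply steps above can be wrapped in a small convenience function (hypothetical; assumes &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt; / &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt; exist as shown):

```shell
# Hypothetical wrapper tying a workspace to its matching tfvars file
tf_deploy() {
  local env="$1"
  # Select the workspace, creating it on first use
  terraform workspace select "$env" || terraform workspace new "$env"
  # Apply with the per-environment variable file
  terraform apply --var-file "${env}.tfvars"
}
# Usage:
# tf_deploy dev
# tf_deploy prod
```

Keeping the tfvars filename equal to the workspace name (as the &amp;lt;code&amp;gt;$(terraform workspace show).tfvars&amp;lt;/code&amp;gt; trick above already does) avoids applying prod variables into the dev workspace by accident.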
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
The markings in a plan output mean:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
The apply stage, when run for the first time, creates terraform.tfstate once all changes are done. This file should not be modified manually. It records what is already deployed in the cloud, so the next time apply runs it compares against the file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated, terraform.tfstate looks like this:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
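The remaining fields can be read back without Terraform itself. A minimal sketch, assuming only python3 is available; it recreates the emptied state shown above under /tmp rather than touching a real terraform.tfstate:

```shell
# Recreate the post-destroy state file shown above (sample data only)
cat > /tmp/sample.tfstate <<'EOF'
{
    "version": 3,
    "terraform_version": "0.9.1",
    "serial": 1,
    "lineage": "c22ccad7-ff26-4b8a-bf19-819477b45202",
    "modules": [
        {"path": ["root"], "outputs": {}, "resources": {}, "depends_on": []}
    ]
}
EOF

# The state file is plain JSON, so any JSON tool can inspect it
python3 - <<'EOF'
import json
with open('/tmp/sample.tfstate') as f:
    state = json.load(f)
print(state['terraform_version'])             # 0.9.1
print(len(state['modules'][0]['resources']))  # 0 resources left after destroy
EOF
```

This is only for inspection; hand-editing a live state file is best avoided.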
&lt;br /&gt;
= AWS credentials profiles and variable files =&lt;br /&gt;
Instead of referencing the access and secret keys directly within a .tf file, we can use an AWS credentials profile file. Terraform looks this file up for the profile named by the variable we specify in variables.tf. Note: there are '''no double quotes''' in this file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
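The credentials file is plain INI, so a profile's keys can be pulled out with standard tools. A sketch assuming a throwaway copy under /tmp with the dummy values from above, not your real ~/.aws/credentials:

```shell
# Throwaway credentials file with the dummy values from the example
cat > /tmp/credentials <<'EOF'
[terraform-profile1]
aws_access_key_id     = AAAAAAAAAAA
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
EOF

# Print the access key id for one named profile
profile=terraform-profile1
awk -v p="[$profile]" '
  $0 == p { in_section = 1; next }   # entered the wanted profile section
  /^\[/   { in_section = 0 }         # any other [section] header ends it
  in_section && $1 == "aws_access_key_id" { print $3 }
' /tmp/credentials
```

Note the sketch uses a file without trailing comments; keeping comments out of the real credentials file is safest too.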
&lt;br /&gt;
&lt;br /&gt;
We can then remove the access and secret keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
  region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and create a variable file to reference it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} #a variable without a default value prompts for the value interactively; here it should be 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #set the variable value on the command line&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
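Besides -var, Terraform also picks up values from TF_VAR_&lt;name&gt; environment variables and auto-loads a terraform.tfvars file from the working directory. A sketch of both conventions, written under /tmp so nothing real is touched:

```shell
# 1. Environment variable named TF_VAR_<variable name>
export TF_VAR_profile=terraform-profile1

# 2. A terraform.tfvars file, auto-loaded by terraform plan/apply
#    (written to /tmp here; normally it sits next to your .tf files)
cat > /tmp/terraform.tfvars <<'EOF'
profile = "terraform-profile1"
EOF

grep profile /tmp/terraform.tfvars
```

Either mechanism avoids the interactive prompt for variables that have no default.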
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt; is also known as a variables file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the SSH private key file, ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
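After apply you usually want the instance address without digging through terraform show. A sketch of a companion outputs.tf for the example above; the file is written under /tmp here only for illustration, and the attribute names are the standard aws_instance ones in this 0.11-era interpolation syntax:

```shell
# Companion outputs.tf exposing the webserver's address after apply
cat > /tmp/outputs.tf <<'EOF'
output "webserver_public_ip" {
  value = "${aws_instance.webserver.public_ip}"
}
output "webserver_public_dns" {
  value = "${aws_instance.webserver.public_dns}"
}
EOF
cat /tmp/outputs.tf
```

With this in place, `terraform output webserver_public_ip` prints the address directly after an apply.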
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
(...omitted...)&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run the destroy command to delete all resources that were created:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list #&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It is not possible to taint a whole module.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint 'module.MODULE_NAME.aws_instance.main[0]'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Use Ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, given the question of how to store the state of a local-exec Ansible run. It could be set to always run, as Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often you need to manipulate a data structure that is the output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, a &amp;lt;tt&amp;gt;data.resource&amp;lt;/tt&amp;gt; or simply a template, a hidden computation that is not always displayed on your screen. You can use the following techniques to iterate over your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - empty virtual container that can run any arbitrary commands&lt;br /&gt;
* '''Problem statement:''' Display a computed Terraform &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' Use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to create a template; the rendered template will be shown in a &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt;. If the template is a JSON policy, an invalid policy fails on apply and you cannot see why. The plan will show the object being constructed, and after running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; it can be saved into the state file as an output variable. The object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and ouput&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
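The template only becomes valid JSON after rendering, so a rendered policy can be sanity-checked outside Terraform. A sketch using made-up substituted values (the account id) and python3's json.tool:

```shell
# A rendered copy of the template's first statement, with a sample
# account id substituted for ${currentAccountId}
cat > /tmp/rendered_policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Id": "key-consolepolicy-1",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
            "Action": "kms:*",
            "Resource": "*"
        }
    ]
}
EOF

# json.tool exits non-zero on malformed JSON, e.g. a trailing comma
# left behind by the %{ if } directive
python3 -m json.tool /tmp/rendered_policy.json > /dev/null && echo "valid JSON"
```

This catches the common failure mode where the conditional block leaves a dangling comma when disabled.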
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable logging to a file in Terraform, convert the log file to PDF, and use sheri.ai to give us the answers.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt;==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. This is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. Terraform console will read configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
On Linux, add the line below right after the shebang:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now you can go and open System Logs in AWS Console to view user-data script logs.&lt;br /&gt;
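The same pattern can be tried out without root or an EC2 instance. The sketch below writes to a temp file instead of /var/log/user-data.log and drops the logger/console part; it assumes only standard bash and tee:

```shell
# Sandbox version of the user-data logging line: same exec/tee pattern, but
# unprivileged, and with made-up messages standing in for the real script body.
log=$(mktemp)

bash -c "
  exec > >(tee '$log') 2>&1     # everything below lands on stdout AND in the log
  echo 'user-data: installing packages...'
  echo 'user-data: done'
"
sleep 1                          # give the background tee a moment to flush

grep -c 'user-data:' "$log"      # both lines were captured
```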
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visualisation file. You may need to install &amp;lt;code&amp;gt;graphviz&amp;lt;/code&amp;gt; (&amp;lt;code&amp;gt;sudo apt-get install graphviz&amp;lt;/code&amp;gt;) if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; flag causes Terraform to mark the arrows that are part of the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other colour you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your configuration to avoid cross-module references. Instead of directly accessing an output of one module from inside another, expose it as an input parameter and wire everything together at the top level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
When facing a cyclic dependency, study the graph and decide which resource to remove from the state, one that will be generated again later. If the graph is unclear or too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state  rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create s3 bucket with unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure terraform:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to backend configuration. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with the partition key &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. When a lock is taken, an entry similar to the one below gets created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform’s automatic state upgrading processes, so it is best to do this with the same Terraform CLI version that you’ve most recently been using against this workspace, so that the state won’t be implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via the CLI:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
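The single quotes around the whole &lt;code&gt;-var&lt;/code&gt; argument are what protect the inner double quotes from the shell. A quick printf check (no Terraform needed) shows exactly what terraform would receive:

```shell
# printf echoes back each argument as the shell delivers it, so we can verify
# that the JSON-style quoting inside the -var value survives intact.
printf '%s\n' -var='image_id_list=["ami-abc123","ami-def456"]'
# -var=image_id_list=["ami-abc123","ami-def456"]
```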
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definitions files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute - usually has user-defined keys, like we see in the tags example&lt;br /&gt;
* a nested block - always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ for_each, and the new expression syntax: &amp;quot;${var.vpc_cidr}&amp;quot; can now be written as var.vpc_cidr&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags =  {           #&amp;lt;-note of '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = list(object({ from_port = number, to_port = number }))&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    for_each = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
    triggers = {&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = tostring(each.value)&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way as the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, such as &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules: an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluates to an object value with attributes corresponding to the module's named outputs.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
This is mostly used for transforming pre-existing lists and maps rather than generating new ones. For example, we can convert all elements in a list of strings to upper case using this expression.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; expression iterates over each element of the list and returns the value of upper(el) for each element, in the form of a new list. We can also use this expression to generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, we can use ''if'' as a filter in a ''for'' expression. The expression below returns a list of all non-empty elements in their uppercase form, so each original element now corresponds to its uppercase version:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that ''if'' here acts only as a filter; it cannot be combined with logical operations the way the ternary operator can.&lt;br /&gt;
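As an illustration only (plain bash, not Terraform), the same filter-and-transform can be mimicked like this:

```shell
# Bash analogue of: [for i in var.list : upper(i) if i != ""]
# The list values are made up for the demo.
list=("ap-south-1" "" "eu-west-1")
upper=()
for i in "${list[@]}"; do
  [[ -n "$i" ]] && upper+=("${i^^}")   # skip empty strings, uppercase the rest
done
printf '%s\n' "${upper[@]}"
# AP-SOUTH-1
# EU-WEST-1
```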
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing the items whose string value does not match a regex expression.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of array the map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# 'for k, v' syntax builds a new object 'validation_domains' by iterating over the array of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options' and conditionally keeps 'v' if its&lt;br /&gt;
# domain name (with any &amp;quot;*.&amp;quot; prefix stripped) equals &amp;quot;dev.example.com&amp;quot;. tomap(v) is required&lt;br /&gt;
# to persist the type across the for expression.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options : tomap(v)&lt;br /&gt;
    if replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;) == &amp;quot;dev.example.com&amp;quot;&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# 'for domain' expression builds a new list only when a domain matches the regexall pattern.&lt;br /&gt;
# It checks that the regexall length (number of matches) is &amp;gt; 0, returning true or false, so&lt;br /&gt;
# the 'for domain : ... if' statement conditionally adds the item to the new list&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above but iterating over array of maps (k,v - key, value pairs)&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the array of maps 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
# to build a list of the fqdns stored under each map's .resource_record_name key.&lt;br /&gt;
# With 'for fqdn' syntax, on each iteration 'fqdn=aws_acm_certificate.main.domain_validation_options[index]',&lt;br /&gt;
# and anything after ':' is the value each result is set to, here fqdn.resource_record_name&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
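The regexall-length filter above can be mimicked in plain bash (illustration only; the domain values are copied from the sample output earlier in this section):

```shell
# Keep only the domains matching "dev.example.com", like
#   [for domain in local.distinct_domains : domain
#    if length(regexall("dev.example.com", domain)) > 0]
distinct_domains=("api.example.com" "dev.example.com" "api-aat1.dev.example.com" "api-aat2.dev.example.com")
matched=()
for domain in "${distinct_domains[@]}"; do
  if [[ "$domain" =~ dev\.example\.com ]]; then   # regex match, like regexall
    matched+=("$domain")
  fi
done
printf '%s\n' "${matched[@]}"
# dev.example.com
# api-aat1.dev.example.com
# api-aat2.dev.example.com
```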
== function: replace, regex ==&lt;br /&gt;
Snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match anyline that starts with '#' or any 'whitespace(s) + #'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the search:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
&lt;br /&gt;
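To experiment with the clean-up outside Terraform, the same whole-line-comment and blank-line removal can be approximated with sed (the YAML content below is made up, and sed's BRE syntax stands in for the re2 flags):

```shell
# Same idea as the two nested replace() calls: first sed expression drops lines
# that are only whitespace + '#', second drops blank lines. Inline comments
# after real content survive, matching the match_comment pattern's behaviour.
rendered='# generated values
replicas: 2

image:
  tag: v1'
printf '%s\n' "$rendered" | sed -e '/^[[:space:]]*#/d' -e '/^[[:space:]]*$/d'
# replicas: 2
# image:
#   tag: v1
```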
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file, you pass values for the variables defined in the module, so it creates resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory will be created that contains symlinks to the module.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables are a way to return important data when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. Once the .tfstate file has been populated, these variables can also be recalled using the &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview correctness of interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #uuid() changes every run, so this resource is always re-run&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of creating per-instance user data from a template&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; resource is deprecated in favour of the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; data source&lt;br /&gt;
* Terraform 0.12+ offers the new &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function, which removes the need for a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
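With the 0.12+ syntax, the userdata example above can be sketched without a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object by calling &amp;lt;code&amp;gt;templatefile()&amp;lt;/code&amp;gt; directly (a sketch only; names follow the example above):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count     = var.instance_count&lt;br /&gt;
  subnet_id = element(data.aws_subnet.private.*.id, count.index)&lt;br /&gt;
  # templatefile() renders the template inline, replacing data &amp;quot;template_file&amp;quot;&lt;br /&gt;
  user_data = templatefile(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;, {&lt;br /&gt;
    vmname = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;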
== template json files ==&lt;br /&gt;
For working with JSON structures it is [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use the &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function, which simplifies escaping and delimiters and returns validated JSON.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation of the template expression&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
    indicates that the result is an array []                 closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; allows you to create a Terraform-managed resource, saved in the state file like any other, that uses provisioners such as &amp;lt;code&amp;gt;local-exec&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;remote-exec&amp;lt;/code&amp;gt; to execute arbitrary code. Use it only when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up an explicit dependency: this resource waits for the other resource to complete&lt;br /&gt;
  #and will not run if that resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers saves computed strings in the tfstate file; if a value changes on the next run, the resource is re-created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destroy&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: by default the local-exec provisioner runs the heredoc script as &amp;lt;code&amp;gt;/bin/sh -c &amp;quot;your-script&amp;quot;&amp;lt;/code&amp;gt;, so it does not strip meta-characters such as &amp;quot;double quotes&amp;quot;, which would cause the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; to fail. Therefore the output above is forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
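&lt;br /&gt;
A minimal sketch (resource name is illustrative): the shell can also be chosen explicitly with the provisioner's &amp;lt;code&amp;gt;interpreter&amp;lt;/code&amp;gt; argument, so the command is not subject to the default &amp;lt;code&amp;gt;/bin/sh -c&amp;lt;/code&amp;gt; behaviour:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;interpreter_example&amp;quot; {&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    # run the command with bash instead of the default /bin/sh -c&lt;br /&gt;
    interpreter = [&amp;quot;/bin/bash&amp;quot;, &amp;quot;-c&amp;quot;]&lt;br /&gt;
    command     = &amp;quot;echo \&amp;quot;hello from bash\&amp;quot;&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;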
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
Create a &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in the $HOME directory specifying the cache directory, or set an environment variable. Then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; to save providers into the shared cache directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
# Option 1.&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Option 2.&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Create the cache directory&lt;br /&gt;
mkdir $HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Delete per-root-module providers in the .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache directory is shared by all Terraform projects, provider versioning still works and the normal version constraints apply. To verify which version is locked for your current project, inspect the SHA256 hashes recorded in the &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
The [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI engine name] &amp;lt;code&amp;gt;aurora-mysql&amp;lt;/code&amp;gt; refers to engine version 5.7.x; for version 5.6.10a the engine name is &amp;lt;code&amp;gt;aurora&amp;lt;/code&amp;gt;.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## It is important that tflint is executed from the Terraform root path, hence `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs . &amp;gt; README.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
inframap usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# The most important subcommands:&lt;br /&gt;
# * generate: generates the graph from STDIN or a file; STDIN can be .tf files/modules or a .tfstate&lt;br /&gt;
# * prune: removes all unnecessary information from the state or HCL (not supported yet) so it can be shared without any security concerns&lt;br /&gt;
&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Converts an IAM policy in JSON format into a Terraform &amp;lt;code&amp;gt;aws_iam_policy_document&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure-as-code coverage and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform. Cloud providers: AWS, GitHub (Azure and GCP on the roadmap for 2021). driftctl is a free, open-source CLI that warns of infrastructure drift, spotting discrepancies as they happen and filling a missing piece in your DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown on live infra&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes moved blocks for you so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
curl -sSfL https://raw.githubusercontent.com/busser/tfautomv/main/install.sh | sudo sh&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&lt;br /&gt;
terraform plan -out=tfplan.bin&lt;br /&gt;
tfautomv --preplanned # this generates the moves.tf file&lt;br /&gt;
&lt;br /&gt;
# Apply the changes, by running terraform apply&lt;br /&gt;
terraform apply&lt;br /&gt;
&lt;br /&gt;
# Delete moves.tf&lt;br /&gt;
rm moves.tf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc..&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7057</id>
		<title>Docker</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7057"/>
		<updated>2025-09-03T04:49:39Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Ubuntu 24.04 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Containers are taking over the world :)&lt;br /&gt;
&lt;br /&gt;
= [https://docs.docker.com/install/linux/docker-ce/ubuntu/ Installation] =&lt;br /&gt;
General procedure:&lt;br /&gt;
# Make sure you don't have Docker already installed from your distribution's package manager&lt;br /&gt;
# The /var/lib/docker directory may be left over from a previous installation&lt;br /&gt;
&lt;br /&gt;
To install the latest version of Docker with curl:&lt;br /&gt;
&amp;lt;source  lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -sSL https://get.docker.com/ | sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CentOS ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo yum install bash-completion bash-completion-extras #optional, requires you log out&lt;br /&gt;
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 #utils&lt;br /&gt;
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo #docker-ee.repo for EE edition&lt;br /&gt;
                      # --enable docker-ce-{edge|test} #for beta releases&lt;br /&gt;
sudo yum update&lt;br /&gt;
sudo yum clean all #not sure why this command is here&lt;br /&gt;
sudo yum install docker-ce&lt;br /&gt;
#old: sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos&lt;br /&gt;
sudo systemctl enable docker &amp;amp;&amp;amp; sudo systemctl start docker &amp;amp;&amp;amp; sudo systemctl status docker&lt;br /&gt;
yum-config-manager --disable jenkins #disable source to prevent accidental update ?jenkins?&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ubuntu 24.04 ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Add Docker's official GPG key:&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install ca-certificates curl&lt;br /&gt;
sudo install -m 0755 -d /etc/apt/keyrings&lt;br /&gt;
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc&lt;br /&gt;
sudo chmod a+r /etc/apt/keyrings/docker.asc&lt;br /&gt;
&lt;br /&gt;
# Add the repository to Apt sources:&lt;br /&gt;
echo \&lt;br /&gt;
  &amp;quot;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \&lt;br /&gt;
  $(. /etc/os-release &amp;amp;&amp;amp; echo &amp;quot;${UBUNTU_CODENAME:-$VERSION_CODENAME}&amp;quot;) stable&amp;quot; | \&lt;br /&gt;
  sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
&lt;br /&gt;
# Install the latest version&lt;br /&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin&lt;br /&gt;
&lt;br /&gt;
# Install a specific version&lt;br /&gt;
## List the available versions:&lt;br /&gt;
apt-cache madison docker-ce | awk '{ print $3 }'&lt;br /&gt;
5:28.3.3-1~ubuntu.24.04~noble&lt;br /&gt;
5:28.3.2-1~ubuntu.24.04~noble&lt;br /&gt;
&lt;br /&gt;
VERSION_STRING=5:28.3.3-1~ubuntu.24.04~noble&lt;br /&gt;
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin&lt;br /&gt;
&lt;br /&gt;
# Manage Docker as a non-root user&lt;br /&gt;
sudo groupadd docker&lt;br /&gt;
sudo usermod -aG docker $USER&lt;br /&gt;
newgrp docker # activate the group without logging off&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ubuntu 16.04, 18.04, 20.04 ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Optional, clear out config files&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service&lt;br /&gt;
sudo rm /etc/default/docker #environment file&lt;br /&gt;
&lt;br /&gt;
# New docker package is called now 'docker-ce'&lt;br /&gt;
sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce  # start fresh&lt;br /&gt;
sudo apt-get -yq install apt-transport-https ca-certificates curl gnupg-agent software-properties-common # apt over HTTPs&lt;br /&gt;
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - # Docker official GPG key&lt;br /&gt;
sudo apt-key fingerprint 0EBFCD88 #verify&lt;br /&gt;
&lt;br /&gt;
#add the repository&lt;br /&gt;
sudo add-apt-repository &amp;quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&amp;quot; # or {edge|test}&lt;br /&gt;
sudo apt-get update # optional&lt;br /&gt;
&lt;br /&gt;
# Option 1 - install latest&lt;br /&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&lt;br /&gt;
# Option 2 - install fixed version&lt;br /&gt;
sudo apt-cache madison docker-ce # display available versions&lt;br /&gt;
sudo apt-get   install docker-ce=&amp;lt;VERSION_STRING&amp;gt;          docker-ce-cli=&amp;lt;VERSION_STRING&amp;gt;          containerd.io&lt;br /&gt;
sudo apt-get   install docker-ce=18.09.0~3-0~ubuntu-bionic docker-ce-cli=18.09.0~3-0~ubuntu-bionic containerd.io&lt;br /&gt;
sudo apt-mark  hold    docker-ce docker-ce-cli containerd.io&lt;br /&gt;
sudo apt-mark  showhold # show packages that version upgrade has been put on hold&lt;br /&gt;
&lt;br /&gt;
# Unhold&lt;br /&gt;
sudo apt-mark unhold   docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://docs.docker.com/engine/release-notes/ Newer versions] (&amp;gt;18.09.0) of Docker come with 3 packages:&lt;br /&gt;
* &amp;lt;code&amp;gt;containerd.io&amp;lt;/code&amp;gt; - container runtime daemon that interfaces with the OS (via the runc runtime), essentially decouples Docker from the OS, also provides container services for non-Docker container managers&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce&amp;lt;/code&amp;gt; - Docker daemon, this is the part that does all the management work, requires the other two on Linux&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce-cli&amp;lt;/code&amp;gt; - CLI tools to control the daemon, you can install them on their own if you want to control a remote Docker daemon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of how to run [[Jenkins CI|Jenkins docker image]]&lt;br /&gt;
&lt;br /&gt;
== Add a user to docker group ==&lt;br /&gt;
Add your user to &amp;lt;tt&amp;gt;docker group&amp;lt;/tt&amp;gt; to be able to run docker commands without need of ''sudo'' as the &amp;lt;code&amp;gt;docker.socket&amp;lt;/code&amp;gt; is owned by group &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo usermod -aG docker $(whoami)&lt;br /&gt;
&lt;br /&gt;
# log in to the new docker group (to avoid having to log out / log in again)&lt;br /&gt;
newgrp docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reason&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[root@piotr]$ ls -al /var/run/docker.sock&lt;br /&gt;
srw-rw----. 1 root docker 7 Jan 09:00 docker.sock&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= HTTP proxy =&lt;br /&gt;
Configure ''docker'' if you run behind a proxy server. In this example a CNTLM proxy runs on the host machine, listening on localhost:3128. The configuration below overrides the default docker.service behaviour by adding a drop-in to the Docker systemd service.&lt;br /&gt;
&lt;br /&gt;
First, create a systemd drop-in directory for the docker service:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo mkdir /etc/systemd/system/docker.service.d&lt;br /&gt;
sudo vi    /etc/systemd/system/docker.service.d/http-proxy.conf&lt;br /&gt;
[Service]&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://proxy.example.com:80/&amp;quot;&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://172.31.1.1:3128/&amp;quot; #overrides previous entry&lt;br /&gt;
Environment=&amp;quot;HTTPS_PROXY=http://172.31.1.1:3128/&amp;quot;&lt;br /&gt;
# If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable&lt;br /&gt;
Environment=&amp;quot;NO_PROXY=localhost,127.0.0.1,10.6.96.172,proxy.example.com:80&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Flush changes:&lt;br /&gt;
 $ sudo systemctl daemon-reload&lt;br /&gt;
Verify that the configuration has been loaded:&lt;br /&gt;
 $ systemctl show --property=Environment docker&lt;br /&gt;
 Environment=HTTP_PROXY=&amp;lt;nowiki&amp;gt;http://proxy.example.com:80/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Restart Docker:&lt;br /&gt;
 $ sudo systemctl restart docker&lt;br /&gt;
&lt;br /&gt;
= Docker create and run, basic options = &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Creates a container but doesn't start it&lt;br /&gt;
docker container create -it --name=&amp;quot;my-container&amp;quot; ubuntu:latest /bin/bash&lt;br /&gt;
docker container start my-container&lt;br /&gt;
&lt;br /&gt;
docker run -it --name=&amp;quot;mycentos&amp;quot; centos:latest /bin/bash&lt;br /&gt;
# -i   :- interactive mode (attach to STDIN)          \command to execute when instantiating container &lt;br /&gt;
# -t   :- attach to the current terminal (pseudo-TTY)&lt;br /&gt;
# -d   :- disconnect mode, daemon mode, detached mode, running the task in the background&lt;br /&gt;
# -p   :- publish to host exposed container port [ host_port(8080):container_exposedPort(80) ]&lt;br /&gt;
# --rm :- remove container after command has been executed&lt;br /&gt;
# --name=&amp;quot;name_your_container&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# -e|--env MYVAR=123 exports/passing variable to the container, echo $MYVAR will have a value 123&lt;br /&gt;
# --privileged :- option will allow Docker to perform actions normally restricted, &lt;br /&gt;
#                 like binding a device path to an internal container path. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Docker inspect =&lt;br /&gt;
== inspect image ==&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
docker image inspect centos:6&lt;br /&gt;
docker image inspect centos:6 --format '{{.ContainerConfig.Hostname}}' #just a single value&lt;br /&gt;
docker image inspect centos:6 --format '{{json .ContainerConfig}}'     #json key/value output&lt;br /&gt;
docker image inspect centos:6 --format '{{.RepoTags}}'                 #shows all associated tags with the image&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;code&amp;gt;--format&amp;lt;/code&amp;gt; is similar to &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt;&lt;br /&gt;
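For comparison, the same data can be queried with &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt; (a sketch; assumes &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt; is installed and the image is present locally - note that inspect returns a JSON array):&lt;br /&gt;

```shell
# equivalent queries with --format and with jq
docker image inspect centos:6 --format '{{.RepoTags}}'
docker image inspect centos:6 | jq '.[0].RepoTags'
```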
== inspect container ==&lt;br /&gt;
Shows current configuration state of a docker container or an image.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker inspect &amp;lt;container_name&amp;gt; | grep IPAddress&lt;br /&gt;
           &amp;quot;SecondaryIPAddresses&amp;quot;: null,&lt;br /&gt;
           &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
                   &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Attach/exec to a docker process =&lt;br /&gt;
If the container's main command is eg. &amp;lt;tt&amp;gt;/bin/bash&amp;lt;/tt&amp;gt;, you can attach to this running docker process. Note that when you exit, the container will stop.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker attach mycentos&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To avoid stopping the container when exiting, use the &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; command instead of &amp;lt;code&amp;gt;attach&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker exec -it mycentos /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attaching directly to a running container and then exiting the shell will cause the container to stop. Executing another shell in a running container and then exiting that shell will not stop the underlying container process started on instantiation.&lt;br /&gt;
&lt;br /&gt;
= Entrypoint, CMD, PID1 and [https://github.com/krallin/tini tini] =&lt;br /&gt;
== Entrypoint and receiving signals ==&lt;br /&gt;
Receiving and handling signals is just as important inside containers as it is for any other application. Remember that a container is just a group of processes running on your host, so you need to take care of signals sent to your applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Container management commands such as &amp;lt;code&amp;gt;docker stop&amp;lt;/code&amp;gt; send a signal (configurable in the Dockerfile) to the entrypoint process of your application, where &amp;lt;code&amp;gt;SIGTERM - 15 - Termination (ANSI)&amp;lt;/code&amp;gt; is the default.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT syntax&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# exec form, requires a JSON array; IT SHOULD ALWAYS BE USED&lt;br /&gt;
ENTRYPOINT [&amp;quot;/app/bin/your-app&amp;quot;, &amp;quot;arg1&amp;quot;, &amp;quot;arg2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# shell form, it always runs as a subcommand of '/bin/sh -c', thus your application will never see any signal sent to it&lt;br /&gt;
ENTRYPOINT &amp;quot;/app/bin/your-app arg1 arg2&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT is a shell script&lt;br /&gt;
If the application is started by a shell script in the regular way, the shell spawns your application in a new process and it won't receive signals from Docker. Therefore we need to tell the shell to replace itself with your application using the &amp;lt;code&amp;gt;[https://stackoverflow.com/questions/18351198/what-are-the-uses-of-the-exec-command-in-shell-scripts exec]&amp;lt;/code&amp;gt; command, check also the &amp;lt;code&amp;gt;[https://en.wikipedia.org/wiki/Exec_(system_call) exec syscall]&amp;lt;/code&amp;gt;. Use:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/app/bin/my-app      # incorrect, signal won't be received by 'my-app'&lt;br /&gt;
exec /app/bin/my-app # correct way&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
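A runnable sketch of this pattern (here &amp;lt;code&amp;gt;/bin/echo&amp;lt;/code&amp;gt; stands in for the real application, which in an image would be a hypothetical path like /app/bin/my-app):&lt;br /&gt;

```shell
#!/bin/sh
# Demo of the exec pattern: write a minimal entrypoint script, then run it.
cat > /tmp/docker-entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# setup work runs in the shell first, then exec replaces the shell with the app,
# so inside a container the app keeps PID 1 and receives Docker's signals
exec "$@"
EOF
chmod +x /tmp/docker-entrypoint.sh

/tmp/docker-entrypoint.sh /bin/echo "app started"
```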
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT exec with piped commands starts a subshell&lt;br /&gt;
If you &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; a pipeline, the command is run in a subshell, with the usual consequence: no signals reach your app.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec /app/bin/your-app | tai64n # here you want to add timestamps by piping through tai64n,&lt;br /&gt;
                                # causing running your command in a subshell&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Let another program be PID 1 and handle signalling&lt;br /&gt;
* tini&lt;br /&gt;
* dumb-init&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;-v&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/app/bin/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
tini and dumb-init are also able to proxy signals to process groups, which technically allows you to pipe your output. However, your pipe target receives that signal at the same time, so you can’t log anything on cleanup lest you crave race conditions and SIGPIPEs. So it's better to avoid logging at termination at all.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Change signal that will terminate your container process&lt;br /&gt;
Listen for SIGTERM or set STOPSIGNAL in your Dockerfile.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi Dockerfile&lt;br /&gt;
STOPSIGNAL SIGINT # this will trigger container termination process if someone press Ctrl^C&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References:&lt;br /&gt;
* [https://hynek.me/articles/docker-signals/ Why Your Dockerized Application Isn’t Receiving Signals]&lt;br /&gt;
* [http://smarden.org/runit/ runit] alternative to tini&lt;br /&gt;
&lt;br /&gt;
== Tini ==&lt;br /&gt;
It's a tiny but valid init for containers:&lt;br /&gt;
* protects you from software that accidentally creates zombie processes&lt;br /&gt;
* ensures that the default signal handlers work for the software you run in your Docker image&lt;br /&gt;
* does so completely transparently! Docker images that work without Tini will work with Tini without any changes&lt;br /&gt;
* Docker 1.13+ has Tini included, to enable Tini, just pass the &amp;lt;code&amp;gt;--init&amp;lt;/code&amp;gt; flag to docker run&lt;br /&gt;
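The &amp;lt;code&amp;gt;--init&amp;lt;/code&amp;gt; flag can be tried without changing the image (a sketch; requires a running Docker daemon):&lt;br /&gt;

```shell
# Run with Docker's bundled init as PID 1; the command becomes its child
docker run --init --rm ubuntu:latest ps -ef
# PID 1 should show as docker-init, with ps as a child process
```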
&lt;br /&gt;
&lt;br /&gt;
;Understanding Tini&lt;br /&gt;
After spawning your process, Tini will wait for signals and forward those to the child process, and periodically reap zombie processes that may be created within your container. When the &amp;quot;first&amp;quot; child process exits (/your/program in the examples above), Tini exits as well, with the exit code of the child process (so you can check your container's exit code to know whether the child exited successfully).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini - a small dynamically linked binary (in the 10KB range)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENV TINI_VERSION v0.18.0&lt;br /&gt;
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini&lt;br /&gt;
RUN chmod +x /tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# Run your program under Tini&lt;br /&gt;
CMD [&amp;quot;/your/program&amp;quot;, &amp;quot;-and&amp;quot;, &amp;quot;-its&amp;quot;, &amp;quot;arguments&amp;quot;]&lt;br /&gt;
# or docker run your-image /your/program ...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini to Alpine based image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
RUN apk add --no-cache tini&lt;br /&gt;
# Tini is now available at /sbin/tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/sbin/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Existing entrypoint&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References:&lt;br /&gt;
*[https://github.com/krallin/tini/issues/8 What is advantage of Tini?]&lt;br /&gt;
*[https://ahmet.im/blog/minimal-init-process-for-containers/ Choosing an init process for multi-process containers]&lt;br /&gt;
&lt;br /&gt;
= Mount directory in container =&lt;br /&gt;
We can mount a host directory into a Docker container so its content is available inside the container&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker run -it -v /mnt/sdb1:/opt/java pio2pio/java8&lt;br /&gt;
# syntax: -v /path/on/host:/path/in/container&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Build image = &lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
Each ''RUN'' line creates a new layer, so where possible we should join commands so the image ends up with fewer layers.&lt;br /&gt;
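For example, several installation steps can be combined into a single ''RUN'' instruction (a Dockerfile sketch; the package names are illustrative):&lt;br /&gt;

```shell
# One layer instead of three: update, install and clean up in a single RUN
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
```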
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
$ wget jkd1.8.0_111.tar.gz&lt;br /&gt;
$ cat Dockerfile &amp;lt;&amp;lt;- EOF #'&amp;lt;&amp;lt;-' heredoc with '-' minus ignores &amp;lt;tab&amp;gt; indent&lt;br /&gt;
ARG TAGVERSION=6                    #the only instruction allowed before FROM&lt;br /&gt;
FROM ubuntu:${TAGVERSION}&lt;br /&gt;
FROM ubuntu:latest                  #defines base image eg. ubuntu:16.04&lt;br /&gt;
LABEL maintainer=&amp;quot;myname@gmail.com&amp;quot; #key/value pair added to a metadata of the image&lt;br /&gt;
&lt;br /&gt;
ARG ARG1=value1&lt;br /&gt;
&lt;br /&gt;
ENV ENVIRONMENT=&amp;quot;prod&amp;quot;&lt;br /&gt;
ENV SHARE /usr/local/share  #define env variables with syntax ENV space EnvironmentVariable space Value&lt;br /&gt;
ENV JAVA_HOME $SHARE/java&lt;br /&gt;
&lt;br /&gt;
# COPY jkd1.8.0_111.tar.gz /tmp #works only with files, copy a file to container filesystem, here to /tmp&lt;br /&gt;
# ADD http://example.com/file.txt&lt;br /&gt;
ADD jkd1.8.0_111.tar.gz /  #add files into the image root folder, can add also URLs&lt;br /&gt;
&lt;br /&gt;
# SHELL [&amp;quot;executable&amp;quot;,&amp;quot;params&amp;quot;] #overrides /bin/sh -c for RUN,CMD, etc..&lt;br /&gt;
&lt;br /&gt;
# Executes commands during the build process in a new layer. E.g., it is often used for installing software packages&lt;br /&gt;
RUN mv /jkd1.8.0_111.tar.gz $JAVA_HOME &lt;br /&gt;
RUN apt-get update&lt;br /&gt;
RUN [&amp;quot;apt-get&amp;quot;, &amp;quot;update&amp;quot;, &amp;quot;-y&amp;quot;] #in JSON array format, runs the command without requiring a shell executable&lt;br /&gt;
&lt;br /&gt;
VOLUME /mymount_point #this command does not mount anything from a host, just creates a mountpoint&lt;br /&gt;
&lt;br /&gt;
EXPOSE 80 #it doesn't automatically map the port to the host&lt;br /&gt;
&lt;br /&gt;
#containers usually don't have a service manager eg. systemctl/service/init.d, as they are designed to run a single process&lt;br /&gt;
#the entrypoint becomes the main command that starts the main process&lt;br /&gt;
ENTRYPOINT apachectl &amp;quot;-DFOREGROUND&amp;quot; #think about it as the MAIN_PURPOSE_OF_CONTAINER command. &lt;br /&gt;
# It's always run by default; it can only be overridden at run time with --entrypoint&lt;br /&gt;
&lt;br /&gt;
#Default command run when a container starts. Only one per Dockerfile; can be overridden at run time.&lt;br /&gt;
CMD [&amp;quot;/bin/bash&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# STOPSIGNAL&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
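To illustrate how CMD and an exec-form ENTRYPOINT interact at run time (a sketch; &amp;lt;code&amp;gt;myrepo/java8&amp;lt;/code&amp;gt; stands in for any image built with an exec-form ENTRYPOINT):&lt;br /&gt;

```shell
docker run myrepo/java8                       # runs ENTRYPOINT with the default CMD as arguments
docker run myrepo/java8 /bin/sh               # overrides CMD only; ENTRYPOINT still runs
docker run --entrypoint /bin/sh myrepo/java8  # overrides the ENTRYPOINT itself
```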
&lt;br /&gt;
== Build ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
docker build --tag myrepo/java8 .  #-f point to custom Dockerfile name eg. -f Dockerfile2&lt;br /&gt;
# myrepo - DockerHub username, java8 - image name, &lt;br /&gt;
# .      - directory containing the Dockerfile&lt;br /&gt;
&lt;br /&gt;
docker build -t myrepo/java8 . --pull --no-cache --squash&lt;br /&gt;
# --pull     force download of a newer base image even if a local copy exists&lt;br /&gt;
# --no-cache don't use cache to build, forcing to rebuild all interim containers&lt;br /&gt;
# --squash   after the build squash all layers into a single layer. &lt;br /&gt;
&lt;br /&gt;
docker images             #list images&lt;br /&gt;
docker push myrepo/java8 #upload the image to DockerHub repository&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--squash&amp;lt;/code&amp;gt; works only on a Docker daemon with experimental features enabled.&lt;br /&gt;
&lt;br /&gt;
= Manage containers and images =&lt;br /&gt;
== Run a container ==&lt;br /&gt;
When you ''run'' a container, you create a new container from an image that has already been built (or pulled) and put it in the running state&lt;br /&gt;
* -d detached mode, the container runs in the background&lt;br /&gt;
* -i interactive mode, keeps STDIN open so you can interact with the container&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# docker container run [OPTIONS]           IMAGE    [COMMAND] [ARG...] # usage&lt;br /&gt;
  docker container run -it --name mycentos centos:6 /bin/bash&lt;br /&gt;
  docker           run -it pio2pio/java8 # the 'container' keyword is optional&lt;br /&gt;
# -i       :- run in interactive mode, then run command /bin/bash&lt;br /&gt;
# --rm     :- will delete container after run&lt;br /&gt;
# --publish | -p 80:8080 :- publish exposed container port 80-&amp;gt; to 8080 on the docker-host&lt;br /&gt;
# --publish-all | -P     :- publish all exposed container ports to random port &amp;gt;32768&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ctop #top for containers&lt;br /&gt;
docker ps -a #list containers&lt;br /&gt;
docker image ls #list images&lt;br /&gt;
docker images #short form of the command above&lt;br /&gt;
docker images --no-trunc&lt;br /&gt;
docker images -q #--quiet&lt;br /&gt;
docker images --filter &amp;quot;before=centos:6&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Search images in remote repository ==&lt;br /&gt;
Search DockerHub for images. You may need to run &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; first&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=ubuntu&lt;br /&gt;
docker search $IMAGE&lt;br /&gt;
NAME                            DESCRIPTION                                     STARS OFFICIAL   AUTOMATED&lt;br /&gt;
ubuntu                          Ubuntu is a Debian-based Linux operating sys…   8206  [OK]       &lt;br /&gt;
dorowu/ubuntu-desktop-lxde-vnc  Ubuntu with openssh-server and NoVNC            210              [OK]&lt;br /&gt;
rastasheep/ubuntu-sshd          Dockerized SSH service, built on top of offi…   167              [OK]&lt;br /&gt;
&lt;br /&gt;
IMAGE=apache&lt;br /&gt;
docker search $IMAGE --filter stars=50 # search images that have 50 or more stars&lt;br /&gt;
docker search $IMAGE --limit 10        # display top 10 images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List all available tags&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=nginx&lt;br /&gt;
wget -q https://registry.hub.docker.com/v1/repositories/${IMAGE}/tags -O - | sed -e 's/[][]//g' -e 's/&amp;quot;//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}' | sort -V&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Pull images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
                 &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
docker pull hello-world:latest # pull latest&lt;br /&gt;
docker pull --all-tags hello-world  # pull all tags&lt;br /&gt;
docker pull --disable-content-trust hello-world # disable verification &lt;br /&gt;
&lt;br /&gt;
docker images --digests #displays sha256: digest of an image&lt;br /&gt;
&lt;br /&gt;
# Dangling images - untagged intermediate images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/registries.html#registry_auth from Amazon ECR] ===&lt;br /&gt;
;Docker login to ECR service using IAM&lt;br /&gt;
&amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; does not support native IAM authentication methods. Therefore use the command below, which retrieves, decodes, and converts the &amp;lt;code&amp;gt;authorization IAM token&amp;lt;/code&amp;gt; into a pre-generated &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; command. The produced login credentials assume your current IAM User/Role permissions: if your IAM user can only pull from ECR, then even after &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; you still won't be able to push an image to the registry. An example error you may get is &amp;lt;code&amp;gt;not authorized to perform: ecr:InitiateLayerUpload&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log in to the ECR service; your IAM user needs the relevant pull/push permissions&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eval $(aws ecr get-login --region eu-west-1 --no-include-email)&lt;br /&gt;
     # aws ecr get-login # generates below docker command with the login token&lt;br /&gt;
     # docker login -u AWS -p **token** https://$ACCOUNT.dkr.ecr.us-east-1.amazonaws.com # &amp;lt;- output&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker login to a single ECR repository, requires awscli v1.17+&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ACCOUNT=111111111111&lt;br /&gt;
REPOSITORY=myrepo&lt;br /&gt;
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/$REPOSITORY&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
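The registry hostname used in the login commands above follows a fixed pattern; a minimal sketch deriving it from the account id and region (values hypothetical):

```shell
# Derive the ECR registry hostname from account id and region (hypothetical values)
ACCOUNT=111111111111
REGION=eu-west-1
REGISTRY="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com"
echo "$REGISTRY"   # 111111111111.dkr.ecr.eu-west-1.amazonaws.com
```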
&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html push to Amazon ECR] ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List images&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 5 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Tag an image 'b09807c20c96'&lt;br /&gt;
docker tag b09807c20c96 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
&lt;br /&gt;
# List images, to verify your newly tagged one&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   2.0.1 b09807c20c96 6 minutes ago  570MB # &amp;lt;- new tagged image&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 6 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Push an image to ECR&lt;br /&gt;
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
The push refers to repository [111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws]&lt;br /&gt;
2c405c66e675: Pushed &lt;br /&gt;
...&lt;br /&gt;
77cae8ab23bf: Layer already exists &lt;br /&gt;
2.0.1: digest: sha256:111111111193969807708e1f6aea2b19a08054f418b07cf64016a6d1111111111 size: 1796&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Save and import image ==&lt;br /&gt;
To move an image to another filesystem we can save it into a &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; archive&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Export&lt;br /&gt;
docker image save myrepo/centos:v2 &amp;gt; mycentos.v2.tar&lt;br /&gt;
tar -tvf mycentos.v2.tar&lt;br /&gt;
&lt;br /&gt;
# Import&lt;br /&gt;
docker image import mycentos.v2.tar &amp;lt;new_image_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Load from a stream&lt;br /&gt;
docker load &amp;lt; mycentos.v2.tar #or --input mycentos.v2.tar to avoid redirections&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Export aka commit container into image ==&lt;br /&gt;
Let's say we want to modify the stock image centos:6 by interactively installing Apache and setting it to autostart, then export the result as a new image. Let's do it!&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker pull centos:6&lt;br /&gt;
docker container run -it --name apache-centos6 centos:6&lt;br /&gt;
# Interactively do: yum -y update; yum install -y httpd; chkconfig httpd on; exit&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option1&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; b237d65fd197 newcentos:withapache #creates new image from a container's changes&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; &amp;lt;container_name&amp;gt; &amp;lt;repo&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
# -a :- author&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option2&lt;br /&gt;
docker container export apache-centos6 &amp;gt; apache-centos6.tar&lt;br /&gt;
docker image     import apache-centos6.tar newcentos:withapache&lt;br /&gt;
&lt;br /&gt;
docker images&lt;br /&gt;
REPOSITORY    TAG          IMAGE ID            CREATED             SIZE&lt;br /&gt;
newcentos     withapache   ea5215fb46ed        50 seconds ago      272MB&lt;br /&gt;
&lt;br /&gt;
docker image history newcentos:withapache&lt;br /&gt;
IMAGE        CREATED        CREATED BY   SIZE   COMMENT&lt;br /&gt;
ea5215fb46ed 2 minutes ago               272MB  Imported from -&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between creating an image from a container with these two commands:&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container commit&amp;lt;/code&amp;gt; - preserves the parent image's layers and metadata (ENTRYPOINT, CMD, ENV, history)&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container export&amp;lt;/code&amp;gt; - dumps only the flattened filesystem, so the re-imported image has a single layer and no metadata, which usually makes it smaller&lt;br /&gt;
&lt;br /&gt;
== Tag images ==&lt;br /&gt;
Tags are usually used to give an official image a new name before modifying it. This lets you create a new image, run a container from the tag, and delete the original image without affecting the new image or any container started from the newly tagged image.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image tag #long version&lt;br /&gt;
docker tag centos:6 myucentos:v1 #this will create a duplicate of centos:6 named myucentos:v1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tagging lets you change the repository name and manages references to images located on the local filesystem.&lt;br /&gt;
&lt;br /&gt;
== History of an image ==&lt;br /&gt;
We can display the history of layers that created the image by showing interim images in creation order. It shows only layers present on the local filesystem.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image history myrepo/centos:v2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Stop and delete all containers ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker stop $(docker ps -aq) &amp;amp;&amp;amp; docker rm $(docker ps -aq)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Delete image ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE&lt;br /&gt;
company-repo        0.1.0               f796d7f843cc        About an hour ago   888MB&lt;br /&gt;
&amp;lt;none&amp;gt;              &amp;lt;none&amp;gt;              04fbac2fdf48        3 hours ago         565MB&lt;br /&gt;
ubuntu              16.04               7aa3602ab41e        3 weeks ago         115MB&lt;br /&gt;
&lt;br /&gt;
# Delete image&lt;br /&gt;
$ docker rmi company-repo:0.1.0&lt;br /&gt;
Untagged: company-repo:0.1.0&lt;br /&gt;
Deleted: sha256:e5cca6a080a5c65eacff98e1b17eeb7be02651849b431b46b074899c088bd42a&lt;br /&gt;
..&lt;br /&gt;
Deleted: sha256:bc7cda232a2319578324aae620c4537938743e46081955c4dd0743a89e9e8183&lt;br /&gt;
&lt;br /&gt;
# Prune image - delete dangling (temp/interim) images. &lt;br /&gt;
# These are not associated with end-product image or containers.&lt;br /&gt;
docker image prune&lt;br /&gt;
docker image prune -a #remove all images not associated with any container &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cleaning up space by removing docker objects ==&lt;br /&gt;
This applies to both standalone Docker and Swarm systems.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker system df     #show disk usage&lt;br /&gt;
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE&lt;br /&gt;
Images              1                   0                   131.7MB             131.7MB (100%)&lt;br /&gt;
Containers          0                   0                   0B                  0B&lt;br /&gt;
Local Volumes       0                   0                   0B                  0B&lt;br /&gt;
Build Cache         0                   0                   0B                  0B&lt;br /&gt;
&lt;br /&gt;
docker network ls #note all networks below are system created, so won't get removed&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
452b1c428209        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
&lt;br /&gt;
docker system prune #removes objects created by a user only, on the current node only&lt;br /&gt;
                    #add --volumes to remove them as well&lt;br /&gt;
WARNING! This will remove:&lt;br /&gt;
        - all stopped containers&lt;br /&gt;
        - all networks not used by at least one container&lt;br /&gt;
        - all dangling images&lt;br /&gt;
        - all dangling build cache&lt;br /&gt;
Are you sure you want to continue? [y/N]&lt;br /&gt;
&lt;br /&gt;
docker system prune -a --volumes #remove all&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Docker Volumes ==&lt;br /&gt;
Docker's 'copy-on-write' philosophy drives both performance and efficiency. Only the top layer is writable, and it stores a delta of the underlying layers.&lt;br /&gt;
&lt;br /&gt;
Volumes can be mounted to your container instances from your underlying host systems.&lt;br /&gt;
&lt;br /&gt;
''_data'' volumes bypass the storage driver because they represent a file/directory on the host filesystem (&amp;lt;code&amp;gt;/var/lib/docker&amp;lt;/code&amp;gt;). As a result, their contents are not affected when a container is removed.&lt;br /&gt;
&lt;br /&gt;
Volumes are data mounts created on the host in the &amp;lt;code&amp;gt;/var/lib/docker/volumes/&amp;lt;/code&amp;gt; directory and referenced by name in a Dockerfile.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker volume ls                   #list volumes created by VOLUME directive in a Dockerfile&lt;br /&gt;
sudo tree /var/lib/docker/volumes/ #list volumes on host-side&lt;br /&gt;
docker volume create  my-vol-1&lt;br /&gt;
docker volume inspect my-vol-1&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;CreatedAt&amp;quot;: &amp;quot;2019-01-17T08:47:01Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;local&amp;quot;,&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Mountpoint&amp;quot;: &amp;quot;/var/lib/docker/volumes/my-vol-1/_data&amp;quot;,&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;my-vol-1&amp;quot;,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;local&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using volumes with Swarm services &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run  -d --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount httpd            #container&lt;br /&gt;
docker service create --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #swarm service&lt;br /&gt;
# --volume|-v is not supported with services; use --mount, which will create the volume on swarm nodes when needed,&lt;br /&gt;
# but it will not replicate files&lt;br /&gt;
&lt;br /&gt;
docker exec -it web1 /bin/bash #connect to the container&lt;br /&gt;
root@c123:/ echo &amp;quot;Created when connected to container: volume-web1&amp;quot; &amp;gt; /internal-mount/local.txt; exit&lt;br /&gt;
&lt;br /&gt;
# prove the file is on a host filesystem created volume&lt;br /&gt;
user@dockerhost$ cat /var/lib/docker/volumes/my-vol-1/_data/local.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Host storage mount&lt;br /&gt;
A bind mount binds a host filesystem directory to a container directory. Unlike mounting a volume, it does not require a mount point and a Docker-managed volume on the host.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir /home/user/web1&lt;br /&gt;
echo &amp;quot;web1 index&amp;quot; &amp;gt; /home/user/web1/index.html&lt;br /&gt;
docker container run -d --name testweb -p 80:80 --mount type=bind,source=/home/user/web1,target=/usr/local/apache2/htdocs httpd&lt;br /&gt;
curl http://localhost:80&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Removing a service does not remove the volume; you must delete the volume itself, in which case it is removed from all swarm nodes.&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
=== Container Network Model ===&lt;br /&gt;
It is a network implementation concept built on multiple private networks overlaid across multiple hosts and managed by IPAM, the mechanism that keeps track of and provisions addresses.&lt;br /&gt;
&lt;br /&gt;
The 3 main components:&lt;br /&gt;
* sandbox - contains the configuration of a container's network stack, incl. management of interfaces, routing and DNS. An implementation of a Sandbox could be e.g. a Linux network namespace. A Sandbox may contain many endpoints from multiple networks.&lt;br /&gt;
* endpoint - joins a Sandbox to a Network. Interfaces, switches, ports, etc.; an endpoint belongs to only one network at a time. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability.&lt;br /&gt;
* network - a collection of endpoints that can communicate directly (bridges, VLANs, etc.) and can consist of one to many endpoints&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Container-network-model.png|none||left|Container Network Model]]&lt;br /&gt;
&lt;br /&gt;
;IPAM (IP Address Management)&lt;br /&gt;
Managing addresses across multiple hosts on separate physical networks, while providing external routing to the underlying swarm networks, is ''the IPAM problem'' for Docker. Depending on the network driver choice, IPAM is handled at different layers in the stack. ''Network drivers'' enable IPAM through DHCP or plugin drivers, so complex implementations that would normally produce overlapping addresses are supported.&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://success.docker.com/article/networking Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks]&lt;br /&gt;
&lt;br /&gt;
=== Publish exposed container/service ports ===&lt;br /&gt;
;Publishing modes&lt;br /&gt;
;host: set using &amp;lt;code&amp;gt;--publish mode=host,published=8080,target=80&amp;lt;/code&amp;gt;, makes ports available only on the underlying host where a service task is running; it defeats the ''routing mesh'', so the user is responsible for routing&lt;br /&gt;
;ingress: the default; implements the ''routing mesh'', which makes all published ports available on every host in the swarm cluster regardless of whether a service replica is running on it&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&lt;br /&gt;
# Publish port&lt;br /&gt;
                                          host  :  container&lt;br /&gt;
                                             \  :  /&lt;br /&gt;
docker container run -d --name web1 --publish 81:80 httpd&lt;br /&gt;
# --publish | -p :- publish to host exposed container port&lt;br /&gt;
# 81             :- port on a host, can use a range eg. 81-85, so a free port from the range will be used&lt;br /&gt;
# 80             :- exposed port on a container&lt;br /&gt;
&lt;br /&gt;
ss -lnt&lt;br /&gt;
State       Recv-Q Send-Q Local Address:Port Peer Address:Port&lt;br /&gt;
LISTEN      0      100        127.0.0.1:25              *:*&lt;br /&gt;
LISTEN      0      128                *:22              *:*&lt;br /&gt;
LISTEN      0      100              ::1:25             :::*&lt;br /&gt;
LISTEN      0      128               :::81             :::*&lt;br /&gt;
LISTEN      0      128               :::22             :::*&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name testweb1 --publish-all httpd&lt;br /&gt;
# --publish-all | -P :- publish all container exposed ports to random ports &amp;gt;32768&lt;br /&gt;
CONTAINER ID IMAGE COMMAND              CREATED STATUS PORTS                   NAMES&lt;br /&gt;
c63efe9cbb94 httpd &amp;quot;httpd-foreground&amp;quot;   2 sec.. Up 1 s 80/tcp                  testweb  #port exposed but not published&lt;br /&gt;
cb0711134eb5 httpd &amp;quot;httpd-foreground&amp;quot;   4 sec.. Up 2 s 0.0.0.0:32769-&amp;gt;80/tcp   testweb1 #port exposed and published to host:32769&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Network drivers ===&lt;br /&gt;
The default network on a single Docker host is the ''bridge'' network.&lt;br /&gt;
&lt;br /&gt;
;List of Native (part of Docker Engine) Network Drivers:&lt;br /&gt;
;bridge: default on stand-alone hosts; it's a private network internal to the host system, all containers on this host using the bridge network can communicate, external access is granted by port exposure or static routes added with the host as the gateway for that network&lt;br /&gt;
;none: used when a container does not need any networking; it can still be accessed from the host using the &amp;lt;code&amp;gt;docker attach [containerID]&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker exec -it [containerID]&amp;lt;/code&amp;gt; commands&lt;br /&gt;
;host: aka ''Host Only Networking'', only accessible via the underlying host; access to services can be provided by exposing ports to the host system&lt;br /&gt;
;overlay: swarm-scope driver, allows communication between all Docker daemons in a cluster, self-extending if needed, managed by the Swarm manager; it is the default mode of Swarm communication&lt;br /&gt;
;ingress: extended network across all nodes in the cluster; a special overlay network that load balances network traffic amongst a given service's working nodes; maintains a list of all IP addresses from nodes that participate in that service (using the IPVS module) and, when a request comes in, routes it to one of them for the indicated service; provides the ''routing mesh'' that allows services to be exposed to the external network without having a replica running on every node in the Swarm&lt;br /&gt;
;docker gateway bridge: special bridge network that allows overlay networks (incl. ingress) to access an individual Docker daemon's physical network; every container run within a service is connected to the local Docker daemon's host network; automatically created when Docker initialises or joins a swarm via the &amp;lt;code&amp;gt;docker swarm init&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker swarm join&amp;lt;/code&amp;gt; commands.&lt;br /&gt;
&lt;br /&gt;
;Docker interfaces&lt;br /&gt;
* &amp;lt;code&amp;gt;docker0&amp;lt;/code&amp;gt; - adapter is installed by default during Docker setup and will be assigned an address range that will determine the local host IPs available to the containers running on it&lt;br /&gt;
&lt;br /&gt;
;Default bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network ls #default networks list&lt;br /&gt;
NETWORK ID    NAME                DRIVER   SCOPE&lt;br /&gt;
130833da0920  bridge              bridge   local&lt;br /&gt;
528db1bf80f1  docker_gwbridge     bridge   local&lt;br /&gt;
832c8c6d73a5  host                host     local&lt;br /&gt;
t8jxy5vsy5on  ingress             overlay  swarm  #'ingress' special network 1 per cluster&lt;br /&gt;
815a9c2c4005  none                null     local&lt;br /&gt;
&lt;br /&gt;
docker network inspect bridge #bridge is a default network containers are deployed to&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name web1 -p 8080:80 httpd #expose container port :80 -&amp;gt; :8080 on the docker host&lt;br /&gt;
docker container inspect web1 | grep IPAdd&lt;br /&gt;
IPAddr=$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1) #get container ip&lt;br /&gt;
curl http://$IPAddr&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=bridge --subnet=192.168.1.0/24 --opt &amp;quot;com.docker.network.driver.mtu&amp;quot;=1501 deviceeth0&lt;br /&gt;
&lt;br /&gt;
docker network ls&lt;br /&gt;
docker network inspect deviceeth0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create overlay network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
docker network ls &lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
130833da0920        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect network&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
docker network inspect overlay0&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;overlay0&amp;quot;,&lt;br /&gt;
        &amp;quot;Id&amp;quot;: &amp;quot;2x6bq1czzdc102sl6ge7gpm3w&amp;quot;,&lt;br /&gt;
        &amp;quot;Created&amp;quot;: &amp;quot;2019-01-19T11:24:02.146339562Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;swarm&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;overlay&amp;quot;,&lt;br /&gt;
        &amp;quot;EnableIPv6&amp;quot;: false,&lt;br /&gt;
        &amp;quot;IPAM&amp;quot;: {&lt;br /&gt;
            &amp;quot;Driver&amp;quot;: &amp;quot;default&amp;quot;,&lt;br /&gt;
            &amp;quot;Options&amp;quot;: null,&lt;br /&gt;
            &amp;quot;Config&amp;quot;: [&lt;br /&gt;
                {&lt;br /&gt;
                    &amp;quot;Subnet&amp;quot;: &amp;quot;192.168.1.0/24&amp;quot;,&lt;br /&gt;
                    &amp;quot;Gateway&amp;quot;: &amp;quot;192.168.1.1&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            ]&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Internal&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Attachable&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Ingress&amp;quot;: false,&lt;br /&gt;
        &amp;quot;ConfigFrom&amp;quot;: {&lt;br /&gt;
            &amp;quot;Network&amp;quot;: &amp;quot;&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;ConfigOnly&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Containers&amp;quot;: null,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {&lt;br /&gt;
            &amp;quot;com.docker.network.driver.overlay.vxlanid_list&amp;quot;: &amp;quot;4097&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: null&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect container network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container inspect testweb --format {{.HostConfig.NetworkMode}}&lt;br /&gt;
overlay0&lt;br /&gt;
docker container inspect testweb --format {{.NetworkSettings.Networks.overlay0.IPAddress}}&lt;br /&gt;
192.168.1.3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A container can be connected to or disconnected from a network while it is running; connecting does not disconnect it from its current network.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network connect --ip=192.168.1.10 deviceeth0 web1&lt;br /&gt;
docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1&lt;br /&gt;
IPAddr=$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.deviceeth0.IPAddress}}&amp;quot; web1)&lt;br /&gt;
curl http://$IPAddr&lt;br /&gt;
docker network disconnect deviceeth0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Overlay network in Swarm cluster ===&lt;br /&gt;
An overlay network can be created/removed/updated like any other Docker object. It allows inter-service (container) communication, where the &amp;lt;code&amp;gt;--gateway&amp;lt;/code&amp;gt; IP address is used to reach outside, e.g. the Internet or the host network. An &amp;lt;code&amp;gt;overlay&amp;lt;/code&amp;gt; network created on the manager host only gets created on a worker node when a service that references it is scheduled there. See below.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
swarm-mgr$ docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
swarm-mgr$ docker service create --name web1 -p 8080:80 --network=overlay0 --replicas 2 httpd&lt;br /&gt;
uvxymzdkcfwvs2oznbnk7nv03&lt;br /&gt;
overall progress: 2 out of 2 tasks &lt;br /&gt;
1/2: running   [==================================================&amp;gt;] &lt;br /&gt;
2/2: running   [==================================================&amp;gt;] &lt;br /&gt;
&lt;br /&gt;
swarm-wkr$ docker network ls&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
ba175ebd2a6f        bridge              bridge              local&lt;br /&gt;
a5848f607d8c        docker_gwbridge     bridge              local&lt;br /&gt;
fccfb9c1fdc3        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
127b10783faa        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&lt;br /&gt;
# remove the network from the service; only affects newly created tasks, not already running ones&lt;br /&gt;
swarm-mgr$ docker service update --network-rm=overlay0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker container run -d --name testweb1 -P --dns=8.8.8.8 \&lt;br /&gt;
                                           --dns=8.8.4.4 \&lt;br /&gt;
                                           --dns-search &amp;quot;mydomain.local&amp;quot; \&lt;br /&gt;
                                           httpd&lt;br /&gt;
# -P :- publish-all exposed ports to random port &amp;gt;32768&lt;br /&gt;
&lt;br /&gt;
docker container exec -it testweb1 /bin/bash -c 'cat /etc/resolv.conf'&lt;br /&gt;
search us-east-2.compute.internal&lt;br /&gt;
nameserver 8.8.8.8&lt;br /&gt;
nameserver 8.8.4.4&lt;br /&gt;
&lt;br /&gt;
# System wide settings, requires docker.service restart&lt;br /&gt;
cat &amp;gt; /etc/docker/daemon.json &amp;lt;&amp;lt;EOF&lt;br /&gt;
{ &lt;br /&gt;
  &amp;quot;dns&amp;quot;: [&amp;quot;8.8.8.8&amp;quot;, &amp;quot;8.8.4.4&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
sudo systemctl restart docker.service #required&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
== Lint - best practices ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ docker run --rm -i hadolint/hadolint &amp;lt; Dockerfile&lt;br /&gt;
/dev/stdin:9:16 unexpected newline expecting &amp;quot;\ &amp;quot;, '=', a space followed by the value for the variable 'MAC_ADDRESS', or at least one space&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Default project ==&lt;br /&gt;
As good practice, all Docker files should be source controlled. The basic self-explanatory structure can look like the one below; the skeleton can be created with the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir APROJECT &amp;amp;&amp;amp; d=$_; touch $d/{build.sh,run.sh,Dockerfile,README.md,VERSION};mkdir $d/assets; touch $_/{entrypoint.sh,install.sh}&lt;br /&gt;
&lt;br /&gt;
└── APROJECT&lt;br /&gt;
    ├── assets&lt;br /&gt;
    │   ├── entrypoint.sh&lt;br /&gt;
    │   └── install.sh&lt;br /&gt;
    ├── build.sh&lt;br /&gt;
    ├── Dockerfile&lt;br /&gt;
    ├── README.md&lt;br /&gt;
    └── VERSION&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
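The &lt;code&gt;$_&lt;/code&gt; shorthand in the one-liner above (last argument of the previous command) can be fragile; an equivalent, more explicit version (using a temporary directory for the demo):

```shell
# Create the same project skeleton with explicit paths (demo uses a temp dir)
root=$(mktemp -d)
d="$root/APROJECT"
mkdir -p "$d/assets"
touch "$d"/{build.sh,run.sh,Dockerfile,README.md,VERSION}
touch "$d"/assets/{entrypoint.sh,install.sh}
ls -R "$d"
```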
&lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
A &amp;lt;code&amp;gt;Dockerfile&amp;lt;/code&amp;gt; is simply a build file.&lt;br /&gt;
=== Semantics ===&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt;: Container config: what to start when this image is run.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cmd&amp;lt;/code&amp;gt;: Docker allows you to define an Entrypoint and Cmd which you can mix and match in a Dockerfile. Entrypoint is the executable, and Cmd are the arguments passed to the Entrypoint. The Dockerfile schema is quite lenient and allows users to set Cmd without Entrypoint, which means that the first argument in Cmd will be the executable to run.&lt;br /&gt;
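How Docker combines the two can be simulated in plain shell (values hypothetical): the Entrypoint is always executed, and Cmd supplies default arguments that are replaced by any arguments passed to &lt;code&gt;docker run&lt;/code&gt;:

```shell
# Simulate ENTRYPOINT ["/bin/echo"] with CMD ["hello", "world"]
entrypoint=(/bin/echo)
cmd=(hello world)

# docker run image            -> ENTRYPOINT + default CMD
"${entrypoint[@]}" "${cmd[@]}"      # prints: hello world

# docker run image custom arg -> the run arguments replace CMD
"${entrypoint[@]}" custom arg       # prints: custom arg
```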
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
RUN addgroup --gid 1001 jenkins -q&lt;br /&gt;
RUN adduser  --gid 1001 --home /tank --disabled-password --gecos '' --uid 1001 jenkins&lt;br /&gt;
# --gid add user to group 1001&lt;br /&gt;
# --gecos parameter is used to set the additional information. In this case it is just empty.&lt;br /&gt;
# --disabled-password it's like  --disabled-login,  but  logins  are still possible (for example using SSH RSA keys) but not using password authentication&lt;br /&gt;
USER jenkins:jenkins #sets user for next RUN, CMD and ENTRYPOINT command&lt;br /&gt;
WORKDIR /tank #changes cwd for next RUN, CMD, ENTRYPOINT, COPY and ADD&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Multiple stage build ===&lt;br /&gt;
Introduced in Docker 17.06, multi-stage builds allow multiple &amp;lt;code&amp;gt;FROM&amp;lt;/code&amp;gt; statements in a single Dockerfile.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
FROM microsoft/aspnetcore-build AS build-env&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
&lt;br /&gt;
# copy csproj and restore as distinct layers&lt;br /&gt;
COPY *.csproj ./&lt;br /&gt;
RUN dotnet restore&lt;br /&gt;
&lt;br /&gt;
# copy everything else and build&lt;br /&gt;
COPY . ./&lt;br /&gt;
RUN dotnet publish -c Release -o output&lt;br /&gt;
&lt;br /&gt;
# build runtime image&lt;br /&gt;
FROM microsoft/aspnetcore&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
COPY --from=build-env /app/output .   #multi stage: copy files from previous container [as build-env]&lt;br /&gt;
ENTRYPOINT [&amp;quot;dotnet&amp;quot;, &amp;quot;LetsKube.dll&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Squash an image =&lt;br /&gt;
Docker uses a union filesystem, which allows multiple layers to share common content and to override changes by applying them in the top layer.&lt;br /&gt;
There is no official way to ''flatten'' layers into a single storage layer or minimise an image size (as of 2017). Below is just a practical approach.&lt;br /&gt;
# Start container from an image&lt;br /&gt;
# Export a container to &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; with all it's file systems&lt;br /&gt;
# Import container with new image name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the process completes and the original image gets deleted, the &amp;lt;code&amp;gt;docker image history&amp;lt;/code&amp;gt; command will show only one layer for the new image. The image will often be smaller.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# run a container from an image&lt;br /&gt;
docker run myweb:v3&lt;br /&gt;
# export container to .tar&lt;br /&gt;
docker export &amp;lt;contr_name&amp;gt; &amp;gt; myweb.v3.tar&lt;br /&gt;
docker save   &amp;lt;image_id&amp;gt;   &amp;gt; image.tar #preserves layers and tags&lt;br /&gt;
docker import myweb.v3.tar   myweb:v4   #flattened, single layer&lt;br /&gt;
docker load --input image.tar           #counterpart of 'save'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
*[https://github.com/jwilder/docker-squash docker-squash] GitHub&lt;br /&gt;
&lt;br /&gt;
= Gracefully stop / kill a container =&lt;br /&gt;
''all below are only notes''&lt;br /&gt;
&lt;br /&gt;
Trap Ctrl-C in a wrapper script, then kill/rm the container. Relevant &amp;lt;code&amp;gt;docker run&amp;lt;/code&amp;gt; options:&lt;br /&gt;
*&amp;lt;code&amp;gt;--init&amp;lt;/code&amp;gt; - runs a minimal init process as PID 1 that forwards signals to the application&lt;br /&gt;
*&amp;lt;code&amp;gt;--sig-proxy&amp;lt;/code&amp;gt; - proxies received signals to the process; defaults to &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt; but only works when &amp;lt;code&amp;gt;--tty=false&amp;lt;/code&amp;gt;&lt;br /&gt;
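A minimal sketch of the trap mechanics the notes above refer to. The container name "myapp" and the cleanup commands are hypothetical, and the docker call is guarded so the sketch also runs on hosts without docker; the final kill simulates pressing Ctrl-C.

```shell
#!/bin/sh
# Sketch: trap Ctrl-C (SIGINT) and run cleanup before exiting.
# "myapp" is a hypothetical container name, not from the notes above.
CONTAINER=myapp
CLEANED=0

cleanup() {
  echo "caught signal, cleaning up $CONTAINER"
  # guarded so the sketch also runs where docker is absent
  command -v docker >/dev/null 2>&1 && docker rm -f "$CONTAINER" >/dev/null 2>&1
  CLEANED=1   # marker showing the cleanup path ran
}
trap cleanup INT TERM

# simulate pressing Ctrl-C against this shell
kill -INT $$
echo "cleaned=$CLEANED"
```

In a real wrapper the script would start the container first and the trap would stop/remove it when the user interrupts.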
&lt;br /&gt;
= Proxy =&lt;br /&gt;
If you are behind a corporate proxy, configure it in the Docker client's &amp;lt;code&amp;gt;~/.docker/config.json&amp;lt;/code&amp;gt; config file. This requires Docker &lt;br /&gt;
17.07 or newer.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;proxies&amp;quot;:&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;default&amp;quot;:&lt;br /&gt;
   {&lt;br /&gt;
     &amp;quot;httpProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;httpsProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;noProxy&amp;quot;: &amp;quot;localhost,127.0.0.1,*.test.example.com,.example2.com&amp;quot;&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
More details can be found [https://docs.docker.com/network/proxy/#configure-the-docker-client here].&lt;br /&gt;
&lt;br /&gt;
== Insecure proxy ==&lt;br /&gt;
These settings can be added in several places; the options below are ordered from the most recent practice to the oldest.&lt;br /&gt;
;docker-ce 18.6&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;localhost:443&amp;quot;,&amp;quot;10.0.0.0/8&amp;quot;, &amp;quot;172.16.0.0/12&amp;quot;, &amp;quot;192.168.0.0/16&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
sudo systemctl show docker | grep Env&lt;br /&gt;
docker info #check Insecure Registries&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Using an environment file, prior to version 18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo vi /etc/default/docker&lt;br /&gt;
DOCKER_HOME='--graph=/tank/docker'&lt;br /&gt;
DOCKER_GROUP='--group=docker'&lt;br /&gt;
DOCKER_LOG_DRIVER='--log-driver=json-file'&lt;br /&gt;
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'&lt;br /&gt;
DOCKER_ICC='--icc=false'&lt;br /&gt;
DOCKER_IPMASQ='--ip-masq=true'&lt;br /&gt;
DOCKER_IPTABLES='--iptables=true'&lt;br /&gt;
DOCKER_IPFORWARD='--ip-forward=true'&lt;br /&gt;
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'&lt;br /&gt;
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'&lt;br /&gt;
DOCKER_OPTS=&amp;quot;${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Docker Application Container Engine&lt;br /&gt;
Documentation=https://docs.docker.com&lt;br /&gt;
After=network-online.target docker.socket firewalld.service&lt;br /&gt;
Wants=network-online.target&lt;br /&gt;
Requires=docker.socket&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
Type=notify&lt;br /&gt;
# the default is not to use systemd for cgroups because the delegate issues still&lt;br /&gt;
# exists and systemd currently does not support the cgroup feature set required&lt;br /&gt;
# for containers run by docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd -H fd://&lt;br /&gt;
ExecReload=/bin/kill -s HUP $MAINPID&lt;br /&gt;
LimitNOFILE=1048576&lt;br /&gt;
# Having non-zero Limit*s causes performance problems due to accounting overhead&lt;br /&gt;
# in the kernel. We recommend using cgroups to do container-local accounting.&lt;br /&gt;
LimitNPROC=infinity&lt;br /&gt;
LimitCORE=infinity&lt;br /&gt;
# Uncomment TasksMax if your systemd version supports it.&lt;br /&gt;
# Only systemd 226 and above support this version.&lt;br /&gt;
TasksMax=infinity&lt;br /&gt;
TimeoutStartSec=0&lt;br /&gt;
# set delegate yes so that systemd does not reset the cgroups of docker containers&lt;br /&gt;
Delegate=yes&lt;br /&gt;
# kill only the docker process, not all processes in the cgroup&lt;br /&gt;
KillMode=process&lt;br /&gt;
# restart the docker process if it exits prematurely&lt;br /&gt;
Restart=on-failure&lt;br /&gt;
StartLimitBurst=3&lt;br /&gt;
StartLimitInterval=60s&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run docker without sudo ==&lt;br /&gt;
Adding a user to the &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt; group should be sufficient. However, on AppArmor, SELinux or a filesystem with ACLs enabled, additional permissions might be required to access the &amp;lt;tt&amp;gt;socket file&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Add the current user to the docker group (log out and back in afterwards)&lt;br /&gt;
$ sudo usermod -aG docker $USER&lt;br /&gt;
&lt;br /&gt;
$ ll /var/run/docker.sock&lt;br /&gt;
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=&lt;br /&gt;
# ACL&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&lt;br /&gt;
# Grant ACL to the jenkins user&lt;br /&gt;
$ sudo setfacl -m user:jenkins:rw /var/run/docker.sock&lt;br /&gt;
&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
user:jenkins:rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
mask::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;References&lt;br /&gt;
* [https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo how-can-i-use-docker-without-sudo]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.weave.works/blog/my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies/ my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies]&lt;br /&gt;
*[https://github.com/moby/moby/pull/12228 PID1 in container aka tinit]&lt;br /&gt;
*[https://container-solutions.com/understanding-volumes-docker/ understanding-volumes-docker]&lt;br /&gt;
&lt;br /&gt;
= Docker Enterprise Edition =&lt;br /&gt;
*[https://success.docker.com/article/compatibility-matrix Compatibility Matrix]&lt;br /&gt;
Components:&lt;br /&gt;
* Docker daemon (fka &amp;quot;Engine&amp;quot;)&lt;br /&gt;
* Docker Trusted Registry (DTR)&lt;br /&gt;
* Docker Universal Control Plane (UCP)&lt;br /&gt;
&lt;br /&gt;
= Docker Swarm =&lt;br /&gt;
== Swarm - sizing ==&lt;br /&gt;
;Universal Control Plane (UCP)&lt;br /&gt;
This is only available in the Enterprise Edition.&lt;br /&gt;
* ports need to be open between managers and workers, in/out&lt;br /&gt;
&lt;br /&gt;
Hardware requirements:&lt;br /&gt;
* 8 GB RAM for managers or DTR (Docker Trusted Registry) nodes&lt;br /&gt;
* 4 GB RAM for workers&lt;br /&gt;
* 3 GB free disk space&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Performance considerations (timeouts)&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Component                              Timeout(ms)  Configurable&lt;br /&gt;
Raft consensus between manager nodes   3000         no&lt;br /&gt;
Gossip protocol for overlay networking 5000         no&lt;br /&gt;
etcd                                   500          yes&lt;br /&gt;
RethinkDB                              10000        no&lt;br /&gt;
Stand-alone swarm                      90000        no&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker EE compatibility&lt;br /&gt;
* Docker Engine 17.06+&lt;br /&gt;
* DTR 2.3+&lt;br /&gt;
* UCP 2.2+&lt;br /&gt;
== Swarm with single host manager ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Initialise Swarm&lt;br /&gt;
docker swarm init --advertise-addr 172.31.16.10 # the output contains a SWMTKN- join token&lt;br /&gt;
To add a worker to this swarm, run the following command:&lt;br /&gt;
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.&lt;br /&gt;
&lt;br /&gt;
# Join tokens&lt;br /&gt;
docker swarm join-token manager #display manager join-token, run on manager&lt;br /&gt;
docker swarm join-token worker  #display worker  join-token, run on manager&lt;br /&gt;
&lt;br /&gt;
# Join worker, run new-worker-node&lt;br /&gt;
# token format: SWMTKN-1-&amp;lt;swarm cluster id&amp;gt;-&amp;lt;role secret (mgr/wkr)&amp;gt;, followed by the manager &amp;lt;ip&amp;gt;:&amp;lt;port&amp;gt;&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
&lt;br /&gt;
# Join another manager, run on new-manager-node&lt;br /&gt;
docker swarm join-token manager # run on the primary manager if you wish to add another manager&lt;br /&gt;
# the output contains a token; the part up to the last dash identifies the swarm cluster, the final part is the role secret&lt;br /&gt;
&lt;br /&gt;
# join to swarm (cluster), token will identify a role in the cluster manager or worker&lt;br /&gt;
docker swarm join --token SWMTKN-xxxx&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
This node joined a swarm as a worker.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
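The token structure described above can be illustrated with plain shell string splitting (the token below is the sample from this page, not a live secret):

```shell
#!/bin/sh
# A join token has the shape SWMTKN-<version>-<cluster id>-<role secret>.
# Sample token reused from the examples above (not a live secret):
TOKEN="SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo"

# Field 3 identifies the swarm cluster; field 4 is the per-role secret
# (managers and workers get different secrets for the same cluster).
cluster_id=$(echo "$TOKEN" | cut -d- -f3)
role_secret=$(echo "$TOKEN" | cut -d- -f4)

echo "cluster: $cluster_id"
echo "role:    $role_secret"
```

This is why a worker token and a manager token for the same swarm share their third field but differ in the last one.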
&lt;br /&gt;
&lt;br /&gt;
Check Swarm status&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls&lt;br /&gt;
ID                            HOSTNAME                          STATUS   AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active       Leader         18.09.0&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0&lt;br /&gt;
&lt;br /&gt;
docker system info | grep -A 7 Swarm&lt;br /&gt;
Swarm: active&lt;br /&gt;
 NodeID: 641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
 Is Manager: true&lt;br /&gt;
 ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq&lt;br /&gt;
 Managers: 1&lt;br /&gt;
 Nodes: 2&lt;br /&gt;
 Default Address Pool: 10.0.0.0/8  &lt;br /&gt;
 SubnetSize: 24&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo systemctl disable firewalld &amp;amp;&amp;amp; sudo systemctl stop firewalld # CentOS&lt;br /&gt;
printf &amp;quot;\n10.0.0.11 mgr01\n10.0.0.12 node01\n&amp;quot; | sudo tee -a /etc/hosts # add the nodes to the hosts file&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Swarm cluster ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --availability drain [node] #drain services off manager-only nodes&lt;br /&gt;
docker service update --force [service_name]  #force re-balance services across cluster&lt;br /&gt;
&lt;br /&gt;
docker swarm leave #node leaves a cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Locking / unlocking swarm cluster ==&lt;br /&gt;
The Raft logs used by swarm managers are encrypted on disk, but access to a node also gives access to the keys used to encrypt them. Auto-lock further protects the cluster by requiring an unlock key whenever a manager restarts.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm init   --auto-lock=true #initialise a new swarm with auto-lock enabled&lt;br /&gt;
docker swarm update --auto-lock=true #update the current swarm&lt;br /&gt;
# both will print an unlock key SWMKEY-xxx&lt;br /&gt;
docker swarm unlock #it'll ask for the unlock token&lt;br /&gt;
docker swarm update --auto-lock=false #disable key locking&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have access to a manager, you can always retrieve the unlock key using:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Key management&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key --rotate #could be in a cron&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
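The "could be in a cron" note above might look like the following crontab fragment (the file path, schedule and log location are assumptions for illustration):

```
# /etc/cron.d/swarm-unlock-key (hypothetical)
# rotate the swarm unlock key every Sunday at 03:00; capture the new key
0 3 * * 0 root docker swarm unlock-key --rotate >> /root/swarm-unlock-key.log 2>&1
```

Whoever restarts a manager needs the current key, so the rotation output should be stored somewhere safe (e.g. a password manager), not only in a log file.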
== Backup and restore swarm cluster ==&lt;br /&gt;
This process describes how to back up the whole cluster configuration so it can be restored on a new set of servers.&lt;br /&gt;
&lt;br /&gt;
Create docker apps running across swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name bkweb --publish 80:80 --replicas 2 httpd&lt;br /&gt;
$ docker service ls&lt;br /&gt;
ID           NAME      MODE          REPLICAS  IMAGE         PORTS&lt;br /&gt;
q9jki3n2hffm bkweb     replicated    2/2       httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
$ docker service ps bkweb #note containers run on 2 different nodes&lt;br /&gt;
ID           NAME      IMAGE         NODE                      DESIRED STATE CURRENT STATE          &lt;br /&gt;
j964jm1lq3q5 bkweb.1   httpd:latest  server2c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
jpjx3mk7hhm0 bkweb.2   httpd:latest  server1c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Backup state files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i&lt;br /&gt;
cd /var/lib/docker/swarm&lt;br /&gt;
cat docker-state.json #contains info about managers, workers, certificates, etc..&lt;br /&gt;
cat state.json&lt;br /&gt;
sudo systemctl stop docker.service&lt;br /&gt;
&lt;br /&gt;
# Backup swarm cluster, this file can be then used to recover whole swarm cluster on another set of servers&lt;br /&gt;
sudo tar -czvf swarm.tar.gz /var/lib/docker/swarm/&lt;br /&gt;
&lt;br /&gt;
#the running docker containers should be brought up as they were before stopping the service&lt;br /&gt;
systemctl start docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recover using the swarm.tar.gz backup&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# scp swarm.tar.gz to the recovery node, e.g. a node with a fresh docker installation&lt;br /&gt;
sudo rm -rf /var/lib/docker/swarm&lt;br /&gt;
sudo systemctl stop docker&lt;br /&gt;
&lt;br /&gt;
# Option 1: untar directly; the archive stores paths relative to /&lt;br /&gt;
sudo tar -xzvf swarm.tar.gz -C /&lt;br /&gt;
&lt;br /&gt;
# Option 2: untar locally, then copy recursively (-f overwrites existing files)&lt;br /&gt;
tar -xzvf swarm.tar.gz   # extracts to ./var/lib/docker/swarm&lt;br /&gt;
sudo cp -rf var/lib/docker/swarm /var/lib/docker/&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start docker&lt;br /&gt;
docker swarm init --force-new-cluster # produces exactly the same token&lt;br /&gt;
# you should join all required nodes to new manager ip&lt;br /&gt;
# scale services down to 1, then scale up so get distributed to other nodes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run containers as services ==&lt;br /&gt;
A standalone Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing etc., is a way to scale with ease. The service uses e.g. mesh routing to handle access to the containers.&lt;br /&gt;
&lt;br /&gt;
Swarm node setup: 1 manager and 2 workers&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready   Active Leader       18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready   Active              18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready   Active              18.09.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create and run a service&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull httpd&lt;br /&gt;
docker service create --name serviceweb --publish 80:80 httpd&lt;br /&gt;
# --publish|-p exposes the port on every node in the cluster&lt;br /&gt;
&lt;br /&gt;
docker service ls&lt;br /&gt;
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS&lt;br /&gt;
vt0ftkifbd84        serviceweb          replicated          1/1                 httpd:latest        *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node&lt;br /&gt;
ID           NAME         IMAGE        NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS&lt;br /&gt;
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running       Running about                  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running as a service, even if a container runs on a single node (replicas=1), it can be accessed from any of the swarm nodes. This works because the published port is exposed on the routing-mesh network that spans every node.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A service update can change limits, volumes, environment variables and more:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service scale serviceweb=3             #or&lt;br /&gt;
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06&lt;br /&gt;
serviceweb&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
&lt;br /&gt;
# Limits (maximum usage) and reservations (guaranteed allocation); updating them restarts the service's containers&lt;br /&gt;
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Templating service names ==&lt;br /&gt;
Templating allows controlling e.g. the container hostname across a cluster. Useful in big clusters, where the hostname makes it easier to identify a service and the node it runs on.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name web --hostname=&amp;quot;{{.Node.ID}}-{{.Service.Name}}&amp;quot; httpd&lt;br /&gt;
docker service ps --no-trunc web&lt;br /&gt;
docker inspect --format=&amp;quot;{{.Config.Hostname}}&amp;quot; web.1.ab10_serviceID_cd&lt;br /&gt;
aa_nodeID_bb-web&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Node labels for task/service placement ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready  Active              Leader              18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
&lt;br /&gt;
docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty&lt;br /&gt;
ID:                     641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
Hostname:               swarm-mgr-1.example.com &lt;br /&gt;
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc&lt;br /&gt;
Status:&lt;br /&gt;
 State:                 Ready&lt;br /&gt;
 Availability:          Active&lt;br /&gt;
 Address:               172.31.10.10&lt;br /&gt;
Manager Status:&lt;br /&gt;
 Address:               172.31.10.10:2377&lt;br /&gt;
 Raft Status:           Reachable&lt;br /&gt;
 Leader:                Yes&lt;br /&gt;
Platform:&lt;br /&gt;
 Operating System:      linux&lt;br /&gt;
 Architecture:          x86_64&lt;br /&gt;
Resources:&lt;br /&gt;
 CPUs:                  2&lt;br /&gt;
 Memory:                3.699GiB&lt;br /&gt;
Plugins:&lt;br /&gt;
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog&lt;br /&gt;
 Network:               bridge, host, macvlan, null, overlay&lt;br /&gt;
 Volume:                local&lt;br /&gt;
Engine Version:         18.09.1&lt;br /&gt;
TLS Info:&lt;br /&gt;
 TrustRoot:&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw&lt;br /&gt;
..&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h&lt;br /&gt;
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply label to a node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels&lt;br /&gt;
ID:                     r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
Labels:&lt;br /&gt;
 - node-env=testnode&lt;br /&gt;
Hostname:               swarm-wkr-2.example.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How to use it: run a service with the &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; option, which pins services to nodes meeting the given criteria; in our case, to nodes where &amp;lt;code&amp;gt;node.labels.node-env == testnode&amp;lt;/code&amp;gt;. Note that all replicas then run on the same node, unlike the default behaviour where they would be distributed across the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname&lt;br /&gt;
zrk15vfdaitc1rvw9wqh2s0ot&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls&lt;br /&gt;
ID           NAME          MODE         REPLICAS IMAGE         PORTS&lt;br /&gt;
zrk15vfdaitc constraints   replicated   3/3      httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
[user@swarm-wkr-2 ~]$ docker service ps constraints&lt;br /&gt;
ID           NAME          IMAGE        NODE                      DESIRED STATE       CURRENT STATE            ERROR               PORTS&lt;br /&gt;
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Scaling services ==&lt;br /&gt;
These commands must be issued on a manager node.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull nginx&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
docker service ps web                  #there is only 1 replica&lt;br /&gt;
docker service update --replicas 3 web #update to 3 replicas&lt;br /&gt;
docker service create --name nginx --publish 5901:80 nginx&lt;br /&gt;
elinks http://swarm-mgr-1.example.com:5901     #the nginx website will be presented&lt;br /&gt;
&lt;br /&gt;
# scale is equivalent to update --replicas command for a single or multiple services&lt;br /&gt;
docker service scale web=3 nginx=3&lt;br /&gt;
docker service ls&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Replicated services vs global services ==&lt;br /&gt;
;Global mode: runs at least one copy of the service on each swarm node; if another node joins, the service converges onto it as well. In global mode you cannot use the &amp;lt;code&amp;gt;update --replicas&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;scale&amp;lt;/code&amp;gt; commands, and it is not possible to change the mode of an existing service.&lt;br /&gt;
;Replicated mode: allows greater control and flexibility over the number of running replicas.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# creates a single service running across whole cluster in replicated mode&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
&lt;br /&gt;
# run in global mode&lt;br /&gt;
docker service create --name web --publish 5901:80 --mode global httpd&lt;br /&gt;
docker service ls #note distinct mode names: global and replicated&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Docker compose and deploy to Swarm =&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo yum install epel-release&lt;br /&gt;
sudo yum install python-pip&lt;br /&gt;
sudo pip install --upgrade pip&lt;br /&gt;
# install docker CE or EE first to avoid Python library conflicts&lt;br /&gt;
sudo pip install docker-compose&lt;br /&gt;
&lt;br /&gt;
# Troubleshooting&lt;br /&gt;
## Err: Cannot uninstall 'requests'. It is a distutils installed project...&lt;br /&gt;
pip install --ignore-installed requests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dockerfile&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;Dockerfile &amp;lt;&amp;lt;EOF&lt;br /&gt;
FROM centos:latest&lt;br /&gt;
RUN yum install -y httpd&lt;br /&gt;
RUN echo &amp;quot;Website1&amp;quot; &amp;gt;&amp;gt; /var/www/html/index.html&lt;br /&gt;
EXPOSE 80&lt;br /&gt;
ENTRYPOINT apachectl -DFOREGROUND&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker compose file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;docker-compose.yml &amp;lt;&amp;lt;EOF&lt;br /&gt;
version: '3'&lt;br /&gt;
services:&lt;br /&gt;
  apiweb1:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    build: .&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;81:80&amp;quot;&lt;br /&gt;
  apiweb2:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;82:80&amp;quot;&lt;br /&gt;
  load-balancer:&lt;br /&gt;
    image: nginx:latest&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;80:80&amp;quot;&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run docker compose, on the current node only&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker-compose up -d&lt;br /&gt;
WARNING: The Docker Engine you're using is running in swarm mode.&lt;br /&gt;
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.&lt;br /&gt;
To deploy your application across the swarm, use `docker stack deploy`.&lt;br /&gt;
Creating compose_apiweb2_1       ... done&lt;br /&gt;
Creating compose_apiweb1_1       ... done&lt;br /&gt;
Creating compose_load-balancer_1 ... done&lt;br /&gt;
&lt;br /&gt;
docker ps&lt;br /&gt;
CONTAINER ID IMAGE        COMMAND                 CREATED  STATUS   PORTS              NAMES&lt;br /&gt;
14f8b6b10c2d nginx:latest &amp;quot;nginx -g 'daemon of…&amp;quot;  2 minutesUp 2 min 0.0.0.0:80-&amp;gt;80/tcp compose_load-balancer_1&lt;br /&gt;
e9b5b37fe4e5 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:81-&amp;gt;80/tcp compose_apiweb1_1&lt;br /&gt;
28ee22a8eae0 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:82-&amp;gt;80/tcp compose_apiweb2_1&lt;br /&gt;
&lt;br /&gt;
# Verify&lt;br /&gt;
curl http://localhost:81&lt;br /&gt;
curl http://localhost:82&lt;br /&gt;
curl http://localhost:80 #nginx&lt;br /&gt;
&lt;br /&gt;
# Prep before deploying the compose file to Swarm; images need to be built beforehand,&lt;br /&gt;
# because docker stack does not support building images&lt;br /&gt;
docker-compose down --volumes #stop and remove the containers, networks and named volumes&lt;br /&gt;
Stopping compose_load-balancer_1 ... done&lt;br /&gt;
Stopping compose_apiweb1_1       ... done&lt;br /&gt;
Stopping compose_apiweb2_1       ... done&lt;br /&gt;
Removing compose_load-balancer_1 ... done&lt;br /&gt;
Removing compose_apiweb1_1       ... done&lt;br /&gt;
Removing compose_apiweb2_1       ... done&lt;br /&gt;
Removing network compose_default&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Deploy compose to Swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name&lt;br /&gt;
Ignoring unsupported options: build&lt;br /&gt;
Creating network customcompose-stack_default&lt;br /&gt;
Creating service customcompose-stack_apiweb1&lt;br /&gt;
Creating service customcompose-stack_apiweb2&lt;br /&gt;
Creating service customcompose-stack_load-balancer&lt;br /&gt;
&lt;br /&gt;
docker stack services customcompose-stack #or&lt;br /&gt;
docker service ls&lt;br /&gt;
ID           NAME                               MODE       REPLICAS IMAGE        PORTS&lt;br /&gt;
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1      httpd_1:v1   *:81-&amp;gt;80/tcp&lt;br /&gt;
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1   *:82-&amp;gt;80/tcp&lt;br /&gt;
x6p14gmpjyra customcompose-stack_load-balancer  replicated 1/1      nginx:latest *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker stack rm customcompose-stack #remove stack&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a Storage Driver = &lt;br /&gt;
Check the Docker version matrix to verify which drivers are supported on your platform. Changing the storage driver is destructive: you lose all containers and volumes. Therefore you need to export/back them up, then re-import after the storage driver change.&lt;br /&gt;
&lt;br /&gt;
;CentOS&lt;br /&gt;
Device mapper is officially supported on CentOS. It can run on top of a file on disk, using a loopback device to provide block storage, or directly on a block storage device that Docker manages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info --format '{{json .Driver}}'&lt;br /&gt;
docker info -f '{{json .}}' | jq .Driver&lt;br /&gt;
docker info | grep Storage&lt;br /&gt;
&lt;br /&gt;
sudo touch /etc/docker/daemon.json&lt;br /&gt;
sudo vi    /etc/docker/daemon.json #additional options are available&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;storage-driver&amp;quot;:&amp;quot;devicemapper&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preserving any current images requires an export/backup and re-import after the storage driver change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker images&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
ls -l /var/lib/docker/devicemapper #new location for storing images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in &amp;lt;code&amp;gt;/var/lib/docker&amp;lt;/code&amp;gt; a new directory &amp;lt;code&amp;gt;devicemapper&amp;lt;/code&amp;gt; has been created; images are stored there from now on.&lt;br /&gt;
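&lt;br /&gt;
The export/re-import step itself is not shown above; a sketch using &amp;lt;code&amp;gt;docker save&amp;lt;/code&amp;gt; / &amp;lt;code&amp;gt;docker load&amp;lt;/code&amp;gt; (the image and archive names are examples only):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# before changing the storage driver: export selected images to a tar archive&lt;br /&gt;
docker save -o /tmp/my-images.tar httpd:latest alpine:latest&lt;br /&gt;
&lt;br /&gt;
# after restarting docker with the new driver: re-import them&lt;br /&gt;
docker load -i /tmp/my-images.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;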
&lt;br /&gt;
;Update 2019 - Docker Engine 18.09.1&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.&lt;br /&gt;
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.&lt;br /&gt;
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a logging driver =&lt;br /&gt;
The list of available [https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers logging drivers] can be found in the Docker documentation. The most popular are:&lt;br /&gt;
*none - No logs are available for the container and docker logs does not return any output.&lt;br /&gt;
*json-file - the default logging driver for Docker; the logs are formatted as JSON.&lt;br /&gt;
*syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.&lt;br /&gt;
*journald - Writes log messages to journald. The journald daemon must be running on the host machine.&lt;br /&gt;
*fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.&lt;br /&gt;
*awslogs - Writes log messages to Amazon CloudWatch Logs.&lt;br /&gt;
*splunk - Writes log messages to splunk using the HTTP Event Collector.&lt;br /&gt;
*etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events&lt;br /&gt;
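&lt;br /&gt;
The json-file driver does not rotate logs by default, so long-running containers can fill the disk. A sketch of &amp;lt;code&amp;gt;/etc/docker/daemon.json&amp;lt;/code&amp;gt; enabling rotation (the size and file-count values here are arbitrary choices):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;log-driver&amp;quot;: &amp;quot;json-file&amp;quot;,&lt;br /&gt;
  &amp;quot;log-opts&amp;quot;: {&lt;br /&gt;
    &amp;quot;max-size&amp;quot;: &amp;quot;10m&amp;quot;,&lt;br /&gt;
    &amp;quot;max-file&amp;quot;: &amp;quot;3&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;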
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep logging&lt;br /&gt;
docker container run -d --name &amp;lt;webjson&amp;gt; --log-driver json-file httpd #per-container setup&lt;br /&gt;
docker logs &amp;lt;webjson&amp;gt;&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name &amp;lt;web&amp;gt; httpd #start a new container&lt;br /&gt;
docker logs -f &amp;lt;web&amp;gt;                   #follow standard-out logs&lt;br /&gt;
docker service logs -f &amp;lt;web&amp;gt; #for swarm, logs from all container replicas&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable the syslog logging driver&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo vi /etc/rsyslog.conf&lt;br /&gt;
#uncomment below&lt;br /&gt;
$ModLoad imudp&lt;br /&gt;
$UDPServerRun 514&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start rsyslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Change the logging driver in &amp;lt;code&amp;gt;/etc/docker/daemon.json&amp;lt;/code&amp;gt;. Note that &amp;lt;code&amp;gt;docker logs&amp;lt;/code&amp;gt; (standard output) won't be available after the change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;log-driver&amp;quot;: &amp;quot;syslog&amp;quot;,&lt;br /&gt;
  &amp;quot;log-opts&amp;quot;: {&lt;br /&gt;
    &amp;quot;syslog-address&amp;quot;: &amp;quot;udp://172.31.10.1&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
docker info | grep logging&lt;br /&gt;
tail -f /var/log/messages #this will show all logging eg. access logs for httpd server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Docker daemon logs ==&lt;br /&gt;
System level logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# CentOS&lt;br /&gt;
grep -i docker /var/log/messages&lt;br /&gt;
&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo journalctl -u docker.service --no-hostname&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr 'select(has(&amp;quot;CONTAINER_ID&amp;quot;) | not) | .MESSAGE'&lt;br /&gt;
grep -i docker /var/log/syslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Docker container or service logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container logs [OPTIONS] containerID  #single container logs&lt;br /&gt;
docker service   logs [OPTIONS] service|task #aggregate logs across all container replicas deployed in the cluster &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Container life-cycle policies - eg. autostart =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run -d --name web --restart &amp;lt;no(default)|on-failure|unless-stopped|always&amp;gt; httpd&lt;br /&gt;
# --restart - restart the container on crash, non-zero exit, daemon restart or system reboot&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Definitions:&lt;br /&gt;
* always - always restart the container; if stopped manually it is restarted only when the docker daemon restarts&lt;br /&gt;
* unless-stopped - it will restart container always unless stopped manually by &amp;lt;code&amp;gt;docker container stop&amp;lt;/code&amp;gt;&lt;br /&gt;
* on-failure - restart if container exits with non-zero exit code&lt;br /&gt;
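&lt;br /&gt;
For services deployed to Swarm with &amp;lt;code&amp;gt;docker stack deploy&amp;lt;/code&amp;gt;, the equivalent is expressed in the compose file under &amp;lt;code&amp;gt;deploy:&amp;lt;/code&amp;gt;. A minimal sketch (service name, image and values are placeholders):&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
services:&lt;br /&gt;
  web:&lt;br /&gt;
    image: httpd&lt;br /&gt;
    deploy:&lt;br /&gt;
      restart_policy:&lt;br /&gt;
        condition: on-failure   # none|on-failure|any (default: any)&lt;br /&gt;
        delay: 5s&lt;br /&gt;
        max_attempts: 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;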
&lt;br /&gt;
= Universal Control Plane - UCP =&lt;br /&gt;
It's an application that allows you to see all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.&lt;br /&gt;
&lt;br /&gt;
;Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)&lt;br /&gt;
* over TCP/UDP - depends on a port, and whether a response is required, or if a message is a notification&lt;br /&gt;
* IPC - interprocess communication (intra-host), services on the same node&lt;br /&gt;
* API - over TCP, uses the API directly to query or update components in a cluster&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://docs.docker.com/ee/ucp/ucp-architecture/ UCP architecture]&lt;br /&gt;
&lt;br /&gt;
== Install/uninstall UCP &amp;lt;code&amp;gt;image: docker/ucp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
OS support: &lt;br /&gt;
* UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
For lab purposes we can use e.g. &amp;lt;code&amp;gt;ucp.example.com&amp;lt;/code&amp;gt;; the domain &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt; is included in the UCP and DTR wildcard self-signed certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on a manager node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_MGR_NODE_IP=172.31.101.248&lt;br /&gt;
&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \&lt;br /&gt;
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug&lt;br /&gt;
&lt;br /&gt;
# --rm  :- remove on exit, because this is only a transitional container&lt;br /&gt;
# -it   :- run the installation interactively&lt;br /&gt;
# -v    :- link the container with a file on a host&lt;br /&gt;
# --san :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)&lt;br /&gt;
# --host-address    :- IP address or network interface name to advertise to other nodes&lt;br /&gt;
# docker/ucp:2.2.11 :- image version&lt;br /&gt;
# --dns        :- custom DNS servers for the UCP containers&lt;br /&gt;
# --dns-search :- custom DNS search domains for the UCP containers&lt;br /&gt;
# --admin-username &amp;quot;$UCP_USERNAME&amp;quot; --admin-password &amp;quot;$UCP_PASSWORD&amp;quot; #these seem not to be supported, although they are in the guide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not provided you will be asked for: &lt;br /&gt;
* Admin password during the process&lt;br /&gt;
* You may enter additional aliases (SANs) now or press enter to proceed with the above list:&lt;br /&gt;
** Additional aliases: ucp ucp.example.com&lt;br /&gt;
 DEBU[0062] User entered: ucp ucp.ciscolinux.co.uk&lt;br /&gt;
 DEBU[0062] Hostnames: [host1c.mylabserver.com 127.0.0.1 172.17.0.1 172.31.101.248 ucp ucp.ciscolinux.co.uk] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You may want to add DNS entries in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; for&lt;br /&gt;
* ''ucp'' or ''ucp.example.com'' pointing to manager public ip&lt;br /&gt;
* ''dtr'' or ''dtr.example.com'' pointing to a worker node's public IP. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify&lt;br /&gt;
* connect to https://ucp.example.com:443. &lt;br /&gt;
* &amp;lt;code&amp;gt;docker ps&amp;lt;/code&amp;gt; should now show a number of containers running; they need to reach each other by name, which is why we added the &amp;lt;code&amp;gt;hosts&amp;lt;/code&amp;gt; entries.&lt;br /&gt;
&lt;br /&gt;
;Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock \&lt;br /&gt;
  docker/ucp uninstall-ucp --interactive&lt;br /&gt;
&lt;br /&gt;
INFO[0000] Your engine version 18.09.1, build 4c52b90 (4.15.0-1031-aws) is compatible with UCP 3.1.2 (b822777) &lt;br /&gt;
INFO[0000] We're about to uninstall from this swarm cluster. UCP ID: t0ltwwcw5tdbtjo2fxlzmj8p4 &lt;br /&gt;
Do you want to proceed with the uninstall? (y/n): y&lt;br /&gt;
INFO[0000] Uninstalling UCP on each node...             &lt;br /&gt;
INFO[0031] UCP has been removed from this cluster successfully. &lt;br /&gt;
INFO[0033] Removing UCP Services&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Install DTR (Docker Trusted Registry) &amp;lt;code&amp;gt;image: docker/dtr&amp;lt;/code&amp;gt; ==&lt;br /&gt;
On single-core systems it's recommended to wait 5 minutes after the UCP deployment to release CPU cycles; you can see the load peaking at around 1.0 using the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Connect to the UCP service https://ucp.example.com and log in with the credentials created earlier. Upload a license.lic file.&lt;br /&gt;
Go to Admin Settings &amp;gt; Docker Trusted Registry &amp;gt; Pick one of UCP Nodes [worker]&lt;br /&gt;
You may disable TLS verification for the self-signed certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run the given command on the node you want to install DTR on. &amp;lt;code&amp;gt;UCP_NODE&amp;lt;/code&amp;gt; in a lab environment can cause a few issues. For convenience, to avoid port conflicts on :80 and :443, use a different node than the one UCP is installed on, e.g. DNS ''user2c.mylabserver.com'' or its private IP. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_NODE=wkr-172.31.107.250 #for convenience, to avoid port conflicts on :80,:443 use a worker IP&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_URL=https://ucp.example.com:443 #avoid using example.com to avoid SSL name validation issues&lt;br /&gt;
docker pull docker/dtr&lt;br /&gt;
&lt;br /&gt;
# Optional. Download UCP public certificate&lt;br /&gt;
curl -k https://ucp.ciscolinux.co.uk/ca &amp;gt; ucp-ca.pem&lt;br /&gt;
&lt;br /&gt;
docker container run -it --rm docker/dtr install \&lt;br /&gt;
  --ucp-node $UCP_NODE --ucp-url $UCP_URL --debug \&lt;br /&gt;
  --ucp-username $UCP_USERNAME --ucp-password $UCP_PASSWORD \&lt;br /&gt;
  --ucp-insecure-tls  # --ucp-ca &amp;quot;$(cat ucp-ca.pem)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# --ucp-node :- hostname/IP of the UCP node (any node managed by UCP) to deploy DTR. Random by default&lt;br /&gt;
# --ucp-url  :- the UCP URL including domain and port.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not specified, it will ask for:&lt;br /&gt;
* ucp-password: you know it from the UCP installation step&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Significant installation logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
..&lt;br /&gt;
INFO[0006] Only one available UCP node detected. Picking UCP node 'user2c.labserver.com' &lt;br /&gt;
..&lt;br /&gt;
INFO[0006] verifying [80 443] ports on user2c.labserver.com &lt;br /&gt;
..&lt;br /&gt;
INFO[0000] Using default overlay subnet: 10.1.0.0/24    &lt;br /&gt;
INFO[0000] Creating network: dtr-ol                     &lt;br /&gt;
INFO[0000] Connecting to network: dtr-ol                &lt;br /&gt;
..&lt;br /&gt;
INFO[0008] Generated TLS certificate. dnsNames=&amp;quot;[*.com *.*.com example.com *.dtr *.*.dtr]&amp;quot; domains=&amp;quot;[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]&amp;quot; ipAddresses=&amp;quot;[172.17.0.1]&amp;quot;&lt;br /&gt;
..&lt;br /&gt;
INFO[0073] You can use flag '--existing-replica-id 10e168476b49' when joining other replicas to your Docker Trusted Registry Cluster &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by logging in to https://dtr.example.com&lt;br /&gt;
The DTR installation process above has also installed a number of containers named &amp;lt;code&amp;gt;ucp-agent&amp;lt;/code&amp;gt; on the manager/worker nodes, and a number of containers on the dedicated DTR node. &lt;br /&gt;
You can verify DTR by logging in to https://dtr.example.com with the UCP credentials &amp;lt;code&amp;gt;ucp-admin&amp;lt;/code&amp;gt; and the same password, if you haven't changed any commands above. You should then be presented with a registry.docker.io-like theme. Any images stored there will be trusted from the perspective of our organisation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by going to UCP https://ucp.example.com, admin settings &amp;gt; Docker Trusted Registry&lt;br /&gt;
[[File:Ucp-dtr-in-admin.png|none|400px|left|Ucp-dtr-in-admin]]&lt;br /&gt;
&lt;br /&gt;
== Backup UCP and DTR  configuration ==&lt;br /&gt;
This is built into UCP. The process starts a special container that exports the UCP configuration to a tar file. It can be run as a &amp;lt;code&amp;gt;cron&amp;lt;/code&amp;gt; job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup &amp;gt; backup.tar&lt;br /&gt;
# --rm it's a transitional container&lt;br /&gt;
# -i run interactively&lt;br /&gt;
&lt;br /&gt;
# On the first run it will error with --id m79xxxxxxxxx, asking you to re-run the command with this id.&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp restore --id m79xxx &amp;lt; backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
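&lt;br /&gt;
As noted, the backup can be scheduled from &amp;lt;code&amp;gt;cron&amp;lt;/code&amp;gt;; a sketch crontab entry (the 02:00 schedule and the /backup path are arbitrary choices):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# crontab -e, as a user allowed to use the docker socket&lt;br /&gt;
0 2 * * * docker container run --log-driver none --rm -i -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup --id m79xxx &amp;gt; /backup/ucp-backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;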
&lt;br /&gt;
;DTR&lt;br /&gt;
During a backup, DTR will not be available.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr backup  --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;gt; dtr-backup.tar&lt;br /&gt;
&lt;br /&gt;
# you will be asked for:&lt;br /&gt;
# Choose a replica to back up from: enter&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr restore --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;lt; dtr-backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== UCP RBAC ==&lt;br /&gt;
The main concept is:&lt;br /&gt;
* administrators can make changes to the UCP swarm/kubernetes, User Management, Organisations, Teams and Roles&lt;br /&gt;
* users - range of access from Full Control of resources to no access&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Ucp-rbac.png|500px|none|left|Ucp-rbac]]&lt;br /&gt;
&lt;br /&gt;
Note that only the Scheduler role allows access to Nodes, i.e. viewing nodes, plus of course scheduling workloads.&lt;br /&gt;
&lt;br /&gt;
= UCP Client bundle =&lt;br /&gt;
The UCP client bundle lets you export a bundle containing a certificate and environment settings that point the docker client at UCP, in order to use the cluster and create images and services.&lt;br /&gt;
&lt;br /&gt;
;Download bundle&lt;br /&gt;
# Create a user with the privileges that you wish the docker client to run as&lt;br /&gt;
# Download a client bundle from User Profile &amp;gt; Client bundle &amp;gt; + New Client Bundle&lt;br /&gt;
# A file &amp;lt;code&amp;gt;ucp-bundle-[username].zip&amp;lt;/code&amp;gt; will get downloaded &amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
unzip ucp-bundle-bob.zip &lt;br /&gt;
Archive:  ucp-bundle-bob.zip&lt;br /&gt;
 extracting: ca.pem                  &lt;br /&gt;
 extracting: cert.pem                &lt;br /&gt;
 extracting: key.pem                 &lt;br /&gt;
 extracting: cert.pub                &lt;br /&gt;
 extracting: env.sh                  &lt;br /&gt;
 extracting: env.ps1                 &lt;br /&gt;
 extracting: env.cmd     &lt;br /&gt;
&lt;br /&gt;
cat env.sh &lt;br /&gt;
export COMPOSE_TLS_VERSION=TLSv1_2&lt;br /&gt;
export DOCKER_TLS_VERIFY=1&lt;br /&gt;
export DOCKER_CERT_PATH=&amp;quot;$PWD&amp;quot;&lt;br /&gt;
export DOCKER_HOST=tcp://3.16.143.49:443&lt;br /&gt;
#&lt;br /&gt;
# Bundle for user bob&lt;br /&gt;
# UCP Instance ID t0ltwwcw5tdbtjo2fxlzmj8p4&lt;br /&gt;
#&lt;br /&gt;
# This admin cert will also work directly against Swarm and the individual&lt;br /&gt;
# engine proxies for troubleshooting.  After sourcing this env file, use&lt;br /&gt;
# &amp;quot;docker info&amp;quot; to discover the location of Swarm managers and engines.&lt;br /&gt;
# and use the --host option to override $DOCKER_HOST&lt;br /&gt;
#&lt;br /&gt;
# Run this command from within this directory to configure your shell:&lt;br /&gt;
# eval $(&amp;lt;env.sh)&lt;br /&gt;
&lt;br /&gt;
eval $(&amp;lt;env.sh) # apply ucp-bundle&lt;br /&gt;
&lt;br /&gt;
docker images # to list UCP managed images&lt;br /&gt;
&amp;lt;/source&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
# &amp;lt;li value=&amp;quot;4&amp;quot;&amp;gt; In my lab I had to update DOCKER_HOST from public IP to private IP &amp;lt;/li&amp;gt;&lt;br /&gt;
Err: error during connect: Get https://3.16.143.49:443/v1.39/images/json: x509: certificate is valid for 127.0.0.1, 172.31.101.248, 172.17.0.1, not 3.16.143.49&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_HOST=tcp://172.31.101.248:443&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;5&amp;quot;&amp;gt; Verify if you have permissions to create a service&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name test111 httpd&lt;br /&gt;
Error response from daemon: access denied:&lt;br /&gt;
no access to Service Create, on collection swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;6&amp;quot;&amp;gt; Add Grants to the user&amp;lt;/li&amp;gt;&lt;br /&gt;
## Go to User Management &amp;gt; Grants &amp;gt; Create Grant&lt;br /&gt;
## Based on Roles, select Full Control&lt;br /&gt;
## Select Subjects, All Users, select the user&lt;br /&gt;
## Click Create&lt;br /&gt;
# Re-run the service create command, which should succeed now. This service can now also be managed within the UCP console.&lt;br /&gt;
&lt;br /&gt;
= Docker Secure Registry | image: registry =&lt;br /&gt;
Docker provides a special image that can be used to host your own registry of docker images, internally or externally; the steps below therefore include securing access with an SSL certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create certificate&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir ~/{auth,certs}&lt;br /&gt;
# create a self-signed certificate for the Docker registry&lt;br /&gt;
cd ~/certs #certs directory created above&lt;br /&gt;
openssl req -newkey rsa:4096 -nodes -sha256 -keyout repo-key.pem -x509 -days 365 -out repo-cer.pem -subj /CN=myrepo.com&lt;br /&gt;
# trusted-certs docker client directory; the docker client looks for trusted certs here when connecting to a remote registry&lt;br /&gt;
sudo mkdir -p /etc/docker/certs.d/myrepo.com:5000 #port 5000 is the default port&lt;br /&gt;
sudo cp repo-cer.pem /etc/docker/certs.d/myrepo.com:5000/ca.crt &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ca.crt&amp;lt;/code&amp;gt; is the default/required file name for the trusted root CA certificate that the docker client (docker login API) uses when connecting to a remote registry. In our case we trust any cert signed by CA=ca.crt when connecting to myrepo.com:5000, as the same (self-signed) certs got installed in the &amp;lt;code&amp;gt;registry:2&amp;lt;/code&amp;gt; container via the &amp;lt;code&amp;gt;-v /certs/&amp;lt;/code&amp;gt; option.&lt;br /&gt;
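&lt;br /&gt;
To sanity-check such a certificate, openssl can print its subject and validity directly. A self-contained sketch (using /tmp and a smaller key for speed; paths and CN mirror the steps above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# generate a throwaway self-signed cert and inspect it&lt;br /&gt;
openssl req -newkey rsa:2048 -nodes -sha256 -keyout /tmp/repo-key.pem -x509 -days 365 -out /tmp/repo-cer.pem -subj /CN=myrepo.com&lt;br /&gt;
# subject should show CN = myrepo.com, plus the notBefore/notAfter dates&lt;br /&gt;
openssl x509 -in /tmp/repo-cer.pem -noout -subject -dates&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;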
&lt;br /&gt;
Optionally, for development purposes, add the domain ''myrepo.com'' to the hosts file, bound to a local interface IP address.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i; echo &amp;quot;172.16.10.10 myrepo.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts; exit&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optional: add an insecure-registry entry&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
sudo vi /etc/docker/daemon.json&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;myrepo.com:5000&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pull special Docker Registry image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir -p ~/auth #authentication directory, used when deploying local repository&lt;br /&gt;
docker pull registry:2&lt;br /&gt;
docker run --entrypoint htpasswd registry:2 -Bbn reg-admin Passw0rd123 &amp;gt; ~/auth/htpasswd&lt;br /&gt;
# -Bbn        -parameters&lt;br /&gt;
# reg-admin   -user&lt;br /&gt;
# Passw0rd123 -password string for basic htpasswd authentication method, the hashed password will be displayed to STDOUT&lt;br /&gt;
&lt;br /&gt;
$ cat ~/auth/htpasswd&lt;br /&gt;
reg-admin:$2y$05$DnTWDHp7uTwaDrw4sXpUbuDDIlLwu3c8MEMsHPjK/AcUMdK/TD6fO&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Registry container&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
docker run -d -p 5000:5000 --name myrepo \&lt;br /&gt;
       -v $(pwd)/certs:/certs \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/repo-cer.pem \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_KEY=/certs/repo-key.pem \&lt;br /&gt;
       -v $(pwd)/auth:/auth \&lt;br /&gt;
       -e REGISTRY_AUTH=htpasswd \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_REALM=&amp;quot;Registry Realm&amp;quot; \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \&lt;br /&gt;
       registry:2&lt;br /&gt;
# -v                               -indicate where our certificates will be mounted within a container&lt;br /&gt;
# -e REGISTRY_HTTP_TLS_CERTIFICATE -path to cert inside the container&lt;br /&gt;
# -v $(pwd)/auth:/auth             -mounting authentication directory where a file with password is&lt;br /&gt;
# -e REGISTRY_AUTH htpasswd        -setting up to use 'htpasswd' authentication method&lt;br /&gt;
# registry:2                       -image name, positional parameter  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Verify&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull  alpine&lt;br /&gt;
docker tag   alpine     myrepo.com:5000/aa-alpine #create a tagged image (copy) on a local filesystem, &lt;br /&gt;
     # it must be prefixed with the private repo name '/' image name you want to upload as&lt;br /&gt;
&lt;br /&gt;
docker logout  # if logged in to another repository&lt;br /&gt;
docker login myrepo.com:5000 #log in to the registry that runs as a container; stays logged in until logout/reboot&lt;br /&gt;
docker login myrepo.com:5000 --username=reg-admin --password Passw0rd123&lt;br /&gt;
docker push  myrepo.com:5000/aa-alpine        &lt;br /&gt;
&lt;br /&gt;
docker image rmi alpine myrepo.com:5000/aa-alpine #delete image stored locally&lt;br /&gt;
docker pull             myrepo.com:5000/aa-alpine #pull image from a container repository&lt;br /&gt;
&lt;br /&gt;
# List private-repository images&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;aa-alpine&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
wget --no-check-certificate --http-user=reg-admin --http-password=password https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
cat _catalog                                                                                                                                                                       &lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;my-alpine&amp;quot;,&amp;quot;myalpine&amp;quot;,&amp;quot;new-aa-busybox&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
# List tags&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/tags/list&lt;br /&gt;
{&amp;quot;name&amp;quot;:&amp;quot;myalpine&amp;quot;,&amp;quot;tags&amp;quot;:[&amp;quot;latest&amp;quot;]}&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/manifests/latest #entire image metadata&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: there is no easy way to delete images from the registry:2 container.&lt;br /&gt;
&lt;br /&gt;
= Docker push =&lt;br /&gt;
;Login to a docker repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep -B1 Registry #check if you are logged in to docker.hub repository&lt;br /&gt;
WARNING: No swap limit support&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&lt;br /&gt;
docker login&lt;br /&gt;
&lt;br /&gt;
docker info | grep -B1 Registry&lt;br /&gt;
Username: pio2pio&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Tag and push an image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# docker tag local-image:tagname new-repo:tagname  #create a local copy of an image&lt;br /&gt;
# docker push new-repo:tagname                     &lt;br /&gt;
&lt;br /&gt;
docker pull busybox&lt;br /&gt;
docker tag busybox:latest pio2pio/testrepo&lt;br /&gt;
docker push pio2pio/testrepo&lt;br /&gt;
The push refers to repository [docker.io/pio2pio/testrepo]&lt;br /&gt;
683f499823be: Mounted from library/busybox &lt;br /&gt;
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 &lt;br /&gt;
size: 527&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Docker Content Trust&lt;br /&gt;
All images are implicitly trusted by your Docker daemon, but you can require that ONLY signed images are allowed, i.e. configure your systems to trust only image tags that have been signed.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_CONTENT_TRUST=1 #enable the system to sign an image during the push process&lt;br /&gt;
docker build -t myrepo.com:5000/untrusted.latest .&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest&lt;br /&gt;
...&lt;br /&gt;
No tag specified, skipping trust metadata push&lt;br /&gt;
# 2nd attempt, with a tag specified now&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&lt;br /&gt;
docker pull myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Errors explained:&lt;br /&gt;
Err: No tag specified, skipping trust metadata push&amp;lt;br /&amp;gt;&lt;br /&gt;
* Explanation: an image is signed by tag; therefore if you skip the tag it won't get signed and the trust metadata push is skipped.&lt;br /&gt;
Err: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
* when uploading, the image gets uploaded, but it is not trusted because it is signed with a self-signed CA&lt;br /&gt;
* when downloading with &amp;lt;code&amp;gt;DOCKER_CONTENT_TRUST=1&amp;lt;/code&amp;gt; enabled, the image cannot be downloaded because it is untrusted&lt;br /&gt;
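&lt;br /&gt;
With &amp;lt;code&amp;gt;DOCKER_CONTENT_TRUST=1&amp;lt;/code&amp;gt; set globally, individual commands can still opt out; a sketch:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# one-off pull of an unsigned image despite content trust being enabled&lt;br /&gt;
docker pull --disable-content-trust myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;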
&lt;br /&gt;
= Theory =&lt;br /&gt;
== What is a docker ==&lt;br /&gt;
Docker is a container runtime platform, whereas Swarm is a container orchestration platform.&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Mutually Authenticated TLS ===&lt;br /&gt;
Docker Swarm is ''secure by default'': all communication is encrypted. ''Mutually Authenticated TLS'' (MTLS) is the implementation chosen to secure that communication. Any time a swarm is initialised, a self-signed CA is generated in a transient container and issues certificates to every node (manager or worker) to facilitate registration (joining as manager or worker) and, later, those secure communications. MTLS communication runs between managers and workers.&lt;br /&gt;
&lt;br /&gt;
== [[Linux Namespaces and Control Groups]] ==&lt;br /&gt;
&lt;br /&gt;
== Difference between docker attach and docker exec ==&lt;br /&gt;
;Attach&lt;br /&gt;
The docker attach command allows you to attach to a running container using the container's ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen-sharing style, or quickly view the progress of your detached process.&lt;br /&gt;
&lt;br /&gt;
The command docker attach is for attaching to the existing process. So when you exit, you exit the existing process.&lt;br /&gt;
&lt;br /&gt;
If we use docker attach, we can use only one instance of the shell. So if we want to open a new terminal with a new instance of the container's shell, we just need to run docker exec.&lt;br /&gt;
&lt;br /&gt;
If the docker container was started using /bin/bash command, you can access it using attach, if not then you need to execute the command to create a bash instance inside the container using exec. Attach isn't for running an extra thing in a container, it's for attaching to the running process.&lt;br /&gt;
&lt;br /&gt;
To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container if --sig-proxy is false; if --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container. You can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.&lt;br /&gt;
&lt;br /&gt;
;exec&lt;br /&gt;
&lt;br /&gt;
&amp;quot;docker exec&amp;quot; is specifically for running new things in an already started container, be it a shell or some other process. The docker exec command runs a new command in a running container.&lt;br /&gt;
&lt;br /&gt;
The command started using docker exec only runs while the container's primary process (PID 1) is running, and it is not restarted if the container is restarted.&lt;br /&gt;
&lt;br /&gt;
The exec command works only on an already running container. If the container is currently stopped, you need to start it first. You can then run any command in the running container just knowing its ID (or name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker exec &amp;lt;container_id_or_name&amp;gt; echo &amp;quot;Hello from container!&amp;quot;&lt;br /&gt;
docker run -it -d shykes/pybuilder /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most important here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will be run in background and the container will not stop immediately.&lt;br /&gt;
&lt;br /&gt;
= Dockerfile - python =&lt;br /&gt;
* [https://luis-sena.medium.com/creating-the-perfect-python-dockerfile-51bdec41f1c8 perfect python dockerfile] Medium&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://docs.docker.com/v1.8/installation/ubuntulinux/ Ubuntu installation] official website&lt;br /&gt;
*[https://docs.docker.com/engine/admin/systemd/ PROXY settings for systemd]&lt;br /&gt;
*[http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/ Docker RUN vs CMD vs ENTRYPOINT]&lt;br /&gt;
*[https://vsupalov.com/docker-arg-vs-env/ docker ARG vs ENV]&lt;br /&gt;
*[https://www.fromlatest.io/#/ Docker online linter]&lt;br /&gt;
*[https://hub.docker.com/r/portainer/portainer/ portainer] Monitor your containers via Web GUI&lt;br /&gt;
*[https://treescale.com/ treescale.com] Free private Docker registry&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7056</id>
		<title>Docker</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7056"/>
		<updated>2025-09-03T04:48:51Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Ubuntu 16.04, 18.04, 20.04 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Containers are taking over the world :)&lt;br /&gt;
&lt;br /&gt;
= [https://docs.docker.com/install/linux/docker-ce/ubuntu/ Installation] =&lt;br /&gt;
General procedure:&lt;br /&gt;
# Make sure you don't have docker already installed from your package manager&lt;br /&gt;
# The /var/lib/docker may be &lt;br /&gt;
&lt;br /&gt;
To install the latest version of Docker with curl:&lt;br /&gt;
&amp;lt;source  lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -sSL https://get.docker.com/ | sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CentOS ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo yum install bash-completion bash-completion-extras #optional, requires you log out&lt;br /&gt;
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 #utils&lt;br /&gt;
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo #docker-ee.repo for EE edition&lt;br /&gt;
                      # --enable docker-ce-{edge|test} #for beta releases&lt;br /&gt;
sudo yum update&lt;br /&gt;
sudo yum clean all #not sure why this command is here&lt;br /&gt;
sudo yum install docker-ce&lt;br /&gt;
#old: sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos&lt;br /&gt;
sudo systemctl enable docker &amp;amp;&amp;amp; sudo systemctl start docker &amp;amp;&amp;amp; sudo systemctl status docker&lt;br /&gt;
yum-config-manager --disable jenkins #disable source to prevent accidental update ?jenkins?&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ubuntu 24.04 ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Add Docker's official GPG key:&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install ca-certificates curl&lt;br /&gt;
sudo install -m 0755 -d /etc/apt/keyrings&lt;br /&gt;
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc&lt;br /&gt;
sudo chmod a+r /etc/apt/keyrings/docker.asc&lt;br /&gt;
&lt;br /&gt;
# Add the repository to Apt sources:&lt;br /&gt;
echo \&lt;br /&gt;
  &amp;quot;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \&lt;br /&gt;
  $(. /etc/os-release &amp;amp;&amp;amp; echo &amp;quot;${UBUNTU_CODENAME:-$VERSION_CODENAME}&amp;quot;) stable&amp;quot; | \&lt;br /&gt;
  sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
&lt;br /&gt;
# Install the latest version&lt;br /&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin&lt;br /&gt;
&lt;br /&gt;
# Install a specific version&lt;br /&gt;
## List the available versions:&lt;br /&gt;
apt-cache madison docker-ce | awk '{ print $3 }'&lt;br /&gt;
5:28.3.3-1~ubuntu.24.04~noble&lt;br /&gt;
5:28.3.2-1~ubuntu.24.04~noble&lt;br /&gt;
&lt;br /&gt;
VERSION_STRING=5:28.3.3-1~ubuntu.24.04~noble&lt;br /&gt;
sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin&lt;br /&gt;
&lt;br /&gt;
# Manage Docker as a non-root user&lt;br /&gt;
sudo groupadd docker&lt;br /&gt;
sudo usermod -aG docker $USER&lt;br /&gt;
newgrp docker # activate the group without logging off&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ubuntu 16.04, 18.04, 20.04 ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Optional, clear out config files&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service&lt;br /&gt;
sudo rm /etc/default/docker #environment file&lt;br /&gt;
&lt;br /&gt;
# New docker package is called now 'docker-ce'&lt;br /&gt;
sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce  # start fresh&lt;br /&gt;
sudo apt-get -yq install apt-transport-https ca-certificates curl gnupg-agent software-properties-common # apt over HTTPs&lt;br /&gt;
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - # Docker official GPG key&lt;br /&gt;
sudo apt-key fingerprint 0EBFCD88 #verify&lt;br /&gt;
&lt;br /&gt;
#add the repository&lt;br /&gt;
sudo add-apt-repository &amp;quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&amp;quot; # or {edge|test}&lt;br /&gt;
sudo apt-get update # optional&lt;br /&gt;
&lt;br /&gt;
# Option 1 - install latest&lt;br /&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&lt;br /&gt;
# Option 2 - install fixed version&lt;br /&gt;
sudo apt-cache madison docker-ce # display available versions&lt;br /&gt;
sudo apt-get   install docker-ce=&amp;lt;VERSION_STRING&amp;gt;          docker-ce-cli=&amp;lt;VERSION_STRING&amp;gt;          containerd.io&lt;br /&gt;
sudo apt-get   install docker-ce=18.09.0~3-0~ubuntu-bionic docker-ce-cli=18.09.0~3-0~ubuntu-bionic containerd.io&lt;br /&gt;
sudo apt-mark  hold    docker-ce docker-ce-cli containerd.io&lt;br /&gt;
sudo apt-mark  showhold # show packages that version upgrade has been put on hold&lt;br /&gt;
&lt;br /&gt;
# Unhold&lt;br /&gt;
sudo apt-mark unhold   docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://docs.docker.com/engine/release-notes/ Newer versions] (&amp;gt;18.09.0) of Docker come with 3 packages:&lt;br /&gt;
* &amp;lt;code&amp;gt;containerd.io&amp;lt;/code&amp;gt; - daemon that manages the container lifecycle on top of the OS; it essentially decouples Docker from the OS and also provides container services for non-Docker container managers&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce&amp;lt;/code&amp;gt; - Docker daemon, this is the part that does all the management work, requires the other two on Linux&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce-cli&amp;lt;/code&amp;gt; - CLI tools to control the daemon, you can install them on their own if you want to control a remote Docker daemon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of how to run [[Jenkins CI|Jenkins docker image]]&lt;br /&gt;
&lt;br /&gt;
== Add a user to docker group ==&lt;br /&gt;
Add your user to &amp;lt;tt&amp;gt;docker group&amp;lt;/tt&amp;gt; to be able to run docker commands without need of ''sudo'' as the &amp;lt;code&amp;gt;docker.socket&amp;lt;/code&amp;gt; is owned by group &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo usermod -aG docker $(whoami)&lt;br /&gt;
&lt;br /&gt;
# log in to the new docker group (to avoid having to log out / log in again)&lt;br /&gt;
newgrp docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reason&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[root@piotr]$ ls -al /var/run/docker.sock&lt;br /&gt;
srw-rw----. 1 root docker 7 Jan 09:00 docker.sock&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= HTTP proxy =&lt;br /&gt;
Configure ''docker'' if you run behind a proxy server. In this example CNTLM proxy runs on the host machine listening on localhost:3128. This example overrides the default docker.service file by adding configuration to the Docker systemd service file.&lt;br /&gt;
&lt;br /&gt;
First, create a systemd drop-in directory for the docker service:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo mkdir /etc/systemd/system/docker.service.d&lt;br /&gt;
sudo vi    /etc/systemd/system/docker.service.d/http-proxy.conf&lt;br /&gt;
[Service]&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://proxy.example.com:80/&amp;quot;&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://172.31.1.1:3128/&amp;quot; #overrides previous entry&lt;br /&gt;
Environment=&amp;quot;HTTPS_PROXY=http://172.31.1.1:3128/&amp;quot;&lt;br /&gt;
# If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable&lt;br /&gt;
Environment=&amp;quot;NO_PROXY=localhost,127.0.0.1,10.6.96.172,proxy.example.com:80&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Flush changes:&lt;br /&gt;
 $ sudo systemctl daemon-reload&lt;br /&gt;
Verify that the configuration has been loaded:&lt;br /&gt;
 $ systemctl show --property=Environment docker&lt;br /&gt;
 Environment=HTTP_PROXY=&amp;lt;nowiki&amp;gt;http://proxy.example.com:80/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Restart Docker:&lt;br /&gt;
 $ sudo systemctl restart docker&lt;br /&gt;
&lt;br /&gt;
= Docker create and run, basic options = &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# It will create a container but won't start it up&lt;br /&gt;
docker container create -it --name=&amp;quot;my-container&amp;quot; ubuntu:latest /bin/bash&lt;br /&gt;
docker container start my-container&lt;br /&gt;
&lt;br /&gt;
docker run -it --name=&amp;quot;mycentos&amp;quot; centos:latest /bin/bash&lt;br /&gt;
# -i   :- interactive mode (attach to STDIN)          \command to execute when instantiating container &lt;br /&gt;
# -t   :- attach to the current terminal (pseudo-TTY)&lt;br /&gt;
# -d   :- disconnect mode, daemon mode, detached mode, running the task in the background&lt;br /&gt;
# -p   :- publish to host exposed container port [ host_port(8080):container_exposedPort(80) ]&lt;br /&gt;
# --rm :- remove container after command has been executed&lt;br /&gt;
# --name=&amp;quot;name_your_container&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# -e|--env MYVAR=123 exports/passing variable to the container, echo $MYVAR will have a value 123&lt;br /&gt;
# --privileged :- option will allow Docker to perform actions normally restricted, &lt;br /&gt;
#                 like binding a device path to an internal container path. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Docker inspect =&lt;br /&gt;
== inspect image ==&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
docker image inspect centos:6&lt;br /&gt;
docker image inspect centos:6 --format '{{.ContainerConfig.Hostname}}' #just a single value&lt;br /&gt;
docker image inspect centos:6 --format '{{json .ContainerConfig}}'     #json key/value output&lt;br /&gt;
docker image inspect centos:6 --format '{{.RepoTags}}'                 #shows all associated tags with the image&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;code&amp;gt;--format&amp;lt;/code&amp;gt; is similar to filtering the JSON output with &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt;.&lt;br /&gt;
== inspect container ==&lt;br /&gt;
Shows current configuration state of a docker container or an image.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker inspect &amp;lt;container_name&amp;gt; | grep IPAddress&lt;br /&gt;
           &amp;quot;SecondaryIPAddresses&amp;quot;: null,&lt;br /&gt;
           &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
                   &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Attach/exec to a docker process =&lt;br /&gt;
If you are running e.g. &amp;lt;tt&amp;gt;/bin/bash&amp;lt;/tt&amp;gt; as the command, you can attach to this running docker process. Note that when you exit, the container will stop.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker attach mycentos&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To avoid stopping a container on exit of &amp;lt;code&amp;gt;attach&amp;lt;/code&amp;gt; command we can use &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker exec -it mycentos /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attaching directly to a running container and then exiting the shell will cause the container to stop. Executing another shell in a running container and then exiting that shell will not stop the underlying container process started on instantiation.&lt;br /&gt;
&lt;br /&gt;
= Entrypoint, CMD, PID1 and [https://github.com/krallin/tini tini] =&lt;br /&gt;
== Entrypoint and receiving signals ==&lt;br /&gt;
Receiving and handling signals within containers is just as important as in any other application. Remember that a container is just a group of processes running on your host, so you need to take care of the signals sent to your applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The principal container management commands, e.g. &amp;lt;code&amp;gt;docker stop&amp;lt;/code&amp;gt;, send a configurable (in the Dockerfile) signal to the entrypoint of your application, where &amp;lt;code&amp;gt;SIGTERM - 15 - Termination (ANSI)&amp;lt;/code&amp;gt; is the default.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT syntax&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# exec form, requires a JSON array; IT SHOULD ALWAYS BE USED&lt;br /&gt;
ENTRYPOINT [&amp;quot;/app/bin/your-app&amp;quot;, &amp;quot;arg1&amp;quot;, &amp;quot;arg2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# shell form, it always runs as a subcommand of '/bin/sh -c', thus your application will never see any signal sent to it&lt;br /&gt;
ENTRYPOINT &amp;quot;/app/bin/your-app arg1 arg2&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT is a shell script&lt;br /&gt;
If the application is started by a shell script in the regular way, your shell spawns your application in a new process and it won’t receive signals from Docker. Therefore we need to tell the shell to replace itself with your application using the &amp;lt;code&amp;gt;[https://stackoverflow.com/questions/18351198/what-are-the-uses-of-the-exec-command-in-shell-scripts exec]&amp;lt;/code&amp;gt; command; see also the &amp;lt;code&amp;gt;[https://en.wikipedia.org/wiki/Exec_(system_call) exec syscall]&amp;lt;/code&amp;gt;. Use:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/app/bin/my-app      # incorrect, signal won't be received by 'my-app'&lt;br /&gt;
exec /app/bin/my-app # correct way&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
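As a plain-shell illustration of why &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; matters (a minimal sketch, no Docker required): without exec the inner program runs as a child with a new PID, while with exec it replaces the shell, keeping the same PID, and nothing after the exec line ever runs.&lt;br /&gt;

```shell
# Without exec: the inner shell is a separate child process, so the two PIDs differ
bash -c 'echo "outer: $$"; bash -c "echo inner: \$\$"'

# With exec: the inner shell replaces the outer one, so both lines print the same PID
bash -c 'echo "outer: $$"; exec bash -c "echo inner: \$\$"'

# Anything after exec is never reached
bash -c 'echo before; exec true; echo after'   # prints only "before"
```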
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT exec with piped commands starts a subshell&lt;br /&gt;
Even with &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt;, piping forces the command to run in a subshell, with the usual consequence: no signals reach the app.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec /app/bin/your-app | tai64n # here you want to add timestamps by piping through tai64n,&lt;br /&gt;
                                # causing running your command in a subshell&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
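You can observe the subshell with plain bash (a minimal sketch, no Docker required): &amp;lt;code&amp;gt;$BASHPID&amp;lt;/code&amp;gt; reports the PID of the current (sub)shell, and inside a pipeline it differs from the main shell's PID.&lt;br /&gt;

```shell
# Each element of a pipeline runs in its own subshell, so the PID printed
# inside the pipeline differs from the main shell's PID
bash -c 'echo "main:     $BASHPID"; echo "pipeline: $BASHPID" | cat'
```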
&lt;br /&gt;
&lt;br /&gt;
;Let another program be PID 1 and handle signalling&lt;br /&gt;
* tini&lt;br /&gt;
* dumb-init&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;-v&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/app/bin/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
tini and dumb-init are also able to proxy signals to process groups, which technically allows you to pipe your output. However, your pipe target receives that signal at the same time, so you can’t log anything on cleanup lest you crave race conditions and SIGPIPEs. So it's better to avoid logging at termination at all.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Change signal that will terminate your container process&lt;br /&gt;
Listen for SIGTERM or set STOPSIGNAL in your Dockerfile.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi Dockerfile&lt;br /&gt;
STOPSIGNAL SIGINT # this will trigger container termination process if someone press Ctrl^C&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
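The effect of a stop-signal handler can be exercised with a plain bash process (a minimal sketch; the script path /tmp/pid1-demo.sh is made up for the demo): a handler installed with &amp;lt;code&amp;gt;trap&amp;lt;/code&amp;gt; turns an incoming SIGTERM into a clean shutdown, which is what your container's PID 1 should do when &amp;lt;code&amp;gt;docker stop&amp;lt;/code&amp;gt; delivers the stop signal.&lt;br /&gt;

```shell
# Minimal stand-in for a container's PID 1 that shuts down cleanly on SIGTERM
cat > /tmp/pid1-demo.sh <<'EOF'
#!/bin/bash
trap 'echo "caught SIGTERM, cleaning up"; kill "$child" 2>/dev/null; exit 0' TERM
echo "ready"
sleep 60 & child=$!   # the "work"; 'wait' is interruptible, so the trap fires promptly
wait "$child"
EOF
chmod +x /tmp/pid1-demo.sh

/tmp/pid1-demo.sh &
pid=$!
sleep 1               # give the script time to install the trap
kill -TERM "$pid"     # what 'docker stop' sends by default
wait "$pid"           # returns 0 thanks to the handler's 'exit 0'
```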
&lt;br /&gt;
;References:&lt;br /&gt;
* [https://hynek.me/articles/docker-signals/ Why Your Dockerized Application Isn’t Receiving Signals]&lt;br /&gt;
* [http://smarden.org/runit/ runit] alternative to tini&lt;br /&gt;
&lt;br /&gt;
== Tini ==&lt;br /&gt;
It's a tiny but valid init for containers:&lt;br /&gt;
* protects you from software that accidentally creates zombie processes&lt;br /&gt;
* ensures that the default signal handlers work for the software you run in your Docker image&lt;br /&gt;
* does so completely transparently! Docker images that work without Tini will work with Tini without any changes&lt;br /&gt;
* Docker 1.13+ has Tini included, to enable Tini, just pass the &amp;lt;code&amp;gt;--init&amp;lt;/code&amp;gt; flag to docker run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Understanding Tini&lt;br /&gt;
After spawning your process, Tini will wait for signals and forward those to the child process, and periodically reap zombie processes that may be created within your container. When the &amp;quot;first&amp;quot; child process exits (/your/program in the examples above), Tini exits as well, with the exit code of the child process (so you can check your container's exit code to know whether the child exited successfully).&lt;br /&gt;
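A bash sketch of that last behaviour (the helper name run_as_init is made up; no Docker or Tini required): the &amp;quot;init&amp;quot; runs its child and then exits with the child's exit code, so the container's exit code reflects the child's.&lt;br /&gt;

```shell
# Hypothetical stand-in for an init process: run the child command,
# then exit with whatever exit code the child returned
run_as_init() {
    "$@"
    exit $?
}

# Run in a subshell so 'exit' doesn't terminate the current shell;
# the child's exit code (7) propagates through the "init"
( run_as_init bash -c 'exit 7' ) || echo "child exit code: $?"
```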
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini as a general dynamically linked binary (in the 10KB range)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENV TINI_VERSION v0.18.0&lt;br /&gt;
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini&lt;br /&gt;
RUN chmod +x /tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# Run your program under Tini&lt;br /&gt;
CMD [&amp;quot;/your/program&amp;quot;, &amp;quot;-and&amp;quot;, &amp;quot;-its&amp;quot;, &amp;quot;arguments&amp;quot;]&lt;br /&gt;
# or docker run your-image /your/program ...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini to Alpine based image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
RUN apk add --no-cache tini&lt;br /&gt;
# Tini is now available at /sbin/tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/sbin/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Existing entrypoint&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References:&lt;br /&gt;
*[https://github.com/krallin/tini/issues/8 What is advantage of Tini?]&lt;br /&gt;
*[https://ahmet.im/blog/minimal-init-process-for-containers/ Choosing an init process for multi-process containers]&lt;br /&gt;
&lt;br /&gt;
= Mount directory in container =&lt;br /&gt;
We can mount a host directory into a docker container so its content is available from inside the container&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker run -it -v /mnt/sdb1:/opt/java pio2pio/java8&lt;br /&gt;
# syntax: -v /path/on/host:/path/in/container&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Build image = &lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
Each ''RUN'' line creates a new layer (via an intermediate container), so where possible we should join lines so the image ends up with fewer layers.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
$ wget jkd1.8.0_111.tar.gz&lt;br /&gt;
$ cat Dockerfile &amp;lt;&amp;lt;- EOF #'&amp;lt;&amp;lt;-' heredoc with '-' minus ignores &amp;lt;tab&amp;gt; indent&lt;br /&gt;
ARG TAGVERSION=6                    #the only command allowed before FROM&lt;br /&gt;
FROM ubuntu:${TAGVERSION}&lt;br /&gt;
FROM ubuntu:latest                  #defines base image eg. ubuntu:16.04&lt;br /&gt;
LABEL maintainer=&amp;quot;myname@gmail.com&amp;quot; #key/value pair added to a metadata of the image&lt;br /&gt;
&lt;br /&gt;
ARG ARG1=value1&lt;br /&gt;
&lt;br /&gt;
ENV ENVIRONMENT=&amp;quot;prod&amp;quot;&lt;br /&gt;
ENV SHARE /usr/local/share  #define env variables with syntax ENV space EnvironmentVariable space Value&lt;br /&gt;
ENV JAVA_HOME $SHARE/java&lt;br /&gt;
&lt;br /&gt;
# COPY jkd1.8.0_111.tar.gz /tmp #works only with files, copy a file to container filesystem, here to /tmp&lt;br /&gt;
# ADD http://example.com/file.txt&lt;br /&gt;
ADD jkd1.8.0_111.tar.gz /  #add files into the image root folder, can add also URLs&lt;br /&gt;
&lt;br /&gt;
# SHELL [&amp;quot;executable&amp;quot;,&amp;quot;params&amp;quot;] #overrides /bin/sh -c for RUN,CMD, etc..&lt;br /&gt;
&lt;br /&gt;
# Executes commands during build process in a new layer E.g., it is often used for installing software packages&lt;br /&gt;
RUN mv /jkd1.8.0_111.tar.gz $JAVA_HOME &lt;br /&gt;
RUN apt-get update&lt;br /&gt;
RUN [&amp;quot;apt-get&amp;quot;, &amp;quot;update&amp;quot;, &amp;quot;-y&amp;quot;] #in json array format, allows to run a commands but does not require shell executable&lt;br /&gt;
&lt;br /&gt;
VOLUME /mymount_point #this command does not mount anything from a host, just creates a mountpoint&lt;br /&gt;
&lt;br /&gt;
EXPOSE 80 #it doesn't automatically map the port to the host&lt;br /&gt;
&lt;br /&gt;
#containers usually don't have system management e.g. systemctl/service/init.d as they are designed to run a single process&lt;br /&gt;
#the entrypoint becomes the main command that starts the main process&lt;br /&gt;
ENTRYPOINT apachectl &amp;quot;-DFOREGROUND&amp;quot; #think about it as the MAIN_PURPOSE_OF_CONTAINER command. &lt;br /&gt;
# It always runs by default and cannot be overridden (other than with 'docker run --entrypoint')&lt;br /&gt;
&lt;br /&gt;
#Single command that will run after the image has been created. Only one per Dockerfile, can be overridden.&lt;br /&gt;
CMD [&amp;quot;/bin/bash&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# STOPSIGNAL&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
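The &amp;lt;code&amp;gt;&amp;lt;&amp;lt;-&amp;lt;/code&amp;gt; form used above strips leading tab characters from the heredoc body and the terminating delimiter, so heredocs can be indented inside scripts. A quick self-contained demonstration (the tab is written via printf so it survives copy-paste):&lt;br /&gt;

```shell
# Build a tiny script whose heredoc body is indented with a real tab character
printf 'cat <<- EOF\n\tindented with a tab\nEOF\n' > /tmp/heredoc-demo.sh
bash /tmp/heredoc-demo.sh   # prints "indented with a tab" with the leading tab stripped
```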
&lt;br /&gt;
== Build ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
docker build --tag myrepo/java8 .  #-f point to custom Dockerfile name eg. -f Dockerfile2&lt;br /&gt;
# myrepo dockerhub username, java8 -image name, &lt;br /&gt;
# .      directory where is the Dockerfile&lt;br /&gt;
&lt;br /&gt;
docker build -t myrepo/java8 . --pull --no-cache --squash&lt;br /&gt;
# --pull     force pulling a newer version of the base image even if a local copy exists&lt;br /&gt;
# --no-cache don't use cache to build, forcing to rebuild all interim containers&lt;br /&gt;
# --squash   after the build squash all layers into a single layer. &lt;br /&gt;
&lt;br /&gt;
docker images             #list images&lt;br /&gt;
docker push myrepo/java8 #upload the image to DockerHub repository&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;squash&amp;lt;/code&amp;gt; is enabled only on a docker daemon with experimental features enabled.&lt;br /&gt;
&lt;br /&gt;
= Manage containers and images =&lt;br /&gt;
== Run a container ==&lt;br /&gt;
When you ''run'' a container you create a new container from an image that has already been built / is available, and then put it in a running state&lt;br /&gt;
* -d detached mode, the container keeps running in the background after &amp;lt;code&amp;gt;run&amp;lt;/code&amp;gt; returns&lt;br /&gt;
* -i interactive mode, allows you to log in to the container&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# docker container run [OPTIONS]           IMAGE    [COMMAND] [ARG...] # usage&lt;br /&gt;
  docker container run -it --name mycentos centos:6 /bin/bash&lt;br /&gt;
  docker           run -it pio2pio/java8 # the 'container' subcommand keyword is optional&lt;br /&gt;
# -i       :- run in interactive mode, then run command /bin/bash&lt;br /&gt;
# --rm     :- will delete container after run&lt;br /&gt;
# --publish | -p 80:8080 :- publish exposed container port 80-&amp;gt; to 8080 on the docker-host&lt;br /&gt;
# --publish-all | -P     :- publish all exposed container ports to random port &amp;gt;32768&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ctop #top for containers&lt;br /&gt;
docker ps -a #list containers&lt;br /&gt;
docker image ls #list images&lt;br /&gt;
docker images #short form of the command above&lt;br /&gt;
docker images --no-trunc&lt;br /&gt;
docker images -q #--quiet&lt;br /&gt;
docker images --filter &amp;quot;before=centos:6&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Search images in remote repository ==&lt;br /&gt;
Search Docker Hub for images. You may need to run `docker login` first&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=ubuntu&lt;br /&gt;
docker search $IMAGE&lt;br /&gt;
NAME                            DESCRIPTION                                     STARS OFFICIAL   AUTOMATED&lt;br /&gt;
ubuntu                          Ubuntu is a Debian-based Linux operating sys…   8206  [OK]       &lt;br /&gt;
dorowu/ubuntu-desktop-lxde-vnc  Ubuntu with openssh-server and NoVNC            210              [OK]&lt;br /&gt;
rastasheep/ubuntu-sshd          Dockerized SSH service, built on top of offi…   167              [OK]&lt;br /&gt;
&lt;br /&gt;
IMAGE=apache&lt;br /&gt;
docker search $IMAGE --filter stars=50 # search images that have 50 or more stars&lt;br /&gt;
docker search $IMAGE --limit 10        # display top 10 images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List all available tags&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=nginx&lt;br /&gt;
wget -q https://registry.hub.docker.com/v1/repositories/${IMAGE}/tags -O - | sed -e 's/[][]//g' -e 's/&amp;quot;//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}' | sort -V&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
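The sed/tr/awk chain can be sanity-checked offline against a sample payload in the old v1 shape (the JSON below is made up for illustration; the live endpoint's format may differ):&lt;br /&gt;

```shell
# Made-up sample of the v1-style tags document: [{"layer":"...","name":"<tag>"}, ...]
json='[{"layer":"","name":"1.19"},{"layer":"","name":"1.21"},{"layer":"","name":"latest"}]'

# Same idea as the pipeline above: strip brackets/quotes/spaces, split records on '}',
# take the third ':'-separated field (the tag), and version-sort; 'NF' skips blank lines
echo "$json" | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' \
  | tr '}' '\n' | awk -F: 'NF {print $3}' | sort -V
```

This prints the three tags in version order.&lt;br /&gt;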
&lt;br /&gt;
== Pull images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
                 &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
docker pull hello-world:latest # pull latest&lt;br /&gt;
docker pull --all-tags hello-world  # pull all tags&lt;br /&gt;
docker pull --disable-content-trust hello-world # disable verification &lt;br /&gt;
&lt;br /&gt;
docker images --digests #displays sha256: digest of an image&lt;br /&gt;
&lt;br /&gt;
# Dangling images - transitional images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/registries.html#registry_auth from Amazon ECR] ===&lt;br /&gt;
;Docker login to ECR service using IAM&lt;br /&gt;
&amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; does not support native IAM authentication methods, so use the command below, which retrieves, decodes, and converts the &amp;lt;code&amp;gt;authorization IAM token&amp;lt;/code&amp;gt; into a pre-generated &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; command. The produced login credentials assume your current IAM User/Role permissions. If your current IAM user can only pull from ECR, then after logging in with &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; you still won't be able to push an image to the registry. An example error you may get is &amp;lt;code&amp;gt;not authorized to perform: ecr:InitiateLayerUpload&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log in to the ECR service; your IAM user needs the relevant pull/push permissions&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eval $(aws ecr get-login --region eu-west-1 --no-include-email)&lt;br /&gt;
     # aws ecr get-login # generates below docker command with the login token&lt;br /&gt;
     # docker login -u AWS -p **token** https://$ACCOUNT.dkr.ecr.us-east-1.amazonaws.com # &amp;lt;- output&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker login to ECR singular repository, min awscli v1.17&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ACCOUNT=111111111111&lt;br /&gt;
REPOSITORY=myrepo&lt;br /&gt;
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/$REPOSITORY&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html push to Amazon ECR] ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List images&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 5 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Tag an image 'b09807c20c96'&lt;br /&gt;
docker tag b09807c20c96 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
&lt;br /&gt;
# List images, to verify your newly tagged one&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   2.0.1 b09807c20c96 6 minutes ago  570MB # &amp;lt;- new tagged image&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 6 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Push an image to ECR&lt;br /&gt;
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
The push refers to repository [111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws]&lt;br /&gt;
2c405c66e675: Pushed &lt;br /&gt;
...&lt;br /&gt;
77cae8ab23bf: Layer already exists &lt;br /&gt;
2.0.1: digest: sha256:111111111193969807708e1f6aea2b19a08054f418b07cf64016a6d1111111111 size: 1796&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Save and import image ==&lt;br /&gt;
To move an image to another filesystem we can save it into a &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; archive&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Export&lt;br /&gt;
docker image save myrepo/centos:v2 &amp;gt; mycentos.v2.tar&lt;br /&gt;
tar -tvf mycentos.v2.tar&lt;br /&gt;
&lt;br /&gt;
# Import a filesystem tarball as an image (counterpart of 'docker container export'; use 'docker load' for 'save' archives)&lt;br /&gt;
docker image import mycentos.v2.tar &amp;lt;new_image_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Load from a stream&lt;br /&gt;
docker load &amp;lt; mycentos.v2.tar #or --input mycentos.v2.tar to avoid redirections&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Export aka commit container into image ==&lt;br /&gt;
Let's say we want to modify the stock image centos:6 by installing Apache interactively, setting it to autostart, then exporting it as a new image. Let's do it!&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker pull centos:6&lt;br /&gt;
docker container run -it --name apache-centos6 centos:6&lt;br /&gt;
# Interactively do: yum -y update; yum install -y httpd; chkconfig httpd on; exit&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option1&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; b237d65fd197 newcentos:withapache #creates new image from a container's changes&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; &amp;lt;container_name&amp;gt; &amp;lt;repo&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
# -a :- author&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option2&lt;br /&gt;
docker container export apache-centos6 &amp;gt; apache-centos6.tar&lt;br /&gt;
docker image     import apache-centos6.tar newcentos:withapache&lt;br /&gt;
&lt;br /&gt;
docker images&lt;br /&gt;
REPOSITORY    TAG          IMAGE ID            CREATED             SIZE&lt;br /&gt;
newcentos     withapache   ea5215fb46ed        50 seconds ago      272MB&lt;br /&gt;
&lt;br /&gt;
docker image history newcentos:withapache&lt;br /&gt;
IMAGE        CREATED        CREATED BY   SIZE   COMMENT&lt;br /&gt;
ea5215fb46ed 2 minutes ago               272MB  Imported from -&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the two ways of creating an image from a container:&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container commit&amp;lt;/code&amp;gt; - preserves the image's layer history and metadata (e.g. ENV, CMD)&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container export&amp;lt;/code&amp;gt; - flattens the container's filesystem into a single layer and discards metadata, which is why the imported image tends to be smaller&lt;br /&gt;
&lt;br /&gt;
== Tag images ==&lt;br /&gt;
Tags are usually used to give an official image a new name that we are planning to modify. This allows us to create a new image, run a new container from the tag, and delete the original image without affecting the new image or containers started from the newly tagged image.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image tag #long version&lt;br /&gt;
docker tag centos:6 myucentos:v1 #this will create a duplicate of centos:6 named myucentos:v1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tagging allows you to modify the repository name and manages references to images located on the filesystem.&lt;br /&gt;
&lt;br /&gt;
== History of an image ==&lt;br /&gt;
We can display the history of layers that created an image, showing the interim images in build order. Only layers created on the local filesystem are shown.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image history myrepo/centos:v2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Stop and delete all containers ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker stop $(docker ps -aq) &amp;amp;&amp;amp; docker rm $(docker ps -aq)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Delete image ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE&lt;br /&gt;
company-repo        0.1.0               f796d7f843cc        About an hour ago   888MB&lt;br /&gt;
&amp;lt;none&amp;gt;              &amp;lt;none&amp;gt;              04fbac2fdf48        3 hours ago         565MB&lt;br /&gt;
ubuntu              16.04               7aa3602ab41e        3 weeks ago         115MB&lt;br /&gt;
&lt;br /&gt;
# Delete image&lt;br /&gt;
$ docker rmi company-repo:0.1.0&lt;br /&gt;
Untagged: company-repo:0.1.0&lt;br /&gt;
Deleted: sha256:e5cca6a080a5c65eacff98e1b17eeb7be02651849b431b46b074899c088bd42a&lt;br /&gt;
..&lt;br /&gt;
Deleted: sha256:bc7cda232a2319578324aae620c4537938743e46081955c4dd0743a89e9e8183&lt;br /&gt;
&lt;br /&gt;
# Prune image - delete dangling (temp/interim) images. &lt;br /&gt;
# These are not associated with end-product image or containers.&lt;br /&gt;
docker image prune&lt;br /&gt;
docker image prune -a #remove all images not associated with any container &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cleaning up space by removing docker objects ==&lt;br /&gt;
This applies to both standalone Docker and swarm systems.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker system df     #show disk usage&lt;br /&gt;
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE&lt;br /&gt;
Images              1                   0                   131.7MB             131.7MB (100%)&lt;br /&gt;
Containers          0                   0                   0B                  0B&lt;br /&gt;
Local Volumes       0                   0                   0B                  0B&lt;br /&gt;
Build Cache         0                   0                   0B                  0B&lt;br /&gt;
&lt;br /&gt;
docker network ls #note all networks below are system created, so won't get removed&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
452b1c428209        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
&lt;br /&gt;
docker system prune #removes objects created by a user only, on the current node only&lt;br /&gt;
                    #add --volumes to remove them as well&lt;br /&gt;
WARNING! This will remove:&lt;br /&gt;
        - all stopped containers&lt;br /&gt;
        - all networks not used by at least one container&lt;br /&gt;
        - all dangling images&lt;br /&gt;
        - all dangling build cache&lt;br /&gt;
Are you sure you want to continue? [y/N]&lt;br /&gt;
&lt;br /&gt;
docker system prune -a --volumes #remove all&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Docker Volumes ==&lt;br /&gt;
Docker's 'copy-on-write' philosophy drives both performance and efficiency. Only the top layer is writable, and it is a delta of the underlying layers.&lt;br /&gt;
&lt;br /&gt;
Volumes can be mounted to your container instances from your underlying host systems.&lt;br /&gt;
&lt;br /&gt;
''_data'' volumes are able to bypass the storage driver, since they represent a file/directory on the host filesystem under /var/lib/docker. As a result, their contents are not affected when a container is removed.&lt;br /&gt;
&lt;br /&gt;
Volumes are data mounts created on a host in the &amp;lt;code&amp;gt;/var/lib/docker/volumes/&amp;lt;/code&amp;gt; directory and referenced by name in a Dockerfile.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker volume ls                   #list volumes created by VOLUME directive in a Dockerfile&lt;br /&gt;
sudo tree /var/lib/docker/volumes/ #list volumes on host-side&lt;br /&gt;
docker volume create  my-vol-1&lt;br /&gt;
docker volume inspect my-vol-1&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;CreatedAt&amp;quot;: &amp;quot;2019-01-17T08:47:01Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;local&amp;quot;,&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Mountpoint&amp;quot;: &amp;quot;/var/lib/docker/volumes/my-vol-1/_data&amp;quot;,&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;my-vol-1&amp;quot;,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;local&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using volumes with Swarm services &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run  -d --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount httpd #container&lt;br /&gt;
docker service create --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #swarm service&lt;br /&gt;
# --volume|-v is not supported with services, use --mount; the volume will be created on each swarm node when needed,&lt;br /&gt;
# but files will not be replicated between nodes&lt;br /&gt;
&lt;br /&gt;
docker exec -it web1 /bin/bash #connect to the container&lt;br /&gt;
root@c123:/ echo &amp;quot;Created when connected to container: volume-web1&amp;quot; &amp;gt; /internal-mount/local.txt; exit&lt;br /&gt;
&lt;br /&gt;
# prove the file is in the volume on the host filesystem&lt;br /&gt;
user@dockerhost$ cat /var/lib/docker/volumes/my-vol-1/_data/local.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Host storage mount&lt;br /&gt;
Bind mounting binds a host filesystem directory to a container directory. It is not mounting a volume, so it does not require a mount point and a volume on the host.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir /home/user/web1&lt;br /&gt;
echo &amp;quot;web1 index&amp;quot; &amp;gt; /home/user/web1/index.html&lt;br /&gt;
docker container run -d --name testweb -p 80:80 --mount type=bind,source=/home/user/web1,target=/usr/local/apache2/htdocs httpd&lt;br /&gt;
curl http://localhost:80&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Removing a service is not going to remove the volume unless you delete the volume itself. In that case it will be removed from all swarm nodes.&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
=== Container Network Model ===&lt;br /&gt;
It's a network implementation concept built on multiple private networks overlaid across multiple hosts and managed by IPAM, the protocol that keeps track of and provisions addresses.&lt;br /&gt;
&lt;br /&gt;
Main 3 components:&lt;br /&gt;
* sandbox -  contains the configuration of a container's network stack, incl. management of interfaces, routing and DNS. An implementation of a Sandbox could be e.g. a Linux Network Namespace. A Sandbox may contain many endpoints from multiple networks.&lt;br /&gt;
* endpoint - joins a Sandbox to a Network. Interfaces, switches, ports, etc.; an Endpoint belongs to only one network at a time. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability.&lt;br /&gt;
* network - a collection of endpoints that can communicate directly (bridges, VLANs, etc.) and can consist of 1 to N endpoints&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Container-network-model.png|none||left|Container Network Model]]&lt;br /&gt;
&lt;br /&gt;
;IPAM (Internet Protocol Address Management)&lt;br /&gt;
Managing addresses across multiple hosts on separate physical networks, while providing routing to the underlying swarm networks externally, is ''the IPAM problem'' for Docker. Depending on the network driver choice, IPAM is handled at different layers in the stack. ''Network drivers'' enable IPAM through ''DHCP drivers'' or plugin drivers, so complex implementations that would normally have overlapping addresses are supported.&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://success.docker.com/article/networking Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks]&lt;br /&gt;
&lt;br /&gt;
=== Publish exposed container/service ports ===&lt;br /&gt;
;Publishing modes&lt;br /&gt;
;host: set using &amp;lt;code&amp;gt;--publish mode=host,published=8080,target=80&amp;lt;/code&amp;gt;, makes ports available only on the underlying host system where a service task runs, not on the other hosts; defeats the ''routing mesh'' so the user is responsible for routing&lt;br /&gt;
;ingress: provides the ''routing mesh''; makes sure all published ports are available on all hosts in the swarm cluster, regardless of whether a service replica is running on them or not&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&lt;br /&gt;
# Publish port&lt;br /&gt;
                                          host  :  container&lt;br /&gt;
                                             \  :  /&lt;br /&gt;
docker container run -d --name web1 --publish 81:80 httpd&lt;br /&gt;
# --publish | -p :- publish to host exposed container port&lt;br /&gt;
# 81             :- port on the host; can use a range e.g. 81-85, in which case the first available port in the range will be used&lt;br /&gt;
# 80             :- exposed port on a container&lt;br /&gt;
&lt;br /&gt;
ss -lnt&lt;br /&gt;
State       Recv-Q Send-Q Local Address:Port Peer Address:Port&lt;br /&gt;
LISTEN      0      100        127.0.0.1:25              *:*&lt;br /&gt;
LISTEN      0      128                *:22              *:*&lt;br /&gt;
LISTEN      0      100              ::1:25             :::*&lt;br /&gt;
LISTEN      0      128               :::81             :::*&lt;br /&gt;
LISTEN      0      128               :::22             :::*&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name web1 --publish-all httpd&lt;br /&gt;
# --publish-all | -P :- publish all container exposed ports to random host ports above 32768&lt;br /&gt;
docker container ls&lt;br /&gt;
CONTAINER ID IMAGE COMMAND              CREATED STATUS PORTS                   NAMES&lt;br /&gt;
c63efe9cbb94 httpd &amp;quot;httpd-foreground&amp;quot;   2 sec.. Up 1 s 80/tcp                  testweb  #port exposed but not published&lt;br /&gt;
cb0711134eb5 httpd &amp;quot;httpd-foreground&amp;quot;   4 sec.. Up 2 s 0.0.0.0:32769-&amp;gt;80/tcp   testweb1 #port exposed and published to host:32769&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Network drivers ===&lt;br /&gt;
The default network for a single docker host is the ''bridge'' network.&lt;br /&gt;
&lt;br /&gt;
;List of Native (part of Docker Engine) Network Drivers:&lt;br /&gt;
;bridge: default on stand-alone hosts; it's a private network internal to the host system, all containers on this host using the bridge network can communicate; external access is granted by port exposure or static routes added with the host as the gateway for that network&lt;br /&gt;
;none: used when a container does not need any networking; it can still be accessed from the host using the &amp;lt;code&amp;gt;docker attach [containerID]&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker exec -it [containerID]&amp;lt;/code&amp;gt; commands&lt;br /&gt;
;host: aka ''Host Only Networking'', only accessible via the underlying host; access to services can be provided by exposing ports to the host system&lt;br /&gt;
;overlay: swarm-scope driver, allows communication among all Docker daemons in a cluster, self-extending if needed, managed by the Swarm manager; it's the default mode of Swarm communication&lt;br /&gt;
;ingress: extended network across all nodes in the cluster; a special overlay network that load balances network traffic amongst a given service's worker nodes; maintains a list of all IP addresses of nodes that participate in that service (using the IPVS module) and, when a request comes in, routes it to one of them for the indicated service; provides the ''routing mesh'' that allows services to be exposed to the external network without having a replica running on every node in the Swarm&lt;br /&gt;
;docker gateway bridge: special bridge network that allows overlay networks (incl. ingress) to access an individual Docker daemon's physical network; every container run within a service is connected to the local Docker daemon's host network; automatically created when Docker initialises or joins a swarm via the &amp;lt;code&amp;gt;docker swarm init&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker swarm join&amp;lt;/code&amp;gt; commands.&lt;br /&gt;
&lt;br /&gt;
;Docker interfaces&lt;br /&gt;
* &amp;lt;code&amp;gt;docker0&amp;lt;/code&amp;gt; - adapter is installed by default during Docker setup and will be assigned an address range that will determine the local host IPs available to the containers running on it&lt;br /&gt;
&lt;br /&gt;
;Default bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network ls #default networks list&lt;br /&gt;
NETWORK ID    NAME                DRIVER   SCOPE&lt;br /&gt;
130833da0920  bridge              bridge   local&lt;br /&gt;
528db1bf80f1  docker_gwbridge     bridge   local&lt;br /&gt;
832c8c6d73a5  host                host     local&lt;br /&gt;
t8jxy5vsy5on  ingress             overlay  swarm  #'ingress' special network 1 per cluster&lt;br /&gt;
815a9c2c4005  none                null     local&lt;br /&gt;
&lt;br /&gt;
docker network inspect bridge #bridge is a default network containers are deployed to&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name web1 -p 8080:80 httpd #expose container port :80 -&amp;gt; :8080 on the docker host&lt;br /&gt;
docker container inspect web1 | grep IPAdd&lt;br /&gt;
docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1 #get container ip&lt;br /&gt;
curl http://$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=bridge --subnet=192.168.1.0/24 --opt &amp;quot;com.docker.network.driver.mtu&amp;quot;=1501 deviceeth0&lt;br /&gt;
&lt;br /&gt;
docker network ls&lt;br /&gt;
docker network inspect deviceeth0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create overlay network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
docker network ls &lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
130833da0920        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect network&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
docker network inspect overlay0&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;overlay0&amp;quot;,&lt;br /&gt;
        &amp;quot;Id&amp;quot;: &amp;quot;2x6bq1czzdc102sl6ge7gpm3w&amp;quot;,&lt;br /&gt;
        &amp;quot;Created&amp;quot;: &amp;quot;2019-01-19T11:24:02.146339562Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;swarm&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;overlay&amp;quot;,&lt;br /&gt;
        &amp;quot;EnableIPv6&amp;quot;: false,&lt;br /&gt;
        &amp;quot;IPAM&amp;quot;: {&lt;br /&gt;
            &amp;quot;Driver&amp;quot;: &amp;quot;default&amp;quot;,&lt;br /&gt;
            &amp;quot;Options&amp;quot;: null,&lt;br /&gt;
            &amp;quot;Config&amp;quot;: [&lt;br /&gt;
                {&lt;br /&gt;
                    &amp;quot;Subnet&amp;quot;: &amp;quot;192.168.1.0/24&amp;quot;,&lt;br /&gt;
                    &amp;quot;Gateway&amp;quot;: &amp;quot;192.168.1.1&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            ]&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Internal&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Attachable&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Ingress&amp;quot;: false,&lt;br /&gt;
        &amp;quot;ConfigFrom&amp;quot;: {&lt;br /&gt;
            &amp;quot;Network&amp;quot;: &amp;quot;&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;ConfigOnly&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Containers&amp;quot;: null,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {&lt;br /&gt;
            &amp;quot;com.docker.network.driver.overlay.vxlanid_list&amp;quot;: &amp;quot;4097&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: null&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect container network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container inspect testweb --format {{.HostConfig.NetworkMode}}&lt;br /&gt;
overlay0&lt;br /&gt;
docker container inspect testweb --format {{.NetworkSettings.Networks.dev_bridge.IPAddress}}&lt;br /&gt;
192.168.1.3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Connecting to or disconnecting from a network can be done while a container is running. Connecting won't disconnect the container from its current network.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network connect --ip=192.168.1.10 deviceeth0 web1&lt;br /&gt;
docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1&lt;br /&gt;
docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.deviceeth0.IPAddress}}&amp;quot; web1&lt;br /&gt;
curl http://$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.deviceeth0.IPAddress}}&amp;quot; web1)&lt;br /&gt;
docker network disconnect deviceeth0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Overlay network in Swarm cluster ===&lt;br /&gt;
An overlay network can be created/removed/updated like any other docker object. It allows inter-service (container) communication, where the &amp;lt;code&amp;gt;--gateway&amp;lt;/code&amp;gt; ip address is used to reach the outside, e.g. the Internet or the host network. When creating the &amp;lt;code&amp;gt;overlay&amp;lt;/code&amp;gt; network on the manager host, it will get created on worker nodes only when it is referenced by a service that uses it. See below.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
swarm-mgr$ docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
swarm-mgr$ docker service create --name web1 -p 8080:80 --network=overlay0 --replicas 2 httpd&lt;br /&gt;
uvxymzdkcfwvs2oznbnk7nv03&lt;br /&gt;
overall progress: 2 out of 2 tasks &lt;br /&gt;
1/2: running   [==================================================&amp;gt;] &lt;br /&gt;
2/2: running   [==================================================&amp;gt;] &lt;br /&gt;
&lt;br /&gt;
swarm-wkr$ docker network ls&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
ba175ebd2a6f        bridge              bridge              local&lt;br /&gt;
a5848f607d8c        docker_gwbridge     bridge              local&lt;br /&gt;
fccfb9c1fdc3        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
127b10783faa        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&lt;br /&gt;
# remove the network; only affects newly created services, not the running ones&lt;br /&gt;
swarm-mgr$ docker service update --network-rm=overlay0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker container run -d --name testweb1 -P --dns=8.8.8.8 \&lt;br /&gt;
                                           --dns=8.8.4.4 \&lt;br /&gt;
                                           --dns-search &amp;quot;mydomain.local&amp;quot; \&lt;br /&gt;
                                           httpd&lt;br /&gt;
# -P :- publish-all exposed ports to random port &amp;gt;32768&lt;br /&gt;
&lt;br /&gt;
docker container exec -it testweb1 /bin/bash -c 'cat /etc/resolv.conf'&lt;br /&gt;
search us-east-2.compute.internal&lt;br /&gt;
nameserver 8.8.8.8&lt;br /&gt;
nameserver 8.8.4.4&lt;br /&gt;
&lt;br /&gt;
# System wide settings, requires docker.service restart&lt;br /&gt;
cat &amp;gt; /etc/docker/daemon.json &amp;lt;&amp;lt;EOF&lt;br /&gt;
{ &lt;br /&gt;
  &amp;quot;dns&amp;quot;: [&amp;quot;8.8.8.8&amp;quot;, &amp;quot;8.8.4.4&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
sudo systemctl restart docker.service #required&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
== Lint - best practices ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ docker run --rm -i hadolint/hadolint &amp;lt; Dockerfile&lt;br /&gt;
/dev/stdin:9:16 unexpected newline expecting &amp;quot;\ &amp;quot;, '=', a space followed by the value for the variable 'MAC_ADDRESS', or at least one space&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Default project ==&lt;br /&gt;
As good practice, all Docker files should be source controlled. The basic self-explanatory structure can look like the one below, and the skeleton can be created with the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir APROJECT &amp;amp;&amp;amp; d=$_; touch $d/{build.sh,run.sh,Dockerfile,README.md,VERSION};mkdir $d/assets; touch $_/{entrypoint.sh,install.sh}&lt;br /&gt;
&lt;br /&gt;
└── APROJECT&lt;br /&gt;
    ├── assets&lt;br /&gt;
    │   ├── entrypoint.sh&lt;br /&gt;
    │   └── install.sh&lt;br /&gt;
    ├── build.sh&lt;br /&gt;
    ├── Dockerfile&lt;br /&gt;
    ├── README.md&lt;br /&gt;
    └── VERSION&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
A &amp;lt;code&amp;gt;Dockerfile&amp;lt;/code&amp;gt; is simply a build file.&lt;br /&gt;
=== Semantics ===&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt;: Container config: what to start when this image is run.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cmd&amp;lt;/code&amp;gt;: Docker allows you to define an Entrypoint and Cmd which you can mix and match in a Dockerfile. Entrypoint is the executable, and Cmd are the arguments passed to the Entrypoint. The Dockerfile schema is quite lenient and allows users to set Cmd without Entrypoint, which means that the first argument in Cmd will be the executable to run.&lt;br /&gt;
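A minimal sketch of how the two combine (hypothetical image; the comments show the effective command line):&lt;br /&gt;

```dockerfile
FROM ubuntu:22.04
# ENTRYPOINT is the executable; CMD supplies the default arguments
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
# docker run IMAGE          runs: ping -c 3 localhost
# docker run IMAGE 8.8.8.8  runs: ping -c 3 8.8.8.8  (arguments replace CMD)
```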
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
RUN addgroup --gid 1001 jenkins -q&lt;br /&gt;
RUN adduser  --gid 1001 --home /tank --disabled-password --gecos '' --uid 1001 jenkins&lt;br /&gt;
# --gid add user to group 1001&lt;br /&gt;
# --gecos parameter is used to set the additional information. In this case it is just empty.&lt;br /&gt;
# --disabled-password it's like  --disabled-login,  but  logins  are still possible (for example using SSH RSA keys) but not using password authentication&lt;br /&gt;
USER jenkins:jenkins #sets user for next RUN, CMD and ENTRYPOINT command&lt;br /&gt;
WORKDIR /tank #changes cwd for next RUN, CMD, ENTRYPOINT, COPY and ADD&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Multiple stage build ===&lt;br /&gt;
Introduced in Docker 17.06, multiple &amp;lt;code&amp;gt;FROM&amp;lt;/code&amp;gt; statements are allowed in one Dockerfile, enabling multi-stage builds.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
FROM microsoft/aspnetcore-build AS build-env&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
&lt;br /&gt;
# copy csproj and restore as distinct layers&lt;br /&gt;
COPY *.csproj ./&lt;br /&gt;
RUN dotnet restore&lt;br /&gt;
&lt;br /&gt;
# copy everything else and build&lt;br /&gt;
COPY . ./&lt;br /&gt;
RUN dotnet publish -c Release -o output&lt;br /&gt;
&lt;br /&gt;
# build runtime image&lt;br /&gt;
FROM microsoft/aspnetcore&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
COPY --from=build-env /app/output .   #multi stage: copy files from previous container [as build-env]&lt;br /&gt;
ENTRYPOINT [&amp;quot;dotnet&amp;quot;, &amp;quot;LetsKube.dll&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Squash an image =&lt;br /&gt;
Docker uses a &amp;lt;code&amp;gt;union&amp;lt;/code&amp;gt; filesystem that allows multiple volumes (layers) to share common content and overrides changes by applying them on the top layer.&lt;br /&gt;
There is no official way to ''flatten'' layers into a single storage layer or minimise an image size (as of 2017). Below is just a practical approach.&lt;br /&gt;
# Start a container from an image&lt;br /&gt;
# Export the container to &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; with all its filesystems&lt;br /&gt;
# Import the archive under a new image name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the process completes and the original image gets deleted, the &amp;lt;code&amp;gt;docker image history&amp;lt;/code&amp;gt; command will show only one layer for the new image. Often the image will be smaller.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# run a container from an image&lt;br /&gt;
docker run myweb:v3&lt;br /&gt;
# export the container's filesystem to .tar (flattens all layers)&lt;br /&gt;
docker export &amp;lt;contr_name&amp;gt; &amp;gt; myweb.v3.tar&lt;br /&gt;
# import as a new single-layer image&lt;br /&gt;
docker import myweb.v3.tar myweb:v4&lt;br /&gt;
# note: save/load operate on images and preserve layers, so they will not squash&lt;br /&gt;
docker save &amp;lt;image_id&amp;gt; &amp;gt; image.tar&lt;br /&gt;
docker load &amp;lt; image.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
*[https://github.com/jwilder/docker-squash docker-squash] GitHub&lt;br /&gt;
&lt;br /&gt;
= Gracefully stop / kill a container =&lt;br /&gt;
''all below are only notes''&lt;br /&gt;
&lt;br /&gt;
Trap ctrl_c then kill/rm the container.&lt;br /&gt;
*--init :- run an init process as PID 1 that forwards signals and reaps processes&lt;br /&gt;
*--sig-proxy :- proxies received signals to the process; true by default, but only works when --tty=false&lt;br /&gt;
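A minimal bash sketch of the trap approach; the container name ''myweb'' is hypothetical and the docker command is commented out so the script runs without a daemon:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Trap Ctrl+C (SIGINT) / SIGTERM, then clean up the container before exiting.
CLEANED=0
cleanup() {
  echo "caught signal, removing container"
  # docker rm -f myweb   # hypothetical container; would run in a real wrapper
  CLEANED=1
}
trap cleanup INT TERM

echo "container running (pid $$)"
kill -INT $$   # simulate pressing Ctrl+C
echo "after trap: CLEANED=$CLEANED"
```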
&lt;br /&gt;
= Proxy =&lt;br /&gt;
If you are behind a corporate proxy, you should use the Docker client &amp;lt;code&amp;gt;~/.docker/config.json&amp;lt;/code&amp;gt; config file. It requires Docker&lt;br /&gt;
17.07 or later.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;proxies&amp;quot;:&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;default&amp;quot;:&lt;br /&gt;
   {&lt;br /&gt;
     &amp;quot;httpProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;httpsProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;noProxy&amp;quot;: &amp;quot;localhost,127.0.0.1,*.test.example.com,.example2.com&amp;quot;&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
More details can be found [https://docs.docker.com/network/proxy/#configure-the-docker-client here].&lt;br /&gt;
&lt;br /&gt;
== Insecure proxy ==&lt;br /&gt;
These settings can be added in different places; the order below reflects the latest practices and versioning.&lt;br /&gt;
;docker-ce 18.6&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;localhost:443&amp;quot;,&amp;quot;10.0.0.0/8&amp;quot;, &amp;quot;172.16.0.0/12&amp;quot;, &amp;quot;192.168.0.0/16&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
sudo systemctl show docker | grep Env&lt;br /&gt;
docker info #check Insecure Registries&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Using environment file, prior version 18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo vi /etc/default/docker&lt;br /&gt;
DOCKER_HOME='--graph=/tank/docker'&lt;br /&gt;
DOCKER_GROUP='--group=docker'&lt;br /&gt;
DOCKER_LOG_DRIVER='--log-driver=json-file'&lt;br /&gt;
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'&lt;br /&gt;
DOCKER_ICC='--icc=false'&lt;br /&gt;
DOCKER_IPMASQ='--ip-masq=true'&lt;br /&gt;
DOCKER_IPTABLES='--iptables=true'&lt;br /&gt;
DOCKER_IPFORWARD='--ip-forward=true'&lt;br /&gt;
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'&lt;br /&gt;
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'&lt;br /&gt;
DOCKER_OPTS=&amp;quot;${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Docker Application Container Engine&lt;br /&gt;
Documentation=https://docs.docker.com&lt;br /&gt;
After=network-online.target docker.socket firewalld.service&lt;br /&gt;
Wants=network-online.target&lt;br /&gt;
Requires=docker.socket&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
Type=notify&lt;br /&gt;
# the default is not to use systemd for cgroups because the delegate issues still&lt;br /&gt;
# exists and systemd currently does not support the cgroup feature set required&lt;br /&gt;
# for containers run by docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd -H fd://&lt;br /&gt;
ExecReload=/bin/kill -s HUP $MAINPID&lt;br /&gt;
LimitNOFILE=1048576&lt;br /&gt;
# Having non-zero Limit*s causes performance problems due to accounting overhead&lt;br /&gt;
# in the kernel. We recommend using cgroups to do container-local accounting.&lt;br /&gt;
LimitNPROC=infinity&lt;br /&gt;
LimitCORE=infinity&lt;br /&gt;
# Uncomment TasksMax if your systemd version supports it.&lt;br /&gt;
# Only systemd 226 and above support this version.&lt;br /&gt;
TasksMax=infinity&lt;br /&gt;
TimeoutStartSec=0&lt;br /&gt;
# set delegate yes so that systemd does not reset the cgroups of docker containers&lt;br /&gt;
Delegate=yes&lt;br /&gt;
# kill only the docker process, not all processes in the cgroup&lt;br /&gt;
KillMode=process&lt;br /&gt;
# restart the docker process if it exits prematurely&lt;br /&gt;
Restart=on-failure&lt;br /&gt;
StartLimitBurst=3&lt;br /&gt;
StartLimitInterval=60s&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run docker without sudo ==&lt;br /&gt;
Adding a user to the &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt; group should be sufficient. However, on AppArmor, SELinux, or a filesystem with ACLs enabled, additional permissions might be required to access the &amp;lt;tt&amp;gt;socket file&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ ll /var/run/docker.sock&lt;br /&gt;
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=&lt;br /&gt;
# ACL&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&lt;br /&gt;
# Grant an ACL to the jenkins user&lt;br /&gt;
$ sudo setfacl -m user:jenkins:rw /var/run/docker.sock&lt;br /&gt;
&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
user:jenkins:rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
mask::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;References&lt;br /&gt;
* [https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo how-can-i-use-docker-without-sudo]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.weave.works/blog/my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies/ my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies]&lt;br /&gt;
*[https://github.com/moby/moby/pull/12228 PID1 in container aka tinit]&lt;br /&gt;
*[https://container-solutions.com/understanding-volumes-docker/ understanding-volumes-docker]&lt;br /&gt;
&lt;br /&gt;
= Docker Enterprise Edition =&lt;br /&gt;
*[https://success.docker.com/article/compatibility-matrix Compatibility Matrix]&lt;br /&gt;
Components:&lt;br /&gt;
* Docker daemon (fka &amp;quot;Engine&amp;quot;)&lt;br /&gt;
* Docker Trusted Registry (DTR)&lt;br /&gt;
* Docker Universal Control Plane (UCP)&lt;br /&gt;
&lt;br /&gt;
= Docker Swarm =&lt;br /&gt;
== Swarm - sizing ==&lt;br /&gt;
;Universal Control Plane (UCP)&lt;br /&gt;
This is only available in the Enterprise Edition&lt;br /&gt;
* requires ports open between managers and workers, inbound and outbound&lt;br /&gt;
&lt;br /&gt;
Hardware requirements:&lt;br /&gt;
* 8 GB RAM for managers or DTR (Docker Trusted Registry)&lt;br /&gt;
* 4 GB RAM for workers&lt;br /&gt;
* 3 GB free disk space&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Performance Consideration (Timing)&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Component                              Timeout(ms)  Configurable&lt;br /&gt;
Raft consensus between manager nodes   3000         no&lt;br /&gt;
Gossip protocol for overlay networking 5000         no&lt;br /&gt;
etcd                                   500          yes&lt;br /&gt;
RethinkDB                              10000        no&lt;br /&gt;
Stand-alone swarm                      90000        no&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Compatibility Docker EE&lt;br /&gt;
* Docker Engine 17.06+&lt;br /&gt;
* DTR 2.3+&lt;br /&gt;
* UCP 2.2+&lt;br /&gt;
== Swarm with single host manager ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Initialise Swarm&lt;br /&gt;
docker swarm init --advertise-addr 172.31.16.10 #you get an SWMTKN- join token&lt;br /&gt;
To add a worker to this swarm, run the following command:&lt;br /&gt;
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.&lt;br /&gt;
&lt;br /&gt;
# Join tokens&lt;br /&gt;
docker swarm join-token manager #display manager join-token, run on manager&lt;br /&gt;
docker swarm join-token worker  #display worker  join-token, run on manager&lt;br /&gt;
&lt;br /&gt;
# Join worker, run new-worker-node&lt;br /&gt;
#                                 -&amp;gt;            swarm cluster id                    &amp;lt;-&amp;gt; this part is mgr/wkr &amp;lt;- -&amp;gt; mgr node &amp;lt;-&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
&lt;br /&gt;
# Join another manager, run on new-manager-node&lt;br /&gt;
docker swarm join-token manager #run on the primary manager if you wish to add another manager&lt;br /&gt;
# the output contains a token; the first part (up to the dash) identifies the Swarm cluster, the remainder identifies the role&lt;br /&gt;
&lt;br /&gt;
# join the swarm (cluster); the token determines the role in the cluster: manager or worker&lt;br /&gt;
docker swarm join --token SWMTKN-xxxx&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
This node joined a swarm as a worker.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
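The token structure described above can be illustrated with plain shell string handling, using the sample token from this section (no swarm needed):&lt;br /&gt;

```shell
# Split the sample join token into its dash-separated fields with cut.
# Field 3 identifies the swarm cluster; field 4 differs per role (manager/worker).
TOKEN=SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo
echo "cluster part: $(echo "$TOKEN" | cut -d- -f3)"
echo "role part:    $(echo "$TOKEN" | cut -d- -f4)"
```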
&lt;br /&gt;
&lt;br /&gt;
Check Swarm status&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls&lt;br /&gt;
ID                            HOSTNAME                          STATUS   AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active       Leader         18.09.0&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0&lt;br /&gt;
&lt;br /&gt;
docker system info | grep -A 7 Swarm&lt;br /&gt;
Swarm: active&lt;br /&gt;
 NodeID: 641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
 Is Manager: true&lt;br /&gt;
 ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq&lt;br /&gt;
 Managers: 1&lt;br /&gt;
 Nodes: 2&lt;br /&gt;
 Default Address Pool: 10.0.0.0/8  &lt;br /&gt;
 SubnetSize: 24&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
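As a small sketch, the &amp;lt;code&amp;gt;docker node ls&amp;lt;/code&amp;gt; output above (captured here as sample text, no daemon needed) can be summarised with awk, e.g. to count Ready nodes:&lt;br /&gt;

```shell
# Sample `docker node ls` output captured as plain text
cat > /tmp/node-ls.txt <<'EOF'
ID                            HOSTNAME                          STATUS   AVAILABILITY MANAGER STATUS ENGINE VERSION
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active       Leader         18.09.0
vlw7te728z7bvd7ulb3hn08am     ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0
EOF
# NR > 1 skips the header row; count rows containing "Ready"
awk 'NR > 1 && /Ready/ { n++ } END { print n " node(s) Ready" }' /tmp/node-ls.txt
```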
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo systemctl disable firewalld &amp;amp;&amp;amp; sudo systemctl stop firewalld # CentOS&lt;br /&gt;
sudo -i; printf &amp;quot;\n10.0.0.11 mgr01\n10.0.0.12 node01\n&amp;quot; &amp;gt;&amp;gt; /etc/hosts # Add nodes to hosts file; exit&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Swarm cluster ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --availability drain [node] #drain services for manager-only nodes&lt;br /&gt;
docker service update --force [service_name]  #force re-balance services across cluster&lt;br /&gt;
&lt;br /&gt;
docker swarm leave #node leaves a cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Locking / unlocking swarm cluster ==&lt;br /&gt;
The Raft logs used by Swarm managers are encrypted on disk, but access to a node also gives access to the keys that encrypt them. Auto-lock further protects the cluster by requiring an unlock key when restarting managers/nodes.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm init   --auto-lock=true #initialise with auto-lock enabled&lt;br /&gt;
docker swarm update --auto-lock=true #update current swarm&lt;br /&gt;
# both will produce unlock token STKxxx&lt;br /&gt;
docker swarm unlock #it'll ask for the unlock token&lt;br /&gt;
docker swarm update --auto-lock=false #disable key locking&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have access to a manager you can always retrieve the unlock key using:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Key management&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key --rotate #could be in a cron&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Backup and restore swarm cluster ==&lt;br /&gt;
This process describes how to back up the whole cluster configuration so it can be restored on a new set of servers.&lt;br /&gt;
&lt;br /&gt;
Create a docker service running across the swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name bkweb --publish 80:80 --replicas 2 httpd&lt;br /&gt;
$ docker service ls&lt;br /&gt;
ID           NAME      MODE          REPLICAS  IMAGE         PORTS&lt;br /&gt;
q9jki3n2hffm bkweb     replicated    2/2       httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
$ docker service ps bkweb #note containers run on 2 different nodes&lt;br /&gt;
ID           NAME      IMAGE         NODE                      DESIRED STATE CURRENT STATE          &lt;br /&gt;
j964jm1lq3q5 bkweb.1   httpd:latest  server2c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
jpjx3mk7hhm0 bkweb.2   httpd:latest  server1c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Backup state files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i&lt;br /&gt;
cd /var/lib/docker/swarm&lt;br /&gt;
cat docker-state.json #contains info about managers, workers, certificates, etc..&lt;br /&gt;
cat state.json&lt;br /&gt;
sudo systemctl stop docker.service&lt;br /&gt;
&lt;br /&gt;
# Back up the swarm cluster; this archive can then be used to recover the whole swarm cluster on another set of servers&lt;br /&gt;
sudo tar -czvf swarm.tar.gz /var/lib/docker/swarm/&lt;br /&gt;
&lt;br /&gt;
#the running docker containers should be brought up as they were before stopping the service&lt;br /&gt;
systemctl start docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recover using swarm.tar backup&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# scp swarm.tar.gz to the recovery node - typically a node with a fresh docker install&lt;br /&gt;
sudo rm -rf /var/lib/docker/swarm&lt;br /&gt;
sudo systemctl stop docker&lt;br /&gt;
&lt;br /&gt;
# Option1 untar directly&lt;br /&gt;
sudo tar -xzvf swarm.tar.gz -C /var/lib/docker/swarm&lt;br /&gt;
&lt;br /&gt;
# Option2 copy recursively, -f overwrites if a file exists&lt;br /&gt;
tar -xzvf swarm.tar.gz; cd /var/lib/docker&lt;br /&gt;
cp -rf swarm/ /var/lib/docker/&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start docker&lt;br /&gt;
docker swarm init --force-new-cluster # produces exactly the same token&lt;br /&gt;
# you should join all required nodes to the new manager IP&lt;br /&gt;
# scale services down to 1, then scale back up so they get distributed to the other nodes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
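The backup/restore mechanics above can be rehearsed on scratch directories (hypothetical &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; paths standing in for &amp;lt;code&amp;gt;/var/lib/docker/swarm&amp;lt;/code&amp;gt;), without stopping any real daemon:&lt;br /&gt;

```shell
# Made-up source and destination trees mirroring /var/lib/docker/swarm
src=/tmp/swarm-src/var/lib/docker/swarm
dst=/tmp/swarm-dst/var/lib/docker/swarm
mkdir -p "$src"
echo '{"managers": 1}' > "$src/docker-state.json"

# Backup: archive the swarm directory, preserving its relative layout
tar -czf /tmp/swarm.tar.gz -C /tmp/swarm-src var/lib/docker/swarm

# Restore: wipe the target and untar into its root with -C
rm -rf /tmp/swarm-dst
mkdir -p /tmp/swarm-dst
tar -xzf /tmp/swarm.tar.gz -C /tmp/swarm-dst
cat "$dst/docker-state.json"
```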
&lt;br /&gt;
== Run containers as a services ==&lt;br /&gt;
A standalone Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing, etc., is a way to scale with ease. A service uses e.g. the routing mesh to handle access to its containers.&lt;br /&gt;
&lt;br /&gt;
Swarm node setup: 1 manager and 2 workers&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready   Active Leader       18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready   Active              18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready   Active              18.09.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create and run a service&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull httpd&lt;br /&gt;
docker service create --name serviceweb --publish 80:80 httpd&lt;br /&gt;
# --publish|-p exposes the port on all nodes in the running cluster&lt;br /&gt;
&lt;br /&gt;
docker service ls&lt;br /&gt;
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS&lt;br /&gt;
vt0ftkifbd84        serviceweb          replicated          1/1                 httpd:latest        *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node&lt;br /&gt;
ID           NAME         IMAGE        NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS&lt;br /&gt;
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running       Running about                  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running as a service, even if the container runs on a single node (replicas=1), it can be accessed from any of the swarm nodes. This is because the published port is exposed on the routing mesh network that spans the whole cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A service update can change limits, volumes, environment variables, and more.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service scale serviceweb=3             #or&lt;br /&gt;
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06&lt;br /&gt;
serviceweb&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
&lt;br /&gt;
# Limits (maximum a container may use) and reservations (guaranteed minimum); applying them restarts the service's containers&lt;br /&gt;
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Templating service names ==&lt;br /&gt;
This allows controlling e.g. the hostname within a cluster. Useful in big clusters to identify, from the hostname alone, which node a service runs on.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name web --hostname=&amp;quot;{{.Node.ID}}-{{.Service.Name}}&amp;quot; httpd&lt;br /&gt;
docker service ps --no-trunc web&lt;br /&gt;
docker inspect --format=&amp;quot;{{.Config.Hostname}}&amp;quot; web.1.ab10_serviceID_cd&lt;br /&gt;
aa_nodeID_bb-web&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Node labels for task/service placement ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready  Active              Leader              18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
&lt;br /&gt;
docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty&lt;br /&gt;
ID:                     641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
Hostname:               swarm-mgr-1.example.com &lt;br /&gt;
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc&lt;br /&gt;
Status:&lt;br /&gt;
 State:                 Ready&lt;br /&gt;
 Availability:          Active&lt;br /&gt;
 Address:               172.31.10.10&lt;br /&gt;
Manager Status:&lt;br /&gt;
 Address:               172.31.10.10:2377&lt;br /&gt;
 Raft Status:           Reachable&lt;br /&gt;
 Leader:                Yes&lt;br /&gt;
Platform:&lt;br /&gt;
 Operating System:      linux&lt;br /&gt;
 Architecture:          x86_64&lt;br /&gt;
Resources:&lt;br /&gt;
 CPUs:                  2&lt;br /&gt;
 Memory:                3.699GiB&lt;br /&gt;
Plugins:&lt;br /&gt;
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog&lt;br /&gt;
 Network:               bridge, host, macvlan, null, overlay&lt;br /&gt;
 Volume:                local&lt;br /&gt;
Engine Version:         18.09.1&lt;br /&gt;
TLS Info:&lt;br /&gt;
 TrustRoot:&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw&lt;br /&gt;
..&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h&lt;br /&gt;
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply label to a node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels&lt;br /&gt;
ID:                     r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
Labels:&lt;br /&gt;
 - node-env=testnode&lt;br /&gt;
Hostname:               swarm-wkr-2.example.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How to use it: run a service with the &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; option, which pins services to nodes meeting the given criteria, in our case nodes where &amp;lt;code&amp;gt;node.labels.node-env == testnode&amp;lt;/code&amp;gt;. Note that all replicas end up on the same node, whereas without the constraint they would be distributed across the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname&lt;br /&gt;
zrk15vfdaitc1rvw9wqh2s0ot&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls&lt;br /&gt;
ID           NAME          MODE         REPLICAS IMAGE         PORTS&lt;br /&gt;
zrk15vfdaitc constraints   replicated   3/3      httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
[user@swarm-wkr-2 ~]$ docker service ps constraints&lt;br /&gt;
ID           NAME          IMAGE        NODE                      DESIRED STATE       CURRENT STATE            ERROR               PORTS&lt;br /&gt;
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Scaling services ==&lt;br /&gt;
These commands must be issued on a manager node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull nginx&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
docker service ps web                  #there is only 1 replica&lt;br /&gt;
docker service update --replicas 3 web #update to 3 replicas&lt;br /&gt;
docker service create --name nginx --publish 5901:80 nginx&lt;br /&gt;
elinks http://swarm-mgr-1.example.com:5901     #the nginx website will be presented&lt;br /&gt;
&lt;br /&gt;
# scale is equivalent to update --replicas command for a single or multiple services&lt;br /&gt;
docker service scale web=3 nginx=3&lt;br /&gt;
docker service ls&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Replicated services vs global services ==&lt;br /&gt;
;Global mode: runs at least one copy of the service on each swarm node; even if you join another node, the service will converge there as well. In global mode you cannot use the &amp;lt;code&amp;gt;update --replicas&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;scale&amp;lt;/code&amp;gt; commands, and it is not possible to change the mode of an existing service.&lt;br /&gt;
;Replicated mode: allows greater control and flexibility over the number of running replicas of a service.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# creates a single service running across whole cluster in replicated mode&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
&lt;br /&gt;
# run in global mode&lt;br /&gt;
docker service create --name web --publish 5901:80 --mode global httpd&lt;br /&gt;
docker service ls #note distinct mode names: global and replicated&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Docker compose and deploy to Swarm =&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo yum install epel-release&lt;br /&gt;
sudo yum install python-pip&lt;br /&gt;
sudo pip install --upgrade pip&lt;br /&gt;
# install Docker CE or EE first to avoid Python library conflicts&lt;br /&gt;
sudo pip install docker-compose&lt;br /&gt;
&lt;br /&gt;
# Troubleshooting&lt;br /&gt;
## Err: Cannot uninstall 'requests'. It is a distutils installed project...&lt;br /&gt;
pip install --ignore-installed requests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dockerfile&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;Dockerfile &amp;lt;&amp;lt;EOF&lt;br /&gt;
FROM centos:latest&lt;br /&gt;
RUN yum install -y httpd&lt;br /&gt;
RUN echo &amp;quot;Website1&amp;quot; &amp;gt;&amp;gt; /var/www/html/index.html&lt;br /&gt;
EXPOSE 80&lt;br /&gt;
ENTRYPOINT apachectl -DFOREGROUND&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker compose file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;docker-compose.yml &amp;lt;&amp;lt;EOF&lt;br /&gt;
version: '3'&lt;br /&gt;
services:&lt;br /&gt;
  apiweb1:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    build: .&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;81:80&amp;quot;&lt;br /&gt;
  apiweb2:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;82:80&amp;quot;&lt;br /&gt;
  load-balancer:&lt;br /&gt;
    image: nginx:latest&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;80:80&amp;quot;&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run docker-compose; this deploys on the current node only&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker-compose up -d&lt;br /&gt;
WARNING: The Docker Engine you're using is running in swarm mode.&lt;br /&gt;
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.&lt;br /&gt;
To deploy your application across the swarm, use `docker stack deploy`.&lt;br /&gt;
Creating compose_apiweb2_1       ... done&lt;br /&gt;
Creating compose_apiweb1_1       ... done&lt;br /&gt;
Creating compose_load-balancer_1 ... done&lt;br /&gt;
&lt;br /&gt;
docker ps&lt;br /&gt;
CONTAINER ID IMAGE        COMMAND                 CREATED  STATUS   PORTS              NAMES&lt;br /&gt;
14f8b6b10c2d nginx:latest &amp;quot;nginx -g 'daemon of…&amp;quot;  2 minutesUp 2 min 0.0.0.0:80-&amp;gt;80/tcp compose_load-balancer_1&lt;br /&gt;
e9b5b37fe4e5 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:81-&amp;gt;80/tcp compose_apiweb1_1&lt;br /&gt;
28ee22a8eae0 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:82-&amp;gt;80/tcp compose_apiweb2_1&lt;br /&gt;
&lt;br /&gt;
# Verify&lt;br /&gt;
curl http://localhost:81&lt;br /&gt;
curl http://localhost:82&lt;br /&gt;
curl http://localhost:80 #nginx&lt;br /&gt;
&lt;br /&gt;
# Prep before deploying the compose file to Swarm: images need to be built beforehand,&lt;br /&gt;
# because docker stack does not support building images&lt;br /&gt;
docker-compose down --volumes #stop and remove the containers, networks and volumes&lt;br /&gt;
Stopping compose_load-balancer_1 ... done&lt;br /&gt;
Stopping compose_apiweb1_1       ... done&lt;br /&gt;
Stopping compose_apiweb2_1       ... done&lt;br /&gt;
Removing compose_load-balancer_1 ... done&lt;br /&gt;
Removing compose_apiweb1_1       ... done&lt;br /&gt;
Removing compose_apiweb2_1       ... done&lt;br /&gt;
Removing network compose_default&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Deploy compose to Swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name&lt;br /&gt;
Ignoring unsupported options: build&lt;br /&gt;
Creating network customcompose-stack_default&lt;br /&gt;
Creating service customcompose-stack_apiweb1&lt;br /&gt;
Creating service customcompose-stack_apiweb2&lt;br /&gt;
Creating service customcompose-stack_load-balancer&lt;br /&gt;
&lt;br /&gt;
docker stack services customcompose-stack #or&lt;br /&gt;
docker service ls&lt;br /&gt;
ID           NAME                               MODE       REPLICAS IMAGE        PORTS&lt;br /&gt;
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1      httpd_1:v1   *:81-&amp;gt;80/tcp&lt;br /&gt;
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1   *:82-&amp;gt;80/tcp&lt;br /&gt;
x6p14gmpjyra customcompose-stack_load-balancer  replicated 1/1      nginx:latest *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker stack rm customcompose-stack #remove stack&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a Storage Driver = &lt;br /&gt;
Go to the Docker version matrix to verify which drivers are supported on your platform. Changing the storage driver is destructive and you lose all containers and volumes, so you need to export/back them up and re-import after the storage driver change.&lt;br /&gt;
&lt;br /&gt;
;CentOS&lt;br /&gt;
Device mapper is officially supported on CentOS. It can run on top of a loopback device backed by a file (the default), or directly on a block storage device managed by Docker.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info --format '{{json .Driver}}'&lt;br /&gt;
docker info -f '{{json .}}' | jq .Driver&lt;br /&gt;
docker info | grep Storage&lt;br /&gt;
&lt;br /&gt;
sudo touch /etc/docker/daemon.json&lt;br /&gt;
sudo vi    /etc/docker/daemon.json #additional options are available&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;storage-driver&amp;quot;:&amp;quot;devicemapper&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Preserving any current images requires an export/backup and re-import after the storage driver change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker images&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
ls -l /var/lib/docker/devicemapper #new location for storing images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that in &amp;lt;code&amp;gt;/var/lib/docker&amp;lt;/code&amp;gt; a new directory, &amp;lt;code&amp;gt;devicemapper&amp;lt;/code&amp;gt;, has been created; images are stored there from now on.&lt;br /&gt;
&lt;br /&gt;
;Update 2019 - Docker Engine 18.09.1&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.&lt;br /&gt;
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.&lt;br /&gt;
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a logging driver =&lt;br /&gt;
The full list of [https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers logging drivers] can be seen on the Docker documentation page. The most popular are:&lt;br /&gt;
*none - No logs are available for the container and docker logs does not return any output.&lt;br /&gt;
*json-file - (default) the logs are formatted as JSON.&lt;br /&gt;
*syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.&lt;br /&gt;
*journald - Writes log messages to journald. The journald daemon must be running on the host machine.&lt;br /&gt;
*fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.&lt;br /&gt;
*awslogs - Writes log messages to Amazon CloudWatch Logs.&lt;br /&gt;
*splunk - Writes log messages to splunk using the HTTP Event Collector.&lt;br /&gt;
*etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep logging&lt;br /&gt;
docker container run -d --name &amp;lt;webjson&amp;gt; --log-driver json-file httpd #per-container setup&lt;br /&gt;
docker logs &amp;lt;webjson&amp;gt;&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name &amp;lt;web&amp;gt; httpd #start new container&lt;br /&gt;
docker logs -f &amp;lt;web&amp;gt;         #display standard-out logs&lt;br /&gt;
docker service logs -f &amp;lt;web&amp;gt; #for swarm: logs of all container replicas of a service&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable the syslog logging driver&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo vi /etc/rsyslog.conf&lt;br /&gt;
#uncomment below&lt;br /&gt;
$ModLoad imudp&lt;br /&gt;
$UDPServerRun 514&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start rsyslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Change the logging driver in &amp;lt;code&amp;gt;/etc/docker/daemon.json&amp;lt;/code&amp;gt;. Note that standard output (&amp;lt;code&amp;gt;docker logs&amp;lt;/code&amp;gt;) won't be available after the change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;log-driver&amp;quot;: &amp;quot;syslog&amp;quot;,&lt;br /&gt;
  &amp;quot;log-opts&amp;quot;: {&lt;br /&gt;
    &amp;quot;syslog-address&amp;quot;: &amp;quot;udp://172.31.10.1&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
docker info | grep logging&lt;br /&gt;
tail -f /var/log/messages #this will show all logging, e.g. access logs for the httpd server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Docker daemon logs ==&lt;br /&gt;
System level logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# CentOS&lt;br /&gt;
grep -i docker /var/log/messages&lt;br /&gt;
&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo journalctl -u docker.service --no-hostname&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr 'select(has(&amp;quot;CONTAINER_ID&amp;quot;) | not) | .MESSAGE'&lt;br /&gt;
grep -i docker /var/log/syslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Docker container or service logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container logs [OPTIONS] containerID  #single container logs&lt;br /&gt;
docker service   logs [OPTIONS] service|task #aggregate logs across all container replicas deployed in the cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
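&lt;br /&gt;
A few commonly used options for scoping log output (a sketch; the container name ''web'' is an example):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container logs --tail 100 web      #only the last 100 log lines&lt;br /&gt;
docker container logs --since 10m web     #entries from the last 10 minutes&lt;br /&gt;
docker container logs -f --timestamps web #follow, prefixing each line with a timestamp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;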
&lt;br /&gt;
= Container life-cycle policies - eg. autostart =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run -d --name web --restart &amp;lt;no(default)|on-failure|unless-stopped|always&amp;gt; httpd&lt;br /&gt;
# --restart :- restart policy applied on crash, non-zero exit, or docker daemon/system restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Definitions:&lt;br /&gt;
* always - restarts the container always, even if stopped manually; restarting the docker daemon will start the container again&lt;br /&gt;
* unless-stopped - restarts the container always, unless stopped manually with &amp;lt;code&amp;gt;docker container stop&amp;lt;/code&amp;gt;&lt;br /&gt;
* on-failure - restarts if the container exits with a non-zero exit code&lt;br /&gt;
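&lt;br /&gt;
The effective restart policy of an existing container can be checked, and changed without recreating the container (a sketch; the container name ''web'' follows the example above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' web #show the current policy&lt;br /&gt;
docker container update --restart unless-stopped web       #change the policy in place&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;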
&lt;br /&gt;
= Universal Control Plane - UCP =&lt;br /&gt;
It's an application that allows you to see all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.&lt;br /&gt;
&lt;br /&gt;
;Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)&lt;br /&gt;
* over TCP/UDP - depends on the port, and on whether a response is required or the message is a notification&lt;br /&gt;
* IPC - interprocess communication (intra-host), services on the same node&lt;br /&gt;
* API - over TCP, uses the API directly to query or update components in a cluster&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://docs.docker.com/ee/ucp/ucp-architecture/ UCP architecture]&lt;br /&gt;
&lt;br /&gt;
== Install/uninstall UCP &amp;lt;code&amp;gt;image: docker/ucp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
OS support: &lt;br /&gt;
* UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
For lab purposes we can use e.g. &amp;lt;code&amp;gt;ucp.example.com&amp;lt;/code&amp;gt;; the domain &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt; is included in the UCP and DTR wildcard self-signed certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on a manager node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_MGR_NODE_IP=172.31.101.248&lt;br /&gt;
&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \&lt;br /&gt;
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug&lt;br /&gt;
&lt;br /&gt;
# --rm  :- because this container is only a transitional container&lt;br /&gt;
# -it   :- because we want the installation to be interactive&lt;br /&gt;
# -v    :- link the container with a file on a host&lt;br /&gt;
# --san :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)&lt;br /&gt;
# --host-address    :- IP address or network interface name to advertise to other nodes&lt;br /&gt;
# docker/ucp:2.2.11 :- image version&lt;br /&gt;
# --dns        :- custom DNS servers for the UCP containers&lt;br /&gt;
# --dns-search :- custom DNS search domains for the UCP containers&lt;br /&gt;
# --admin-username &amp;quot;$UCP_USERNAME&amp;quot; --admin-password &amp;quot;$UCP_PASSWORD&amp;quot; #these appear not to be supported, although they are in a guide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not provided you will be asked for: &lt;br /&gt;
* Admin password during the process&lt;br /&gt;
* You may enter additional aliases (SANs) now or press enter to proceed with the above list:&lt;br /&gt;
** Additional aliases: ucp ucp.example.com&lt;br /&gt;
 DEBU[0062] User entered: ucp ucp.ciscolinux.co.uk&lt;br /&gt;
 DEBU[0062] Hostnames: [host1c.mylabserver.com 127.0.0.1 172.17.0.1 172.31.101.248 ucp ucp.ciscolinux.co.uk] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You may want to add DNS entries in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; for&lt;br /&gt;
* ''ucp'' or ''ucp.example.com'' pointing to manager public ip&lt;br /&gt;
* ''dtr'' or ''dtr.example.com'' pointing to the worker node's public IP. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify&lt;br /&gt;
* connect to https://ucp.example.com:443. &lt;br /&gt;
* &amp;lt;code&amp;gt;docker ps&amp;lt;/code&amp;gt; should now show a number of containers running; they need to see each other, which is why we added the &amp;lt;code&amp;gt;hosts&amp;lt;/code&amp;gt; entries.&lt;br /&gt;
&lt;br /&gt;
;Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock \&lt;br /&gt;
  docker/ucp uninstall-ucp --interactive&lt;br /&gt;
&lt;br /&gt;
INFO[0000] Your engine version 18.09.1, build 4c52b90 (4.15.0-1031-aws) is compatible with UCP 3.1.2 (b822777) &lt;br /&gt;
INFO[0000] We're about to uninstall from this swarm cluster. UCP ID: t0ltwwcw5tdbtjo2fxlzmj8p4 &lt;br /&gt;
Do you want to proceed with the uninstall? (y/n): y&lt;br /&gt;
INFO[0000] Uninstalling UCP on each node...             &lt;br /&gt;
INFO[0031] UCP has been removed from this cluster successfully. &lt;br /&gt;
INFO[0033] Removing UCP Services&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Install DTR Docker Trusted Registry &amp;lt;code&amp;gt;image: docker/dtr&amp;lt;/code&amp;gt; ==&lt;br /&gt;
On single-core systems it's recommended to wait 5 minutes after UCP deployment to release more CPU cycles. You can see the load peaking at around 1.0 using the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Connect to the UCP service at https://ucp.example.com and log in with the credentials created. Upload a license.lic file.&lt;br /&gt;
Go to Admin Settings &amp;gt; Docker Trusted Registry &amp;gt; Pick one of UCP Nodes [worker]&lt;br /&gt;
You may disable TLS verification when using a self-signed certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run the given command on the node you want to install DTR on. &amp;lt;code&amp;gt;UCP_NODE&amp;lt;/code&amp;gt; in a lab environment can cause a few issues. For convenience, to avoid port conflicts on :80 and :443, use a different node than the one UCP is installed on, e.g. DNS ''user2c.mylabserver.com'' or its private IP. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_NODE=wkr-172.31.107.250 #for convenience, to avoid port conflicts on :80,:443 use a worker IP&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_URL=https://ucp.example.com:443 #the domain should match the UCP certificate SANs to avoid SSL name validation issues&lt;br /&gt;
docker pull docker/dtr&lt;br /&gt;
&lt;br /&gt;
# Optional. Download UCP public certificate&lt;br /&gt;
curl -k https://ucp.ciscolinux.co.uk/ca &amp;gt; ucp-ca.pem&lt;br /&gt;
&lt;br /&gt;
docker container run -it --rm docker/dtr install \&lt;br /&gt;
  --ucp-node $UCP_NODE --ucp-url $UCP_URL --debug \&lt;br /&gt;
  --ucp-username $UCP_USERNAME --ucp-password $UCP_PASSWORD \&lt;br /&gt;
  --ucp-insecure-tls  # --ucp-ca &amp;quot;$(cat ucp-ca.pem)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# --ucp-node :- hostname/IP of the UCP node (any node managed by UCP) to deploy DTR. Random by default&lt;br /&gt;
# --ucp-url  :- the UCP URL including domain and port.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not specified, it will ask for:&lt;br /&gt;
* ucp-password: known from the UCP installation step&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Significant installation logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
..&lt;br /&gt;
INFO[0006] Only one available UCP node detected. Picking UCP node 'user2c.labserver.com' &lt;br /&gt;
..&lt;br /&gt;
INFO[0006] verifying [80 443] ports on user2c.labserver.com &lt;br /&gt;
..&lt;br /&gt;
INFO[0000] Using default overlay subnet: 10.1.0.0/24    &lt;br /&gt;
INFO[0000] Creating network: dtr-ol                     &lt;br /&gt;
INFO[0000] Connecting to network: dtr-ol                &lt;br /&gt;
..&lt;br /&gt;
INFO[0008] Generated TLS certificate. dnsNames=&amp;quot;[*.com *.*.com example.com *.dtr *.*.dtr]&amp;quot; domains=&amp;quot;[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]&amp;quot; ipAddresses=&amp;quot;[172.17.0.1]&amp;quot;&lt;br /&gt;
..&lt;br /&gt;
INFO[0073] You can use flag '--existing-replica-id 10e168476b49' when joining other replicas to your Docker Trusted Registry Cluster &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by logging in to https://dtr.example.com&lt;br /&gt;
The DTR installation process above has also installed a number of containers named &amp;lt;code&amp;gt;ucp-agent&amp;lt;/code&amp;gt; on manager/worker nodes, and a number of containers on the dedicated DTR node. &lt;br /&gt;
You can verify DTR by logging in to https://dtr.example.com with the UCP credentials &amp;lt;code&amp;gt;ucp-admin&amp;lt;/code&amp;gt; and the same password, if you haven't changed any commands above. You should then be presented with a registry.docker.io-like theme. Any images stored there will be trusted from the perspective of our organisation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by going to UCP https://ucp.example.com, admin settings &amp;gt; Docker Trusted Registry&lt;br /&gt;
[[File:Ucp-dtr-in-admin.png|none|400px|left|Ucp-dtr-in-admin]]&lt;br /&gt;
&lt;br /&gt;
== Backup UCP and DTR  configuration ==&lt;br /&gt;
This is built into UCP. The process starts a special container that exports the UCP configuration to a tar file. This can be run as a &amp;lt;code&amp;gt;cron&amp;lt;/code&amp;gt; job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup &amp;gt; backup.tar&lt;br /&gt;
# --rm it's a transitional container&lt;br /&gt;
# -i run interactively&lt;br /&gt;
&lt;br /&gt;
# On first run it will error with --id m79xxxxxxxxx, asking to re-run the command with this id.&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp restore --id m79xxx &amp;lt; backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;DTR&lt;br /&gt;
During a backup DTR will not be available.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr backup  --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;gt; dtr-backup.tar&lt;br /&gt;
&lt;br /&gt;
# you will be asked:&lt;br /&gt;
# Choose a replica to back up from: press enter&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr restore --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;lt; dtr-backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== UCP RBAC ==&lt;br /&gt;
The main concept is:&lt;br /&gt;
* administrators can make changes to the UCP swarm/kubernetes, User Management, Organisations, Teams and Roles&lt;br /&gt;
* users - a range of access from Full Control of resources to no access&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Ucp-rbac.png|500px|none|left|Ucp-rbac]]&lt;br /&gt;
&lt;br /&gt;
Note that only the Scheduler role allows access to Nodes to view nodes, plus schedule workloads of course.&lt;br /&gt;
&lt;br /&gt;
= UCP Client bundle =&lt;br /&gt;
The UCP client bundle allows you to export a bundle containing a certificate and environment settings that point the docker client to UCP, in order to use the cluster and create images and services.&lt;br /&gt;
&lt;br /&gt;
;Download bundle&lt;br /&gt;
# Create a user with the privileges that you wish the docker client to run as&lt;br /&gt;
# Download a client bundle from User Profile &amp;gt; Client bundle &amp;gt; + New Client Bundle&lt;br /&gt;
# A file &amp;lt;code&amp;gt;ucp-bundle-[username].zip&amp;lt;/code&amp;gt; will get downloaded &amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
unzip ucp-bundle-bob.zip &lt;br /&gt;
Archive:  ucp-bundle-bob.zip&lt;br /&gt;
 extracting: ca.pem                  &lt;br /&gt;
 extracting: cert.pem                &lt;br /&gt;
 extracting: key.pem                 &lt;br /&gt;
 extracting: cert.pub                &lt;br /&gt;
 extracting: env.sh                  &lt;br /&gt;
 extracting: env.ps1                 &lt;br /&gt;
 extracting: env.cmd     &lt;br /&gt;
&lt;br /&gt;
cat env.sh &lt;br /&gt;
export COMPOSE_TLS_VERSION=TLSv1_2&lt;br /&gt;
export DOCKER_TLS_VERIFY=1&lt;br /&gt;
export DOCKER_CERT_PATH=&amp;quot;$PWD&amp;quot;&lt;br /&gt;
export DOCKER_HOST=tcp://3.16.143.49:443&lt;br /&gt;
#&lt;br /&gt;
# Bundle for user bob&lt;br /&gt;
# UCP Instance ID t0ltwwcw5tdbtjo2fxlzmj8p4&lt;br /&gt;
#&lt;br /&gt;
# This admin cert will also work directly against Swarm and the individual&lt;br /&gt;
# engine proxies for troubleshooting.  After sourcing this env file, use&lt;br /&gt;
# &amp;quot;docker info&amp;quot; to discover the location of Swarm managers and engines.&lt;br /&gt;
# and use the --host option to override $DOCKER_HOST&lt;br /&gt;
#&lt;br /&gt;
# Run this command from within this directory to configure your shell:&lt;br /&gt;
# eval $(&amp;lt;env.sh)&lt;br /&gt;
&lt;br /&gt;
eval $(&amp;lt;env.sh) # apply ucp-bundle&lt;br /&gt;
&lt;br /&gt;
docker images # to list UCP managed images&lt;br /&gt;
&amp;lt;/source&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
# &amp;lt;li value=&amp;quot;4&amp;quot;&amp;gt; In my lab I had to update DOCKER_HOST from public IP to private IP &amp;lt;/li&amp;gt;&lt;br /&gt;
Err: error during connect: Get https://3.16.143.49:443/v1.39/images/json: x509: certificate is valid for 127.0.0.1, 172.31.101.248, 172.17.0.1, not 3.16.143.49&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_HOST=tcp://172.31.101.248:443&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;5&amp;quot;&amp;gt; Verify if you have permissions to create a service&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name test111 httpd&lt;br /&gt;
Error response from daemon: access denied:&lt;br /&gt;
no access to Service Create, on collection swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;6&amp;quot;&amp;gt; Add Grants to the user&amp;lt;/li&amp;gt;&lt;br /&gt;
## Go to User Management &amp;gt; Grants &amp;gt; Create Grant&lt;br /&gt;
## Based on Roles, select Full Control&lt;br /&gt;
## Select Subjects, All Users, select the user&lt;br /&gt;
## Click Create&lt;br /&gt;
# Re-run the service create command, which should succeed now. The service can now also be managed within the UCP console.&lt;br /&gt;
&lt;br /&gt;
= Docker Secure Registry | image: registry =&lt;br /&gt;
Docker provides a special docker image that can be used to manage docker images both internally and externally, thus the steps below include securing the access with an SSL certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create certificate&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir ~/{auth,certs}&lt;br /&gt;
# create self-signed certificate for the Docker Repository&lt;br /&gt;
cd ~/certs&lt;br /&gt;
openssl req -newkey rsa:4096 -nodes -sha256 -keyout repo-key.pem -x509 -days 365 -out repo-cer.pem -subj /CN=myrepo.com&lt;br /&gt;
# trusted-certs docker client directory; the docker client looks for trusted certs when connecting to a remote repo&lt;br /&gt;
sudo mkdir -p /etc/docker/certs.d/myrepo.com:5000 #port 5000 is the default port&lt;br /&gt;
sudo cp repo-cer.pem /etc/docker/certs.d/myrepo.com:5000/ca.crt &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ca.crt&amp;lt;/code&amp;gt; is the default/required CA root trust certificate file name that the docker client (docker login API) uses when connecting to a remote repository. In our case we trust any cert signed by CA=ca.crt when connecting to myrepo.com:5000, as the same (self-signed) certs got installed in the &amp;lt;code&amp;gt;registry:2&amp;lt;/code&amp;gt; container via the &amp;lt;code&amp;gt;-v /certs/&amp;lt;/code&amp;gt; option.&lt;br /&gt;
&lt;br /&gt;
Optionally, for development purposes, add the domain ''myrepo.com'' to the hosts file, binding it to a local interface IP address.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i; echo &amp;quot;172.16.10.10 myrepo.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts; exit&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally add an insecure-registry entry&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
sudo vi /etc/docker/daemon.json&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;myrepo.com:5000&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pull special Docker Registry image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir -p ~/auth #authentication directory, used when deploying local repository&lt;br /&gt;
docker pull registry:2&lt;br /&gt;
docker run --entrypoint htpasswd registry:2 -Bbn reg-admin Passw0rd123 &amp;gt; ~/auth/htpasswd&lt;br /&gt;
# -Bbn        -parameters&lt;br /&gt;
# reg-admin   -user&lt;br /&gt;
# Passw0rd123 -password string for basic htpasswd authentication method, the hashed password will be displayed to STDOUT&lt;br /&gt;
&lt;br /&gt;
$ cat ~/auth/htpasswd&lt;br /&gt;
reg-admin:$2y$05$DnTWDHp7uTwaDrw4sXpUbuDDIlLwu3c8MEMsHPjK/AcUMdK/TD6fO&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Registry container&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
docker run -d -p 5000:5000 --name myrepo \&lt;br /&gt;
       -v $(pwd)/certs:/certs \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/repo-cer.pem \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_KEY=/certs/repo-key.pem \&lt;br /&gt;
       -v $(pwd)/auth:/auth \&lt;br /&gt;
       -e REGISTRY_AUTH=htpasswd \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_REALM=&amp;quot;Registry Realm&amp;quot; \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \&lt;br /&gt;
       registry:2&lt;br /&gt;
# -v                               -indicate where our certificates will be mounted within a container&lt;br /&gt;
# -e REGISTRY_HTTP_TLS_CERTIFICATE -path to cert inside the container&lt;br /&gt;
# -v $(pwd)/auth:/auth             -mounting authentication directory where a file with password is&lt;br /&gt;
# -e REGISTRY_AUTH htpasswd        -setting up to use 'htpasswd' authentication method&lt;br /&gt;
# registry:2                       -image name, positional param&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Verify&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull  alpine&lt;br /&gt;
docker tag   alpine     myrepo.com:5000/aa-alpine #create a tagged image (copy) on the local filesystem;&lt;br /&gt;
     # it must be prefixed with the private repo name, then '/' and the image name you want to upload it as&lt;br /&gt;
&lt;br /&gt;
docker logout  # if logged in to another repository&lt;br /&gt;
docker login myrepo.com:5000 #login to the repository that runs as a container; stays logged in until logout/reboot&lt;br /&gt;
docker login myrepo.com:5000 --username=reg-admin --password Passw0rd123&lt;br /&gt;
docker push  myrepo.com:5000/aa-alpine        &lt;br /&gt;
&lt;br /&gt;
docker image rm alpine  myrepo.com:5000/aa-alpine #delete image stored locally&lt;br /&gt;
docker pull             myrepo.com:5000/aa-alpine #pull image from a container repository&lt;br /&gt;
&lt;br /&gt;
# List private-repository images&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;aa-alpine&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
wget --no-check-certificate --http-user=reg-admin --http-password=password https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
cat _catalog                                                                                                                                                                       &lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;my-alpine&amp;quot;,&amp;quot;myalpine&amp;quot;,&amp;quot;new-aa-busybox&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
# List tags&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/tags/list&lt;br /&gt;
{&amp;quot;name&amp;quot;:&amp;quot;myalpine&amp;quot;,&amp;quot;tags&amp;quot;:[&amp;quot;latest&amp;quot;]}&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/manifests/latest #entire image metadata&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: there is no easy way to delete images from the &amp;lt;code&amp;gt;registry:2&amp;lt;/code&amp;gt; container.&lt;br /&gt;
&lt;br /&gt;
= Docker push =&lt;br /&gt;
;Login to a docker repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep -B1 Registry #check if you are logged in to docker.hub repository&lt;br /&gt;
WARNING: No swap limit support&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&lt;br /&gt;
docker login&lt;br /&gt;
&lt;br /&gt;
docker info | grep -B1 Registry&lt;br /&gt;
Username: pio2pio&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Tag and push an image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# docker tag local-image:tagname new-repo:tagname  #create a local copy of an image&lt;br /&gt;
# docker push new-repo:tagname                     &lt;br /&gt;
&lt;br /&gt;
docker pull busybox&lt;br /&gt;
docker tag busybox:latest pio2pio/testrepo&lt;br /&gt;
docker push pio2pio/testrepo&lt;br /&gt;
The push refers to repository [docker.io/pio2pio/testrepo]&lt;br /&gt;
683f499823be: Mounted from library/busybox &lt;br /&gt;
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 &lt;br /&gt;
size: 527&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Docker Content Trust&lt;br /&gt;
All images are implicitly trusted by your Docker daemon. But you can configure your systems to trust only image tags that have been signed, so that ONLY signed images are allowed.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_CONTENT_TRUST=1 #enable the system to sign an image during the push process&lt;br /&gt;
docker build -t myrepo.com:5000/untrusted.latest .&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest&lt;br /&gt;
...&lt;br /&gt;
No tag specified, skipping trust metadata push&lt;br /&gt;
# 2nd attempt, with a tag specified now&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&lt;br /&gt;
docker pull myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Errors explained:&lt;br /&gt;
Err: No tag specified, skipping trust metadata push&amp;lt;br /&amp;gt;&lt;br /&gt;
* Explanation: an image gets signed by its tag. Therefore, if you skip the tag, it won't get signed and the trust metadata push is skipped.&lt;br /&gt;
Err: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
* when uploading, the image gets uploaded but is not trusted because it is signed with a self-signed CA&lt;br /&gt;
* when downloading, with &amp;lt;code&amp;gt;DOCKER_CONTENT_TRUST=1&amp;lt;/code&amp;gt; enabled, the image cannot be downloaded because it is untrusted&lt;br /&gt;
&lt;br /&gt;
= Theory =&lt;br /&gt;
== What is a docker ==&lt;br /&gt;
Docker is a container runtime platform, whereas Swarm is a container orchestration platform.&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Mutually Authenticated TLS ===&lt;br /&gt;
Docker Swarm is ''secure by default'': all communication is encrypted. ''Mutually Authenticated TLS'' is the implementation that was chosen to secure that communication. Any time a swarm is initialised, a self-signed CA is generated and issues certificates to every node (manager or worker) to facilitate registration (joining as manager or worker) and later to secure communications. A transient container is brought up to generate CA certs every time a cert is needed. MTLS communication takes place between Managers and Workers.&lt;br /&gt;
&lt;br /&gt;
== [[Linux Namespaces and Control Groups]] ==&lt;br /&gt;
&lt;br /&gt;
== Difference between docker attach and docker exec ==&lt;br /&gt;
;Attach&lt;br /&gt;
The docker attach command allows you to attach to a running container using the container's ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen-sharing style, or quickly view the progress of your detached process.&lt;br /&gt;
&lt;br /&gt;
The command docker attach is for attaching to the existing process. So when you exit, you exit the existing process.&lt;br /&gt;
&lt;br /&gt;
If we use docker attach, we can use only one instance of shell. So if we want open new terminal with new instance of container's shell, we just need run docker exec&lt;br /&gt;
&lt;br /&gt;
If the docker container was started using /bin/bash command, you can access it using attach, if not then you need to execute the command to create a bash instance inside the container using exec. Attach isn't for running an extra thing in a container, it's for attaching to the running process.&lt;br /&gt;
&lt;br /&gt;
To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container instead. You can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.&lt;br /&gt;
&lt;br /&gt;
;exec&lt;br /&gt;
&lt;br /&gt;
&amp;quot;docker exec&amp;quot; is specifically for running new things in an already started container, be it a shell or some other process. The docker exec command runs a new command in a running container.&lt;br /&gt;
&lt;br /&gt;
The command started using docker exec only runs while the container's primary process (PID 1) is running, and it is not restarted if the container is restarted.&lt;br /&gt;
&lt;br /&gt;
The exec command works only on an already running container. If the container is currently stopped, you need to start it first. You can then run any command in the running container just by knowing its ID (or name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker exec &amp;lt;container_id_or_name&amp;gt; echo &amp;quot;Hello from container!&amp;quot;&lt;br /&gt;
docker run -it -d shykes/pybuilder /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most important thing here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will run in the background and the container will not stop immediately.&lt;br /&gt;
&lt;br /&gt;
= Dockerfile - python =&lt;br /&gt;
* [https://luis-sena.medium.com/creating-the-perfect-python-dockerfile-51bdec41f1c8 perfect python dockerfile] Medium&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://docs.docker.com/v1.8/installation/ubuntulinux/ Ubuntu installation] official website&lt;br /&gt;
*[https://docs.docker.com/engine/admin/systemd/ PROXY settings for systemd]&lt;br /&gt;
*[http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/ Docker RUN vs CMD vs ENTRYPOINT]&lt;br /&gt;
*[https://vsupalov.com/docker-arg-vs-env/ docker ARG vs ENV]&lt;br /&gt;
*[https://www.fromlatest.io/#/ Docker online linter]&lt;br /&gt;
*[https://hub.docker.com/r/portainer/portainer/ portainer] Monitor your containers via Web GUI&lt;br /&gt;
*[https://treescale.com/ treescale.com] Free private Docker registry&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7055</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7055"/>
		<updated>2025-09-01T06:02:26Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install terraform */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about utilising a tool from HashiCorp called Terraform to build infrastructure as code - IaC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| most of the paragraphs have examples of pre-0.12 Terraform syntax that uses HCLv1. HCLv2 was introduced with v0.12+ and contains significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv --depth=1&lt;br /&gt;
echo &amp;quot;[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/&amp;quot; &amp;gt;&amp;gt; ~/.bashrc # or ~/.bash_profile&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install v1.12.1&lt;br /&gt;
tfenv use v1.12.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode with 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with enabled Language Server; open the command pallet with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs, it looks for .tf files where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory Terraform runs from. Therefore, if you wish to reference a common file, a symbolic link to it must be created inside the directory that holds your .tf files.&lt;br /&gt;
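&lt;br /&gt;
The symbolic-link approach can be sketched as follows (all paths are illustrative):&lt;br /&gt;

```shell
cd "$(mktemp -d)"                          # scratch area for the demo
mkdir -p common dev
echo 'variable "region" {}' > common/variables.tf
# Terraform only reads *.tf files in the directory it runs from,
# so link the shared file into the environment directory:
ln -sf ../common/variables.tf dev/variables.tf
readlink dev/variables.tf
```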
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x major changes and features have been introduced, including the split of provider binaries: each provider is now a separate binary. See the example below for the Azure provider and other providers developed by HashiCorp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of sourcing credentials from Vault&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
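&lt;br /&gt;
&amp;lt;code&amp;gt;vault read&amp;lt;/code&amp;gt; also accepts a &amp;lt;code&amp;gt;-field&amp;lt;/code&amp;gt; flag that prints a single raw value, which avoids jq when exporting one variable at a time. A reachable Vault server is assumed, so this sketch only assembles the export commands rather than running them:&lt;br /&gt;

```shell
# Build the export commands; run them only against a real Vault server.
sub_read='vault read -field=subscription_id secret/azure/subscription'
ten_read='vault read -field=tenant_id secret/azure/subscription'
echo "export ARM_SUBSCRIPTION_ID=\$($sub_read)"
echo "export ARM_TENANT_ID=\$($ten_read)"
```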
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many ways to combine backend and AWS credential configuration. The important thing to understand is that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it only uses the backend's own settings. Options include:&lt;br /&gt;
* exporting credentials, which allows working with assumed roles that differ between the backend and provider blocks&lt;br /&gt;
* specifying a different &amp;lt;code&amp;gt;profile =&amp;lt;/code&amp;gt; in each block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## a profile allows assuming roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume roles in other accounts, e.g.:&lt;br /&gt;
#  * arn:aws:iam::111111111111:role/terraform-s3state            - save state in the s3 bucket&lt;br /&gt;
#  * arn:aws:iam::222222222222:role/terraform-crossaccount-admin - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# unset all of them if needed&lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
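&lt;br /&gt;
The &amp;lt;code&amp;gt;${!AWS@}&amp;lt;/code&amp;gt; expansion above is bash prefix matching: it expands to the names of all shell variables starting with &amp;lt;code&amp;gt;AWS&amp;lt;/code&amp;gt;, so one &amp;lt;code&amp;gt;unset&amp;lt;/code&amp;gt; clears every credential at once:&lt;br /&gt;

```shell
# bash-specific: set a couple of dummy AWS_* variables, then clear them in one go
export AWS_ACCESS_KEY_ID=demo
export AWS_DEFAULT_REGION=us-east-1
echo "before: ${!AWS@}"        # prints the variable NAMES matching the AWS prefix
unset ${!AWS@}
echo "after: ${AWS_ACCESS_KEY_ID:-cleared} ${AWS_DEFAULT_REGION:-cleared}"
```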
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
# profile = &amp;quot;dev-us&amp;quot; # we use 'role_arn' below, but an aws profile could be specified instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    bucket  = &amp;quot;tfstate-${var.project}-${var.account-id}&amp;quot; # must exist beforehand&lt;br /&gt;
    key     = &amp;quot;terraform/aws/${var.project}/tfstate&amp;quot;     # this could be much simpler when working with terraform workspaces&lt;br /&gt;
    region  = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in an infra account that the s3 state exists&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
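&lt;br /&gt;
One caveat with the block above: backend configuration cannot interpolate &amp;lt;code&amp;gt;${var.*}&amp;lt;/code&amp;gt; values, so in practice the concrete settings are supplied at init time via &amp;lt;code&amp;gt;-backend-config&amp;lt;/code&amp;gt; arguments ("partial configuration"). A sketch with illustrative names; since it needs a real bucket, the command is only printed:&lt;br /&gt;

```shell
# Partial backend configuration: the backend "s3" {} block stays minimal
# and the concrete values arrive on the command line at init time.
backend_args=(
  "-backend-config=bucket=tfstate-myproject-111111111111"
  "-backend-config=key=terraform/aws/myproject/tfstate"
  "-backend-config=region=eu-west-1"
  "-backend-config=role_arn=arn:aws:iam::111111111111:role/terraform-s3state"
)
echo terraform init "${backend_args[@]}"
```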
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use profiles, but instead we use the 'assume_role' option. On your&lt;br /&gt;
## laptop this would be your own credentials profile, e.g. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role {&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot; # assume role in the target account&lt;br /&gt;
#   role_arn  = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # variables can be used instead&lt;br /&gt;
  }&lt;br /&gt;
  region  = &amp;quot;${var.aws_region}&amp;quot;&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
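&lt;br /&gt;
The same pattern extends to all workspaces. The loop below only assembles and prints the commands it would run, pairing each workspace with its matching tfvars file:&lt;br /&gt;

```shell
# For each workspace: select it, then apply with the matching var file.
cmds=""
for ws in dev prod; do
  cmds="$cmds terraform workspace select $ws; terraform apply -var-file=$ws.tfvars;"
done
echo "$cmds"
```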
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
The plan output uses the following markers:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
The apply stage, when run for the first time, creates terraform.tfstate after all changes are done. This file should not be modified manually. It records what already exists in the cloud, so the next time the apply stage runs it compares against this file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated, terraform.tfstate looks like this:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= AWS credentials profiles and variable files=&lt;br /&gt;
Instead of referencing secret access keys directly within the .tf file, we can use the AWS profile file. Terraform looks this file up using the profile variable we specify in the variables.tf file. Note: there are '''no double quotes''' around values in the credentials file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can then remove the secret access keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version    = &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and create a variable file to reference it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} # a variable without a default value will prompt for input; here it should be 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #this way value can be set&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
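&lt;br /&gt;
Besides &amp;lt;code&amp;gt;-var&amp;lt;/code&amp;gt; flags and interactive prompts, Terraform also reads any environment variable named &amp;lt;code&amp;gt;TF_VAR_name&amp;lt;/code&amp;gt;, which is handy in CI pipelines:&lt;br /&gt;

```shell
# Equivalent to: terraform plan -var 'profile=terraform-profile1'
export TF_VAR_profile=terraform-profile1
echo "$TF_VAR_profile"
```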
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt;, also known as a variable file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the ssh private key file; ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;...omitted...&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run destroy command to delete all resources that were created&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It is not possible to taint a whole module&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint 'module.MODULE_NAME.aws_instance.main[0]'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
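&lt;br /&gt;
For example, after tainting, the next apply replaces only the marked resource (the resource address below is illustrative):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint aws_instance.webserver   # mark the resource for recreation&lt;br /&gt;
terraform plan                           # shows -/+ aws_instance.webserver (tainted)&lt;br /&gt;
terraform apply                          # destroys and recreates just that resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;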
&lt;br /&gt;
= Use Ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, because it is not obvious how to store the state of a local-exec Ansible run. It could be set to always run, since Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
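&lt;br /&gt;
A minimal sketch of this pattern using &amp;lt;code&amp;gt;local-exec&amp;lt;/code&amp;gt; (the playbook path and the always-run trigger are assumptions, not taken from the linked example):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;ansible&amp;quot; {&lt;br /&gt;
  # re-run on every apply; safe because the playbook is idempotent&lt;br /&gt;
  triggers = {&lt;br /&gt;
    always_run = timestamp()&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;ansible-playbook -i '${aws_instance.webserver.public_ip},' playbook.yml&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;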
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often it is necessary to inspect a data structure that is the output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, a &amp;lt;tt&amp;gt;data&amp;lt;/tt&amp;gt; source, or simply a template whose computed value is not always displayed on your screen. You can use the following techniques to inspect your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - empty virtual container that can run any arbitrary commands&lt;br /&gt;
* '''Problem statement:''' Display computed Terrafom &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' Use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to render the template; the rendered template will be shown in a &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt;. If the template is a JSON policy, an invalid policy simply fails and you cannot otherwise see why. The plan will show the object being constructed; after running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; it can also be saved into the state file as an output variable. The object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and output&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable Terraform logging to a file, convert the log file to PDF, and use sheri.ai to analyse it for answers.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt;==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. This is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. Terraform console will read configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
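&lt;br /&gt;
The console is also useful for testing expressions before putting them into configuration, e.g. (values are illustrative):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform console&lt;br /&gt;
&amp;gt; cidrsubnet(&amp;quot;10.123.0.0/16&amp;quot;, 8, 1)&lt;br /&gt;
10.123.1.0/24&lt;br /&gt;
&amp;gt; { for s in [&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;] : s =&amp;gt; upper(s) }&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;a&amp;quot; = &amp;quot;A&amp;quot;&lt;br /&gt;
  &amp;quot;b&amp;quot; = &amp;quot;B&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;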
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
In Linux, add the line below right after the shebang&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now you can go and open System Logs in AWS Console to view user-data script logs.&lt;br /&gt;
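&lt;br /&gt;
In Terraform this can be embedded in &amp;lt;code&amp;gt;user_data&amp;lt;/code&amp;gt; with a heredoc, for example (AMI and instance type reuse the earlier example; the echo line is a placeholder):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  user_data = &amp;lt;&amp;lt;-EOF&lt;br /&gt;
    #!/bin/bash&lt;br /&gt;
    # copy all script output to a log file, syslog and the EC2 console&lt;br /&gt;
    exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
    echo &amp;quot;bootstrapping...&amp;quot;&lt;br /&gt;
  EOF&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;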
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visual graph file. You may need to install &amp;lt;code&amp;gt;graphviz&amp;lt;/code&amp;gt; if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; flag causes Terraform to mark the arrows that are related to the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other colour you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your configuration to avoid cross-module references. Instead of directly accessing an output of one module from inside another, set it up as an input parameter and wire everything together at the top level.&lt;br /&gt;
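&lt;br /&gt;
A sketch of that top-level wiring (module and variable names are illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# modules/network/outputs.tf&lt;br /&gt;
output &amp;quot;vpc_id&amp;quot; {&lt;br /&gt;
  value = aws_vpc.this.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# main.tf - the root module wires the two modules together&lt;br /&gt;
module &amp;quot;network&amp;quot; {&lt;br /&gt;
  source = &amp;quot;./modules/network&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module &amp;quot;app&amp;quot; {&lt;br /&gt;
  source = &amp;quot;./modules/app&amp;quot;&lt;br /&gt;
  vpc_id = module.network.vpc_id  # input variable, no cross-module reference&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;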
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
With a cyclic dependency issue, study the graph, then decide which resource to remove from the state so that it can be generated later. If the graph is unclear or too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create s3 bucket with unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure terraform:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
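&lt;br /&gt;
Note that &amp;lt;code&amp;gt;terraform remote config&amp;lt;/code&amp;gt; belongs to very old releases; since Terraform 0.9 the backend is declared in configuration and initialised with &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;. A minimal sketch (bucket name and region are placeholders):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    bucket  = &amp;quot;YOUR_BUCKET_NAME&amp;quot;&lt;br /&gt;
    key     = &amp;quot;terraform.tfstate&amp;quot;&lt;br /&gt;
    region  = &amp;quot;YOUR_BUCKET_REGION&amp;quot;&lt;br /&gt;
    encrypt = true&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# then run: terraform init&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;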
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to backend configuration. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with the primary key &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. When a lock is taken, an entry similar to the one below gets created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform's automatic state upgrading process, so it is best to do this with the same Terraform CLI version that you have most recently been using against this workspace, so that the state will not be implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via cli&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definitions files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
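&lt;br /&gt;
For example, a &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; next to the configuration is picked up automatically (variable names reuse the CLI examples above):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# terraform.tfvars&lt;br /&gt;
image_id      = &amp;quot;ami-abc123&amp;quot;&lt;br /&gt;
image_id_list = [&amp;quot;ami-abc123&amp;quot;, &amp;quot;ami-def456&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# terraform apply   # no -var or -var-file flags needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;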
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute usually has user-defined keys, as in the tags example&lt;br /&gt;
* a nested block always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ for_each, and the new formatting that no longer requires the &amp;quot;${var.vpc_cidr}&amp;quot; interpolation syntax; plain var.vpc_cidr works&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags =  {           #&amp;lt;-note of '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = &amp;quot;list&amp;quot;&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;terraform&amp;quot;&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    # for_each needs a map; project the list of objects into one keyed by name&lt;br /&gt;
    for_each = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
    triggers = {&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = tostring(each.value)&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, for example &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules: an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluates to an object value whose attributes correspond to the module's named outputs.&lt;br /&gt;
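&lt;br /&gt;
A minimal sketch of this pattern (module and file names are hypothetical):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# modules/network/outputs.tf - return the whole resource object&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# root module - read any attribute of the returned object&lt;br /&gt;
module &amp;quot;network&amp;quot; {&lt;br /&gt;
  source = &amp;quot;./modules/network&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;network_cidr&amp;quot; {&lt;br /&gt;
  value = module.network.vpc.cidr_block&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;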
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
For expressions are mostly used to transform existing lists and maps rather than to generate new ones from scratch. For example, the expression below converts every element of a list of strings to upper case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The for expression iterates over each element of the list and returns &amp;lt;code&amp;gt;upper(i)&amp;lt;/code&amp;gt; for each element, producing a new list. The same expression syntax can also generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use ''if'' as a filter in a ''for'' expression&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, each element that passes the filter appears in the result as its uppercase version.&lt;br /&gt;
&lt;br /&gt;
Note that ''if'' in a for expression acts purely as a filter; it cannot be combined with logical operations the way the ternary operator can. The expression above returns a list of all non-empty elements in their uppercase form.&lt;br /&gt;
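&lt;br /&gt;
For comparison, a value-level conditional has to use the ternary operator inside the result expression, while ''if'' only filters which elements appear; a sketch assuming a &amp;lt;code&amp;gt;var.list&amp;lt;/code&amp;gt; of strings:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  # filter: drop empty strings entirely&lt;br /&gt;
  non_empty_upper = [for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # ternary: keep every element, substituting a placeholder for empty ones&lt;br /&gt;
  upper_or_dash = [for i in var.list : i != &amp;quot;&amp;quot; ? upper(i) : &amp;quot;-&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;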
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing items whose string value does not match a regex expression&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of the array, a map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for k, v' syntax builds a new list 'validation_domains' by iterating over the array of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options', keeping only the entries whose domain name&lt;br /&gt;
# (with any leading '*.' stripped) equals &amp;quot;dev.example.com&amp;quot;. tomap(v) is required to persist the&lt;br /&gt;
# type across the for expression.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options :&lt;br /&gt;
    tomap(v) if replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;) == &amp;quot;dev.example.com&amp;quot;&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
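# The 'local.distinct_domains' list used below is not defined earlier in this snippet; it can be&lt;br /&gt;
# built, for example, with distinct() over the stripped validation domain names (hypothetical):&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains = distinct([&lt;br /&gt;
    for v in aws_acm_certificate.main.domain_validation_options : replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
  ])&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;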
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for domain' expression builds a new list containing only domains that match the regexall pattern.&lt;br /&gt;
# regexall returns all matches, so checking length(...) &amp;gt; 0 yields true or false, and the&lt;br /&gt;
# 'for domain : ... if' statement conditionally adds the item to the new list.&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above, but iterating over an array of maps (k, v - key, value pairs).&lt;br /&gt;
# Note the result needs its own name; a local cannot reference itself.&lt;br /&gt;
  validation_domains_filtered = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the array of maps 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
# to build a list of FQDNs stored under the '.resource_record_name' key.&lt;br /&gt;
# On each iteration 'fqdn' is set to one element of the array, and the expression after ':'&lt;br /&gt;
# yields the value fqdn.resource_record_name.&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== function: replace, regex ==&lt;br /&gt;
Snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match anyline that starts with '#' or any 'whitespace(s) + #'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the search pattern:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
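&lt;br /&gt;
A small illustration of the same two-step replace on an inline string (the input is hypothetical):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  raw = &amp;quot;# comment\nkey: value\n\nother: value\n&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # first remove comment lines, then remove the empty lines left behind&lt;br /&gt;
  cleaned = replace(&lt;br /&gt;
    replace(local.raw, &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot;, &amp;quot;&amp;quot;),&lt;br /&gt;
    &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;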
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file you pass values for the variables defined in the module, so it creates resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that a &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory is created that contains symlinks to the modules.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables are a way to return important data when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. Once the .tfstate file has been populated, these values can also be recalled with the &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
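&lt;br /&gt;
The template above can be rendered with the 0.12+ &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function; a sketch with a hypothetical template path:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;rendered&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;${path.module}/templates/names.tpl&amp;quot;, {&lt;br /&gt;
    names = [&amp;quot;Adam&amp;quot;, &amp;quot;Mary&amp;quot;, &amp;quot;Jane&amp;quot;]&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;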
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview the correctness of its interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #this causes to always run this resource&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of creating multiple instances, each with its own rendered user-data template&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* resource &amp;lt;code&amp;gt;template_file is deprecated&amp;lt;/code&amp;gt; in favour of &amp;lt;code&amp;gt;data template_file&amp;lt;/code&amp;gt;&lt;br /&gt;
* Terraform 0.12+ offers the new &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function without the need for a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
== template json files ==&lt;br /&gt;
For working with JSON structures it's [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function to simplify escaping, delimiters and get validated json in return.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
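&lt;br /&gt;
Alternatively, the whole policy can be produced directly with &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt;, avoiding the template file entirely; a sketch using the same &amp;lt;code&amp;gt;var.s3_buckets&amp;lt;/code&amp;gt; variable:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
  name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
  policy = jsonencode({&lt;br /&gt;
    Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
    Statement = [{&lt;br /&gt;
      Effect   = &amp;quot;Allow&amp;quot;&lt;br /&gt;
      Action   = [&amp;quot;s3:ListBucket&amp;quot;, &amp;quot;s3:GetBucketLocation&amp;quot;]&lt;br /&gt;
      Resource = [for b in var.s3_buckets : &amp;quot;arn:aws:s3:::${b}&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;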
&lt;br /&gt;
&lt;br /&gt;
Explanation of the interpolation:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
    indicates that the result to be an array[]               closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The null_resource allows you to create a Terraform-managed resource, saved in the state file like any other, whose only job is to run provisioners such as local-exec or remote-exec, allowing arbitrary code execution. This should only be used when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up a dependency. So it depends on completion of another resource &lt;br /&gt;
  #and it won't run if the resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers save computed strings in tfstate file, if value changes on the next run it triggers a resource to be created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destroy&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: By default the local-exec provisioner runs the heredoc script with &amp;lt;code&amp;gt;/bin/sh -c&amp;lt;/code&amp;gt;, which does not strip meta-characters such as double quotes, so JSON output from the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; can make the command fail. Therefore the output has been forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
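&lt;br /&gt;
If different quoting behaviour is required, the shell can be overridden with the provisioner's &amp;lt;code&amp;gt;interpreter&amp;lt;/code&amp;gt; argument, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
  interpreter = [&amp;quot;/bin/bash&amp;quot;, &amp;quot;-c&amp;quot;]&lt;br /&gt;
  command     = &amp;quot;echo \&amp;quot;handled by bash\&amp;quot;&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;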
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
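&lt;br /&gt;
The version constraints shown above come from &amp;lt;code&amp;gt;version&amp;lt;/code&amp;gt; arguments in the configuration (Terraform 0.12 style), e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 2.44&amp;quot;&lt;br /&gt;
  region  = var.region&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;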
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
Create a &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in the $HOME directory and specify the cache directory, or set the environment variable. Then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; to save providers into the shared (cache) directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
# Option 1.&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Option 2.&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Create the cache directory&lt;br /&gt;
mkdir $HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Delete per-root-module providers in the .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache directory is shared by all Terraform projects, provider versioning still works and the normal version constraints apply. If you want to verify which version is locked for your current project, you can compare the SHA256 hashes saved in the &amp;lt;tt&amp;gt;.terraform&amp;lt;/tt&amp;gt; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI Engine name] 'aurora-mysql' refers to engine version 5.7.x; for version 5.6.10a the engine name is 'aurora'.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
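To point Terraform at localstack, override the provider endpoints; a sketch (4566 is the localstack edge port in recent releases):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region                      = &amp;quot;us-east-1&amp;quot;&lt;br /&gt;
  access_key                  = &amp;quot;test&amp;quot;&lt;br /&gt;
  secret_key                  = &amp;quot;test&amp;quot;&lt;br /&gt;
  skip_credentials_validation = true&lt;br /&gt;
  skip_requesting_account_id  = true&lt;br /&gt;
&lt;br /&gt;
  endpoints {&lt;br /&gt;
    s3       = &amp;quot;http://localhost:4566&amp;quot;&lt;br /&gt;
    dynamodb = &amp;quot;http://localhost:4566&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;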
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
# https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## tflint must be executed from the Terraform root path, hence `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs markdown table . &amp;gt; README.md   # newer releases require a formatter argument, e.g. 'markdown table'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
inframap usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# The most important subcommands are:&lt;br /&gt;
# * generate: generates the graph from STDIN or file; STDIN can be .tf files/modules or a .tfstate&lt;br /&gt;
# * prune: removes all unnecessary information from the state or HCL (not supported yet) so it can be shared without any security concerns&lt;br /&gt;
&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
install -D pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing # -D creates the leading directories&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure-as-code coverage and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform. Cloud providers: AWS, GitHub (Azure and GCP on the roadmap for 2021). Spot discrepancies as they happen: driftctl is a free and open-source CLI that warns of infrastructure drift and fills in a missing piece in your DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
sudo install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown on live infra&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes moved blocks for you so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
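Under the hood tfautomv emits standard Terraform &amp;lt;code&amp;gt;moved&amp;lt;/code&amp;gt; blocks (Terraform &amp;gt;= 1.1). A hand-written equivalent, with illustrative resource names:&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
# Tells Terraform the resource was renamed, so it is moved in&lt;br /&gt;
# state instead of being destroyed and recreated&lt;br /&gt;
moved {&lt;br /&gt;
  from = aws_instance.old_name&lt;br /&gt;
  to   = aws_instance.new_name&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;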
&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
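The page above does the visual splitting; the arithmetic behind it can be sanity-checked in plain shell (no external tools assumed):&lt;br /&gt;

```shell
# Number of /24 subnets that fit in a /22: 2^(24-22)
echo $(( 2 ** (24 - 22) ))      # → 4

# Usable hosts in a /24: 2^(32-24) minus network and broadcast addresses
echo $(( 2 ** (32 - 24) - 2 ))  # → 254
```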
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc..&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7054</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7054"/>
		<updated>2025-09-01T05:59:25Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* WIP DevOps workstation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each project has its own Vagrantfile: a text file that Vagrant reads to set up the environment. It describes which OS to use, how much RAM to allocate, what software to install, and so on. You can keep this file under version control.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s GET https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}; &lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
sudo install vagrant /usr/local/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrant''' file is written in Ruby.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; (image) management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs in VirtualBox, VMware or Hyper-V provider format, taken from a given repository.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      # user: hashicorp, box image: precise64; this is a preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Boxes can be specified via URLs or local file paths; note that older VirtualBox versions can only nest 32-bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; (image) migration =&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path of box images; it can be changed via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
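Since a &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is just a (usually gzip-compressed) tar archive, the layout above can be inspected with standard tar tools. A self-contained sketch that fakes a minimal box out of empty placeholder files, purely to show the listing:&lt;br /&gt;

```shell
WORK=$(mktemp -d) && cd "$WORK"

# Fake a minimal box layout (empty placeholder files)
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -czf demo.box Vagrantfile box-disk1.vmdk box.ovf metadata.json

# List contents without extracting: -t list, -z gzip, -f file
tar -tzf demo.box
```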
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the software changes you have made; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Copy the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file and restore&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore. Create/re-use Vagrantfile using box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot; # the box name as added, without the .box extension&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    # initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below, an example of nested virtualisation: a 64-bit VM (host) running a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between Vagrant VM and an hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. It can also be mounted manually from within the VM, as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
E.g. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It stops the guest machine, powers it down, and removes all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Managing snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrant&lt;br /&gt;
 /home/peter/projects/Vagrant&lt;br /&gt;
 /home/peter/Vagrant&lt;br /&gt;
 /home/Vagrant&lt;br /&gt;
 /Vagrant&lt;br /&gt;
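The climb can be sketched as a small shell function (a sketch of the lookup behaviour, not Vagrant's actual implementation; the /tmp paths are made up for the demo):

```shell
# Climb from a starting directory towards / until a Vagrantfile is found,
# mimicking Vagrant's project-file lookup.
find_vagrantfile() {
  dir=$(cd "$1" && pwd) || return 1
  while :; do
    if [ -f "$dir/Vagrantfile" ]; then
      echo "$dir/Vagrantfile"
      return 0
    fi
    [ "$dir" = "/" ] && return 1   # reached the root without finding one
    dir=$(dirname "$dir")
  done
}

# A Vagrantfile two levels above the current directory is still found.
mkdir -p /tmp/vfdemo/projects/la
touch /tmp/vfdemo/Vagrantfile
find_vagrantfile /tmp/vfdemo/projects/la
```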
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 auto_config: false     #optional to disable auto-configure&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks are accessible from outside the host machine, including from the Internet; they are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during the ''vagrant up'' process.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile, e.g. ~/git/vagrant/Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the GuestOS. The files sync in both directions (it is a mount on the GuestOS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path existing on the '''host machine'''; if relative, it is relative to the project root folder (where the Vagrantfile exists). The 2nd argument is a full path to the mounted directory on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time, it's applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. In general symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                       path on the host   mount on the guestOS&lt;br /&gt;
   #                              \               /&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;, &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     # ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
#    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
     # symlinks should be active in the root of the VM by default&lt;br /&gt;
#    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware and AWS, without changing the Vagrantfile. It's enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*run the terminal (WSL or PowerShell) with elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt; set&lt;br /&gt;
*use native Bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Microsoft filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
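The permissions point matters because SSH refuses private keys that are group- or world-readable, and on the Windows (DrvFs) side of WSL1 you cannot restrict them. A quick illustration on the Linux side (the /tmp path is just an example):

```shell
# Create a key file and restrict it to the owner, as SSH requires.
mkdir -p /tmp/demo_keys
touch /tmp/demo_keys/private_key
chmod 600 /tmp/demo_keys/private_key
stat -c '%a' /tmp/demo_keys/private_key   # -> 600
```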
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name: vagrant-external, press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline commands defined directly in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
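The if-block above is what makes the script idempotent: it replaces /var/www with a symlink only on the first run. The same pattern, sketched against throwaway /tmp paths (names are made up for the demo):

```shell
shared=/tmp/demo_vagrant   # stands in for /vagrant
docroot=/tmp/demo_www      # stands in for /var/www
mkdir -p "$shared"
rm -rf "$docroot" && mkdir -p "$docroot"   # pretend the package created a real dir

# Same logic as bootstrap.sh: only replace the dir if it is not already a link.
if ! [ -L "$docroot" ]; then
  rm -rf "$docroot"
  ln -sf "$shared" "$docroot"
fi
readlink "$docroot"   # -> /tmp/demo_vagrant
```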
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision :shell, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script, privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
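Worth noting in the heredoc above: the backticks around date +%s are escaped, so the command substitution runs each time cron executes newfile, not when the provisioner writes it. The same generation step can be sketched with /tmp paths instead of /home/vagrant (the paths are made up for the demo):

```shell
dir=/tmp/demo_cron
rm -rf "$dir" && mkdir -p "$dir"

# The escaped backticks end up literal in newfile, so `date +%s` is evaluated
# only when newfile itself runs.
echo " touch $dir/test_\`date +%s\`.txt " > "$dir/newfile"
chmod +x "$dir/newfile"

sh "$dir/newfile"        # each run creates a freshly timestamped file
ls "$dir"/test_*.txt
```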
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, e.g. for a recipe named vagrant_la:&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here in particular the host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrant file&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More information can be found at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out what version you are running; execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the Guest Additions; you can explore the available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount the ISO or extract its contents, then run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
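The text-processing stages of the pipeline can be tried without VirtualBox on canned input (the two sample lines below are an assumption, mimicking the usual vboxmanage list vms output format):

```shell
sample='"web1_vagrant" {11111111-2222-3333-4444-555555555555}
"web2_vagrant" {66666666-7777-8888-9999-000000000000}'

# cut keeps the first space-separated field, sed strips the double quotes.
printf '%s\n' "$sample" | cut -d ' ' -f 1 | sed 's/"//g'
# web1_vagrant
# web2_vagrant

# tr --delete '\n' joins lines so the next output lands on the same line.
printf 'line1\nline2\n' | tr --delete '\n'   # -> line1line2
```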
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, as dashes are not valid in Ruby identifiers, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*use underscores in variable names instead, '''do''': &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer, and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
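The (1..2).each loop derives each web node's hostname, private IP, and forwarded host port from its index. The same numbering scheme, sketched in plain shell (values taken from the Vagrantfile above):

```shell
for i in 1 2; do
  echo "web$i 10.0.15.2$i 808$i"   # hostname, private IP, forwarded port
done
# web1 10.0.15.21 8081
# web2 10.0.15.22 8082
```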
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
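The heredoc append at the end of the script can be exercised safely against a temp file instead of /etc/hosts (an assumption for the demo; the node names and IPs match the Vagrantfile):

```shell
hosts=/tmp/demo_hosts                    # stands in for /etc/hosts
printf '127.0.0.1 localhost\n' > "$hosts"

# Append the environment's nodes, exactly like bootstrap-mgmt.sh does.
cat >> "$hosts" <<EOL
# vagrant environment nodes
10.0.15.10  mgmt
10.0.15.11  lb
10.0.15.21  web1
10.0.15.22  web2
EOL

grep -c '^10\.0\.15\.' "$hosts"   # -> 4
```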
&lt;br /&gt;
&lt;br /&gt;
Git Bash path to VBoxManage - &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the bootstrap script for a proxy or no-proxy specific system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use curl to verify which backend server responds&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
Multi-machine setup requires 3 ingredients:&lt;br /&gt;
* a different hostname on each machine&lt;br /&gt;
* a way of getting the IP address for a hostname (e.g. mDNS)&lt;br /&gt;
* a private network connecting the VMs&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of getting the IP address for a hostname. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server. Resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install the &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; daemon on all machines to facilitate service discovery on a local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This section describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
The following does not work:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop; a restart may be required for changes such as the taskbar shortcuts to apply.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally maps to a bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root; you want to remain the vagrant user. To do this you need to permit any user to start the GUI:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config  # set: allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can run the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows =&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*java Oracle&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users, ubuntu &amp;amp; vagrant, exist. Only vagrant has the insecure public key installed out of the box. I am copying the vagrant user's public key into the ubuntu user's authorized_keys&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
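The key-copy workaround above can be sketched as a small shell snippet. This is only a sketch: it is demonstrated against throw-away temporary directories so it is safe to run anywhere; on a real box the two home directories would be /home/vagrant and /home/ubuntu, and you would additionally need chown.

```shell
# Sketch: copy the vagrant user's authorized key to the ubuntu user.
# Temporary directories stand in for the real home directories (assumption:
# standard Ubuntu cloud-image layout /home/vagrant and /home/ubuntu).
VAGRANT_HOME=$(mktemp -d)   # stand-in for /home/vagrant
UBUNTU_HOME=$(mktemp -d)    # stand-in for /home/ubuntu

mkdir -p "$VAGRANT_HOME/.ssh" "$UBUNTU_HOME/.ssh"
echo "ssh-rsa AAAA...insecure-key... vagrant" > "$VAGRANT_HOME/.ssh/authorized_keys"

# The actual copy: append rather than overwrite, then fix permissions
cat "$VAGRANT_HOME/.ssh/authorized_keys" >> "$UBUNTU_HOME/.ssh/authorized_keys"
chmod 700 "$UBUNTU_HOME/.ssh"
chmod 600 "$UBUNTU_HOME/.ssh/authorized_keys"
# on a real box additionally: sudo chown -R ubuntu:ubuntu /home/ubuntu/.ssh

cat "$UBUNTU_HOME/.ssh/authorized_keys"
```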
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7053</id>
		<title>Linux shell/Productivity tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Linux_shell/Productivity_tools&amp;diff=7053"/>
		<updated>2025-08-29T06:10:32Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Autojump =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# https://github.com/wting/autojump#manual&lt;br /&gt;
sudo apt-get install autojump&lt;br /&gt;
cat /usr/share/doc/autojump/README.Debian&lt;br /&gt;
&lt;br /&gt;
Autojump for Debian&lt;br /&gt;
-------------------&lt;br /&gt;
&lt;br /&gt;
To use autojump, you need to configure you shell to source&lt;br /&gt;
/usr/share/autojump/autojump.sh on startup.&lt;br /&gt;
&lt;br /&gt;
If you use Bash, add the following line to your ~/.bashrc (for non-login&lt;br /&gt;
interactive shells) and your ~/.bash_profile (for login shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
If you use Zsh, add the following line to your ~/.zshrc (for all interactive shells):&lt;br /&gt;
. /usr/share/autojump/autojump.sh&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
j -s # display statistics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
j foo      # Jump To A Directory That Contains foo. It takes multiple arguments to do fuzzy search&lt;br /&gt;
jc bar     # jump to a child directory (sub-directory of current directory) rather than typing out the full name&lt;br /&gt;
jo music   # Open File Manager To Directories (instead of jumping)&lt;br /&gt;
jco images # Opening a file manager to a child directory is also supported&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= direnv =&lt;br /&gt;
TODO:&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/sst/opencode Opencode] =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Install opencode&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/sst/opencode/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/sst/opencode/releases/download/${VERSION}/opencode-linux-x64.zip -o $TEMPDIR/opencode-linux-x64.zip&lt;br /&gt;
unzip $TEMPDIR/opencode-linux-x64.zip -d $TEMPDIR&lt;br /&gt;
sudo install $TEMPDIR/opencode /usr/local/bin/opencode&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
opencode version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=DNS&amp;diff=7052</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=DNS&amp;diff=7052"/>
		<updated>2025-08-26T21:09:58Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* AWS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is a source of general information about the Domain Name System (DNS).&lt;br /&gt;
&lt;br /&gt;
The DNS server stores different types of resource records used to resolve names, records like:&lt;br /&gt;
*'''A''' - Address record - returns a 32-bit IPv4 address, most commonly used to map hostnames to an IP address of the host&lt;br /&gt;
*'''NS''' - Name server record - an authoritative name server, delegates a DNS zone to use the given authoritative name servers&lt;br /&gt;
*'''CNAME''' - Canonical name record - the canonical name (or Fully Qualified Domain Name) for an alias; an alias of one name to another: the DNS lookup will continue by retrying the lookup with the new name. Used when multiple services run on a single network address, but each service has its own entry in DNS&lt;br /&gt;
*'''MX''' - mail exchange record; maps a domain name to a list of mail exchange servers (MTA) for that domain&lt;br /&gt;
*'''SRV''' - Service locator record - a multi-line record where each line has the form (e.g. in AWS) &amp;lt;code&amp;gt;[priority] [weight] [port] [server host name]&amp;lt;/code&amp;gt;; the record name must start with &amp;lt;code&amp;gt;_&amp;lt;/code&amp;gt;, and a new line is the delimiter between entries&lt;br /&gt;
*'''SOA''' - Start of [a zone of] authority record - Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.&lt;br /&gt;
*'''PTR''' - Pointer record - pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;ipconfig /displaydns&amp;lt;/code&amp;gt; command displays all of the cached DNS entries on a Windows computer system.&lt;br /&gt;
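On Linux, the individual record types above can be queried with &lt;code&gt;dig&lt;/code&gt;. A minimal sketch, wrapped in a function so nothing is queried until you call it; &lt;code&gt;example.com&lt;/code&gt; is a stand-in domain and &lt;code&gt;dig&lt;/code&gt; is assumed to be installed (dnsutils/bind-utils package):

```shell
# Query each common record type for a domain (sketch; assumes dig is installed)
show_records() {
  domain=${1:-example.com}          # stand-in domain
  for rtype in A NS CNAME MX SRV SOA; do
    echo "== $rtype =="
    dig +short "$domain" "$rtype"   # +short prints just the answer-section values
  done
}
# Usage: show_records example.com
```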
&lt;br /&gt;
= &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; =&lt;br /&gt;
&amp;lt;code&amp;gt;dig&amp;lt;/code&amp;gt; (domain information groper) and &amp;lt;code&amp;gt;nslookup&amp;lt;/code&amp;gt; (query Internet name servers interactively) are tools that query name servers. Unless a specific name server is specified as a commandline argument they will query the name server(s) found in &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt;. They simply don't look at alternative sources of host information such as the &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; file or other sources specified in &amp;lt;code&amp;gt;/etc/nsswitch.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To force all DNS queries through dnsmasq on your host, &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; should point to dnsmasq, i.e. it should look like:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#/etc/resolv.conf on sun&lt;br /&gt;
nameserver 127.0.0.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The hosts file lookup is part of the &amp;lt;tt&amp;gt;Name Service Switch&amp;lt;/tt&amp;gt;, configured at:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat /etc/nsswitch.conf&lt;br /&gt;
# /etc/nsswitch.conf&lt;br /&gt;
#&lt;br /&gt;
# Example configuration of GNU Name Service Switch functionality.&lt;br /&gt;
# If you have the `glibc-doc-reference' and `info' packages installed, try:&lt;br /&gt;
# `info libc &amp;quot;Name Service Switch&amp;quot;' for information about this file.&lt;br /&gt;
&lt;br /&gt;
passwd:         compat systemd&lt;br /&gt;
group:          compat systemd&lt;br /&gt;
shadow:         compat&lt;br /&gt;
gshadow:        files&lt;br /&gt;
&lt;br /&gt;
hosts:          files mdns4_minimal [NOTFOUND=return] dns myhostname&lt;br /&gt;
networks:       files&lt;br /&gt;
&lt;br /&gt;
protocols:      db files&lt;br /&gt;
services:       db files&lt;br /&gt;
ethers:         db files&lt;br /&gt;
rpc:            db files&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
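The &lt;code&gt;hosts:&lt;/code&gt; line above is the one that decides lookup order. A quick way to extract it, shown here against an inline sample so the snippet is self-contained; on a real system point awk at /etc/nsswitch.conf instead of the here-doc:

```shell
# Print the host-lookup order from an nsswitch.conf-style file.
# The here-doc is a self-contained sample; use /etc/nsswitch.conf for real.
awk '$1 == "hosts:" { $1 = ""; print "lookup order:" $0 }' <<'EOF'
passwd:         compat systemd
hosts:          files mdns4_minimal [NOTFOUND=return] dns myhostname
networks:       files
EOF
```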
&lt;br /&gt;
&lt;br /&gt;
Example entries in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
10.10.11.11                 echo-1.service.k8s.acme.cloud # via app-service LoadBalancer&lt;br /&gt;
10.10.22.22  k8s.acme.cloud echo-1.ingress.k8s.acme.cloud # via ingress-service (k8s entry point)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
can be verified using the &amp;lt;code&amp;gt;getent&amp;lt;/code&amp;gt; utility, which gets entries from the Name Service Switch libraries:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ getent hosts 10.10.11.11&lt;br /&gt;
10.10.11.11 echo-1.service.k8s.acme.cloud&lt;br /&gt;
$ getent hosts echo-1.ingress.k8s.acme.cloud&lt;br /&gt;
10.10.22.22 k8s.acme.cloud echo-1.ingress.k8s.acme.cloud&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= flush dns =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo systemctl is-active systemd-resolved.service&lt;br /&gt;
# -&amp;gt; active&lt;br /&gt;
&lt;br /&gt;
# Ubuntu 18.04, 20.04&lt;br /&gt;
resolvectl statistics               # show statistics, the same output as 'systemd-resolve --statistics'&lt;br /&gt;
sudo systemd-resolve --statistics   # or --reset-statistics - resets resolver statistics&lt;br /&gt;
&lt;br /&gt;
sudo systemd-resolve --flush-caches # Flush Ubuntu DNS Cache - Ubuntu &amp;lt;22.04 (old)&lt;br /&gt;
resolvectl flush-caches             # Flush Ubuntu DNS Cache - Ubuntu  22.04&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sudo systemctl restart nscd          # Other distros, eg arch Linux&lt;br /&gt;
&lt;br /&gt;
# Resolve a name without using local cache&lt;br /&gt;
sudo systemd-resolve --flush-caches&lt;br /&gt;
resolvectl flush-caches&lt;br /&gt;
&lt;br /&gt;
systemd-resolve --statistics | grep 'Current Cache Size' # -&amp;gt; Current Cache Size: 0&lt;br /&gt;
dig +short tvp.info @8.8.8.8&lt;br /&gt;
systemd-resolve --statistics | grep 'Current Cache Size' # -&amp;gt; Current Cache Size: 0&lt;br /&gt;
dig +short tvp.info&lt;br /&gt;
systemd-resolve --statistics | grep 'Current Cache Size' # -&amp;gt; Current Cache Size: 1&lt;br /&gt;
&lt;br /&gt;
# Display cached dns entries&lt;br /&gt;
sudo killall -USR1 systemd-resolved # it doesn't stop the service, it tells systemd-resolved to write all the current cache entries to the system log&lt;br /&gt;
journalctl -u systemd-resolved      # list the cached entries from the log&lt;br /&gt;
&lt;br /&gt;
## Oneliner&lt;br /&gt;
sudo killall -USR1 systemd-resolved; journalctl -u systemd-resolved --since &amp;quot;5s ago&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Netplan =&lt;br /&gt;
Netplan is the default network configuration tool on Ubuntu 18.04, replacing the &amp;lt;code&amp;gt;/etc/network/interfaces&amp;lt;/code&amp;gt; configuration file used in previous Ubuntu versions (DNS resolver settings, formerly edited in &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt;, are now handled via systemd-resolved).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the past, whenever you wanted to configure DNS resolvers in Linux you would simply open the /etc/resolv.conf file, edit the entries, save the file, and you were good to go. This file still exists, but it is a symlink controlled by the systemd-resolved service and should not be edited manually.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|As of 18/05/2020, Network Manager doesn't respect the Netplan &amp;lt;code&amp;gt;nameservers: addresses: [8.8.8.8,8.8.4.4]&amp;lt;/code&amp;gt; option; even when you specify &amp;lt;code&amp;gt;dhcp4-overrides: use-dns: false&amp;lt;/code&amp;gt; it still uses (and gives priority to) the default DHCP DNS servers, rendering any custom DNS servers redundant. The only way around this AFAIK is to configure the Ethernet connection as static.}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo vi /etc/netplan/enp0s3.yaml&lt;br /&gt;
network:&lt;br /&gt;
    version: 2&lt;br /&gt;
    renderer: NetworkManager&lt;br /&gt;
    ethernets:&lt;br /&gt;
       enp0s3:&lt;br /&gt;
          dhcp4: false&lt;br /&gt;
          addresses: [192.168.1.114/24]&lt;br /&gt;
          gateway4: 192.168.1.1&lt;br /&gt;
          nameservers:&lt;br /&gt;
             addresses: [8.8.8.8, 8.8.4.4]&lt;br /&gt;
&lt;br /&gt;
# Using this method you'll lose the Network Manager GUI and network icon, and let Netplan manage all devices&lt;br /&gt;
sudo netplan apply&lt;br /&gt;
systemd-resolve --status | grep 'DNS Servers' -A2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== DNS Management in Ubuntu 24.04: systemd-resolved vs. resolvconf ==&lt;br /&gt;
On modern Ubuntu systems, including 24.04, &amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; has become the primary service for managing DNS resolution. It replaces the legacy &amp;lt;code&amp;gt;resolvconf&amp;lt;/code&amp;gt; framework as the default, offering a more robust and integrated solution. For advanced users, the key takeaway is the shift in tooling and configuration from the traditional &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; file to the &amp;lt;code&amp;gt;resolvectl&amp;lt;/code&amp;gt; command and its integration with systemd.&lt;br /&gt;
&lt;br /&gt;
=== The Modern Default: systemd-resolved ===&lt;br /&gt;
&amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; acts as a local caching DNS stub resolver. It intercepts DNS queries from local applications and handles them intelligently, providing advanced features and better integration with modern network configurations.&lt;br /&gt;
&lt;br /&gt;
'''Core Function:''' Provides local DNS caching, DNSSEC validation, and Link-Local Multicast Name Resolution (LLMNR).&lt;br /&gt;
'''How it Works:''' By default, &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; is a symlink to a file managed by &amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;/run/systemd/resolve/stub-resolv.conf&amp;lt;/code&amp;gt;). This file typically points to the local stub resolver at &amp;lt;code&amp;gt;127.0.0.53&amp;lt;/code&amp;gt;. Applications can also query &amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; directly via D-Bus, bypassing &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; entirely.&lt;br /&gt;
'''Key Features:'''&lt;br /&gt;
&lt;br /&gt;
'''DNS Caching:''' Improves performance by caching previous lookups.&lt;br /&gt;
&lt;br /&gt;
'''DNSSEC:''' Validates DNS records to protect against spoofing.&lt;br /&gt;
&lt;br /&gt;
'''Per-Link Configuration:''' Intelligently handles DNS servers for multiple network interfaces (e.g., Ethernet, Wi-Fi, and a VPN connection simultaneously).&lt;br /&gt;
&lt;br /&gt;
'''Management:''' The primary tool for interacting with this service is &amp;lt;code&amp;gt;resolvectl&amp;lt;/code&amp;gt;. You can use &amp;lt;code&amp;gt;resolvectl status&amp;lt;/code&amp;gt; to view current DNS servers and &amp;lt;code&amp;gt;resolvectl query &amp;lt;domain&amp;gt;&amp;lt;/code&amp;gt; to test resolution.&lt;br /&gt;
&lt;br /&gt;
'''Integration:''' It integrates seamlessly with network management tools like Netplan and NetworkManager, which automatically feed it DNS server information received via DHCP or static configurations.&lt;br /&gt;
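A minimal inspection sequence for the above. The first two commands are safe to run anywhere; the commented resolvectl commands assume systemd-resolved is active:

```shell
# Where does /etc/resolv.conf actually point? A symlink into
# /run/systemd/resolve/ indicates systemd-resolved is managing it.
target=$(readlink -f /etc/resolv.conf)
echo "resolv.conf resolves to: $target"

# Nameservers currently written into the file (the stub resolver is 127.0.0.53)
grep '^nameserver' /etc/resolv.conf || true

# If systemd-resolved is active (assumption), show servers and test resolution:
# resolvectl status
# resolvectl query example.com
```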
&lt;br /&gt;
=== The Legacy Framework: resolvconf ===&lt;br /&gt;
&amp;lt;code&amp;gt;resolvconf&amp;lt;/code&amp;gt; is a script-based framework designed to manage the contents of &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt;. It acts as an intermediary, collecting DNS information from various sources and writing it to the configuration file.&lt;br /&gt;
&lt;br /&gt;
'''Core Function:''' To dynamically generate &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; by aggregating DNS information from sources like DHCP clients, VPN software, and static network configurations.&lt;br /&gt;
'''Status on Modern Ubuntu:''' While the &amp;lt;code&amp;gt;resolvconf&amp;lt;/code&amp;gt; package may still be installed for compatibility with older software, it is no longer the default manager of DNS resolution. On a standard Ubuntu 24.04 installation, &amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; handles its responsibilities. If both are present, &amp;lt;code&amp;gt;systemd-resolved&amp;lt;/code&amp;gt; typically takes precedence.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://en.wikipedia.org/wiki/List_of_DNS_record_types List of DNS record types] Wikipedia&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7051</id>
		<title>Ubuntu Setup</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7051"/>
		<updated>2025-08-23T12:31:45Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* gnome-shell-system-monitor-applet - cpu, memory indicators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you are using Ubuntu for various Linux projects, you will find that it comes pre-installed with many packages. On the other hand, installing just the minimal version seems too extreme. Therefore I started maintaining a list of unnecessary packages and a one-liner that removes them all. Please feel free to modify it for your needs.&lt;br /&gt;
&lt;br /&gt;
= Default partitioning =&lt;br /&gt;
On virtual systems, and e.g. on laptops, the schema below will be applied:&lt;br /&gt;
:[[File:ClipCapIt-200620-131502.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Eg. for 4G memory and 50G storage system&lt;br /&gt;
&lt;br /&gt;
/dev/mapper/ubuntu--vg-root        mount_point: /&lt;br /&gt;
/dev/mapper/ubuntu--vg-swap_1&lt;br /&gt;
/dev/sda&lt;br /&gt;
 /dev/sda1 (50G)&lt;br /&gt;
&lt;br /&gt;
LVM VG ubuntu-vg, LV root    as ext4&lt;br /&gt;
LVM VG ubuntu-vg, LV swap_1 as swap&lt;br /&gt;
&lt;br /&gt;
#Boot device:&lt;br /&gt;
/dev/mapper/ubuntu--vg-root&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As a handy practice you may create a 100G thin-provisioned virtual disk, then create the root and swap logical volumes without utilising all the space at once, extending them only when needed. This method eliminates adding new disks to VMs, saving time and effort.&lt;br /&gt;
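Growing the root LV later can be done online. A sketch of the commands involved, wrapped in a function so nothing is executed by pasting it; the names ubuntu-vg/root and /dev/sda1 are assumptions matching the layout shown below:

```shell
# Sketch: grow the root logical volume after enlarging the underlying disk.
# Not invoked here - destructive on a real system. VG/LV/PV names are assumptions.
grow_root() {
  sudo pvresize /dev/sda1                        # make LVM see the bigger partition
  sudo lvextend -l +50%FREE /dev/ubuntu-vg/root  # give half the new free space to root
  sudo resize2fs /dev/mapper/ubuntu--vg-root     # grow the ext4 filesystem online
}
# Usage: grow_root   (after extending the virtual disk and partition)
```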
&lt;br /&gt;
&lt;br /&gt;
Example LVM setup, here using 30G Physical Volume(99.9% used), 1 Volume Group and 2 Logical Volumes (root and swap). &lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo pvs&lt;br /&gt;
  PV         VG        Fmt  Attr PSize   PFree &lt;br /&gt;
  /dev/sda1  ubuntu-vg lvm2 a--  &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo vgs&lt;br /&gt;
  VG        #PV #LV #SN Attr   VSize   VFree &lt;br /&gt;
  ubuntu-vg   1   2   0 wz--n- &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo lvs&lt;br /&gt;
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert&lt;br /&gt;
  root   ubuntu-vg -wi-ao----  28.94g                                                    &lt;br /&gt;
  swap_1 ubuntu-vg -wi-ao---- 976.00m                                                    &lt;br /&gt;
piotr@u18:~$&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ lsblk /dev/sda --fs&lt;br /&gt;
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT&lt;br /&gt;
sda                                                                            &lt;br /&gt;
└─sda1                LVM2_member       rP18Kb-Q12j-wjVf-C1iV-uy42-BUJD-aWFuO7 &lt;br /&gt;
  ├─ubuntu--vg-root   ext4              fad04a3b-5fa3-4a03-bbd6-24a93cda1eb3   /&lt;br /&gt;
  └─ubuntu--vg-swap_1 swap              47cd084b-89b0-4cd5-bdb8-367238842ba1   [SWAP]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= List of unnecessary packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove libreoffice-* #Remove LibreOffice&lt;br /&gt;
sudo apt-get remove unity-lens-* #This package contains photos scopes which allow Unity to search for local and online photos.&lt;br /&gt;
sudo apt-get remove shotwell* #Photo organizer&lt;br /&gt;
sudo apt-get remove simple-scan #Scanner software&lt;br /&gt;
sudo apt-get remove empathy* #Internet messaging ~13M&lt;br /&gt;
sudo apt-get remove thunderbird* #Email client ~61M&lt;br /&gt;
sudo apt-get remove unity-scope-gdrive #Google Drive scope for Unity ~116KB&lt;br /&gt;
sudo apt-get remove cheese* #Cheese Webcam Booth - webcam software&lt;br /&gt;
sudo apt-get remove brasero* #Brasero Disc Burner ~6.5MB&lt;br /&gt;
sudo apt-get remove gnome-bluetooth #Package to manipulate bluetooth devices using Gnome desktop ~2MB&lt;br /&gt;
sudo apt-get remove gnome-orca #Orca Screen Reader - provides access to graphical desktop environments via synthesised speech and/or refreshable braille&lt;br /&gt;
sudo apt-get remove unity-webapps-common #Amazon Unity WebApp integration scripts ~133KB&lt;br /&gt;
sudo apt-get remove ibus-pinyin #IBus Bopomofo Preferences - ibus-pinyin is a IBus based IM engine for Chinese ~1.4MB&lt;br /&gt;
sudo apt-get remove printer-driver-foo2zjs* #Reactivate HP LaserJet 1018/1020 after reloading paper ~3.2MB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remove unnecessary packages - one liner =&lt;br /&gt;
;Ubuntu 12, 14, 16&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get remove libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* unity-scope-gdrive cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca unity-webapps-common ibus-pinyin printer-driver-foo2zjs*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 18. It's recommended to choose ''Minimal Install'', so most of the packages below won't get installed.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get purge libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca ibus-pinyin printer-driver-foo2zjs* xul-ext-ubufox speech-dispatcher* \&lt;br /&gt;
rhythmbox* printer-driver-* mythes-en-us mobile-broadband-provider-inf* \&lt;br /&gt;
evolution-data-server* espeak-ng-data:amd64 bluez* ubuntu-web-launchers \&lt;br /&gt;
transmission-*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get purge xul-ext-ubufox                           # Canonical FF customizations for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-mahjongg gnome-mines gnome-sudoku # games, works for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-video-effects gstreamer1.0-* &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; XTREME&lt;br /&gt;
Uninstall the Ubuntu software update notifier&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove update-notifier&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Uninstall locales - unused languages etc =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install localepurge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Set apt-get to not install recommended and suggested packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo bash -c 'cat &amp;gt; /etc/apt/apt.conf.d/01no-recommend &amp;lt;&amp;lt; EOF&lt;br /&gt;
APT::Install-Recommends &amp;quot;0&amp;quot;;&lt;br /&gt;
APT::Install-Suggests &amp;quot;0&amp;quot;;&lt;br /&gt;
EOF'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see if apt reads this, enter this in command line (as root or regular user):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt-config dump | grep -e Recommends -e Suggests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Install necessary packages =&lt;br /&gt;
&lt;br /&gt;
Adobe Flash Player&lt;br /&gt;
 sudo apt-get install flashplugin-installer&lt;br /&gt;
&lt;br /&gt;
Java JRE&lt;br /&gt;
This will install the default Java version for your distro, plus the IcedTea plugin for using Firefox with Java.&lt;br /&gt;
 sudo apt-get install default-jre icedtea-plugin&lt;br /&gt;
&lt;br /&gt;
Unity Settings&lt;br /&gt;
 sudo apt-get install unity-control-center&lt;br /&gt;
&lt;br /&gt;
Opera&lt;br /&gt;
&lt;br /&gt;
Add Opera repository &amp;lt;code&amp;gt;'''deb &amp;lt;nowiki&amp;gt;http://deb.opera.com/opera/&amp;lt;/nowiki&amp;gt; stable non-free'''&amp;lt;/code&amp;gt; to the apt-get source list in &amp;lt;code&amp;gt;/etc/apt/sources.list&amp;lt;/code&amp;gt;. Then import a public PGP repository key.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;deb http://deb.opera.com/opera/ stable non-free&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -qO - http://deb.opera.com/archive.key | sudo apt-key add -&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install opera&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Silverlight&lt;br /&gt;
&lt;br /&gt;
Pipelight has been released and we can use it for Silverlight as the best alternative to Moonlight.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-add-repository ppa:ehoover/compholio&lt;br /&gt;
sudo apt-add-repository ppa:mqchael/pipelight&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install pipelight&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= GUI tools =&lt;br /&gt;
* [https://github.com/hluk/CopyQ/releases copyQ] clipboard manager&lt;br /&gt;
* VisualVM&lt;br /&gt;
&lt;br /&gt;
= Customise Ubuntu =&lt;br /&gt;
==Fix Ubuntu Unity Dash Search for Applications and Files==&lt;br /&gt;
 sudo apt-get install unity-lens-files unity-lens-applications #log out and log back in required&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;lt;17.10 missing Control Center==&lt;br /&gt;
 sudo apt-get install unity-control-center --no-install-recommends&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;gt;18.04 missing System Settings==&lt;br /&gt;
 sudo apt install gnome-control-center&lt;br /&gt;
&lt;br /&gt;
==Remove background wallpaper ==&lt;br /&gt;
Tested on Ubuntu 14.04, 16.04 and 18.04&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.background active true&lt;br /&gt;
gsettings set org.gnome.desktop.background draw-background false        #disable &lt;br /&gt;
gsettings set org.gnome.desktop.background primary-color &amp;quot;#000000&amp;quot;      #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background secondary-color &amp;quot;#000000&amp;quot;    #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background color-shading-type &amp;quot;solid&amp;quot;   #set solid colour&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///dev/null #remove wallpaper, not perfect but nothing worked in U15.10&lt;br /&gt;
gsettings set com.canonical.unity-greeter draw-user-backgrounds false   #disable greeter user backgrounds (did not work)&lt;br /&gt;
&lt;br /&gt;
# Reset background picture to origin, U15.10&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///usr/share/backgrounds/warty-final-ubuntu.png &lt;br /&gt;
&lt;br /&gt;
# Sets Unity greeter background, &amp;lt;17.04&lt;br /&gt;
gsettings set com.canonical.unity-greeter background /usr/share/backgrounds/warty-final-ubuntu.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Disable screen lock out==&lt;br /&gt;
&amp;lt;code&amp;gt;dconf&amp;lt;/code&amp;gt; is the legacy tool for configuring &amp;lt;tt&amp;gt;gnome&amp;lt;/tt&amp;gt;; nowadays the more modern way is to use &amp;lt;code&amp;gt;gsettings&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/idle-activation-enabled false  #gnome&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/lock-enabled            false&lt;br /&gt;
&lt;br /&gt;
# Unity - Ubuntu 14.04, 16.04&lt;br /&gt;
gsettings set org.gnome.desktop.session     idle-delay   0      #disable the screen blackout:(0 to disable)&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false  #disable the screen lock&lt;br /&gt;
&lt;br /&gt;
# VirtualBox &amp;gt; Ubuntu 18.04 Disabling Xserver screen timeouts&lt;br /&gt;
xset s off     # Xserver s parameter sets screensaver to off&lt;br /&gt;
xset s noblank # prevent the display from blanking &lt;br /&gt;
xset -dpms     # prevent the monitor's DPMS energy saver from kicking in&lt;br /&gt;
&lt;br /&gt;
# Gnome - Ubuntu 18.04 LTS, Settings &amp;gt; Power &amp;gt; Blank screen &amp;gt; set to: Never&lt;br /&gt;
gsettings get org.gnome.desktop.lockdown    disable-lock-screen      # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.lockdown    disable-lock-screen true # set disabled&lt;br /&gt;
gsettings get org.gnome.desktop.screensaver lock-enabled             # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false       # set disabled&lt;br /&gt;
dconf write  /org/gnome/desktop/screensaver/lock-enabled false       # set disabled using dconf&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false # some say it's last resort :)&lt;br /&gt;
&lt;br /&gt;
# Power management&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active true  #set gnome to be the default power management run&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false #turn off power management&lt;br /&gt;
&lt;br /&gt;
# last resort as it was a bug in Ubuntu 11.10 with DPMS&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false&lt;br /&gt;
gsettings set org.gnome.desktop.session idle-delay 2400&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Verify by navigating in &amp;lt;tt&amp;gt;dconf-editor&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/org/gnome/desktop/screensaver/&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Change number of workspaces==&lt;br /&gt;
To get the current values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/hsize&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/vsize&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To set new values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/compiz/profiles/unity/plugins/core/hsize 2&lt;br /&gt;
# or&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ hsize 4&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ vsize 4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Clean up motd messages ==&lt;br /&gt;
At login Ubuntu displays a number of standard messages that take up terminal space, potentially losing the context of previous operations.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-134-generic x86_64)&lt;br /&gt;
&lt;br /&gt;
 * Documentation:  https://help.ubuntu.com&lt;br /&gt;
 * Management:     https://landscape.canonical.com&lt;br /&gt;
 * Support:        https://ubuntu.com/advantage&lt;br /&gt;
&lt;br /&gt;
  Get cloud support with Ubuntu Advantage Cloud Guest:&lt;br /&gt;
    http://www.ubuntu.com/business/services/cloud&lt;br /&gt;
&lt;br /&gt;
1 package can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
New release '18.04.1 LTS' available.&lt;br /&gt;
Run 'do-release-upgrade' to upgrade to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Fri Aug 31 12:11:28 2018 from 10.0.2.2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is managed by files in &amp;lt;code&amp;gt;/etc/update-motd.d/&amp;lt;/code&amp;gt;, so deleting them will remove the clutter from the screen&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /etc/update-motd.d/&lt;br /&gt;
00-header             51-cloudguest         91-release-upgrade    98-fsck-at-reboot     &lt;br /&gt;
10-help-text          90-updates-available  97-overlayroot        98-reboot-required &lt;br /&gt;
&lt;br /&gt;
# Ubuntu Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1022-azure x86_64)&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
sudo rm /etc/update-motd.d/{10-help-text,50-landscape-sysinfo,50-motd-news,51-cloudguest,80-livepatch,95-hwe-eol}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This cuts the output down to the message below, Ubuntu 18.04 in AWS&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
&lt;br /&gt;
0 packages can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Thu Jan 31 17:09:38 2019 from 10.10.11.11&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Useful setups =&lt;br /&gt;
== Image converter ==&lt;br /&gt;
nautilus-image-converter is a Nautilus extension to mass resize or rotate images. It adds two context menu items in Nautilus, so you can right-click and choose &amp;quot;Resize Image&amp;quot; or &amp;quot;Rotate Image&amp;quot;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 24.04 with Gnome&lt;br /&gt;
sudo apt-get install nautilus-image-converter&lt;br /&gt;
&lt;br /&gt;
# Restart to see the new context menu&lt;br /&gt;
nautilus -q&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Call screen saver from a terminal to blank all screens ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 18.04 with Gnome&lt;br /&gt;
sudo apt-get install gnome-screensaver&lt;br /&gt;
gnome-screensaver-command -a #controls GNOME screensaver, -a activate (blank the screen)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create application launcher ==&lt;br /&gt;
;Ubuntu 18.04&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the GNOME-panel toolset&lt;br /&gt;
sudo apt-get install --no-install-recommends gnome-panel&lt;br /&gt;
&lt;br /&gt;
# Every user launcher&lt;br /&gt;
sudo gnome-desktop-item-edit /usr/share/applications/VisualVM.desktop --create-new&lt;br /&gt;
&lt;br /&gt;
# Local user only, the filename by default is Name-of-application.desktop&lt;br /&gt;
gnome-desktop-item-edit ~/.local/share/applications --create-new &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190807-080016.PNG]]&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 19.10, 20.04&lt;br /&gt;
In the above releases &amp;lt;code&amp;gt;gnome-desktop-item-edit&amp;lt;/code&amp;gt; has been removed from the &amp;lt;code&amp;gt;gnome-panel&amp;lt;/code&amp;gt; package; as an alternative, &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files can be created manually.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /usr/share/applications/APPNAME.desktop&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=&amp;lt;NAME OF THE APPLICATION&amp;gt;&lt;br /&gt;
Comment=&amp;lt;A SHORT DESCRIPTION&amp;gt;&lt;br /&gt;
Exec=&amp;lt;COMMAND-OR-FULL-PATH-TO-LAUNCH-THE-APPLICATION&amp;gt;&lt;br /&gt;
Type=Application&lt;br /&gt;
Terminal=false&lt;br /&gt;
Icon=&amp;lt;ICON NAME OR PATH TO ICON&amp;gt;&lt;br /&gt;
NoDisplay=false&lt;br /&gt;
Keywords=&amp;lt;eg. sql&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It's optional, but you may need to right-click the file and select 'Allow Launching', in addition to setting executable permissions. Usual locations of &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/share/applications/&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/var/lib/snapd/desktop/applications/&amp;lt;/code&amp;gt; for snap applications&lt;br /&gt;
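&lt;br /&gt;
A sketch of setting those permissions from a terminal (the paths are examples; the &amp;lt;code&amp;gt;gio&amp;lt;/code&amp;gt; metadata step is the command-line equivalent of 'Allow Launching' for an icon placed on the desktop):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Make the launcher executable&lt;br /&gt;
chmod +x ~/.local/share/applications/APPNAME.desktop&lt;br /&gt;
# Mark a desktop icon as trusted, i.e. 'Allow Launching'&lt;br /&gt;
gio set ~/Desktop/APPNAME.desktop metadata::trusted true&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;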
&lt;br /&gt;
== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet gnome-shell-system-monitor-applet] - cpu, memory indicators ==&lt;br /&gt;
System information such as memory usage, cpu usage, network rates and more can be displayed in the notification area in GNOME Shell.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System-monitor extensions:&lt;br /&gt;
* [https://extensions.gnome.org/extension/120/system-monitor/ system-monitor] by paradoxxxzero on [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet github] supports Gnome-shell up to v40. It appears to be an abandoned project.&lt;br /&gt;
* [https://extensions.gnome.org/extension/3010/system-monitor-next/ system-monitor-next] by mgalgs on [https://github.com/mgalgs/gnome-shell-system-monitor-applet github] supports Gnome-shell v40+; it's a fork of the above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All extensions:&lt;br /&gt;
* https://extensions.gnome.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The current version of Firefox is packaged as a snap. One of the issues with this is that it cannot work with the Gnome Extensions website.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu 24.04 (June 2024)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ubuntu version tested: v20/22/24 LTS&lt;br /&gt;
lsb_release -d&lt;br /&gt;
Description:	Ubuntu 24.04.3 LTS&lt;br /&gt;
&lt;br /&gt;
gnome-shell --version&lt;br /&gt;
GNOME Shell 46.0&lt;br /&gt;
&lt;br /&gt;
# Install the Gnome-Shell-Extension &amp;amp; Manager&lt;br /&gt;
sudo apt install gnome-shell-extensions               # Ubuntu 20.04 LTS already has this package, 24.04 needs it installed&lt;br /&gt;
sudo apt install gnome-shell-extension-manager        # Ubuntu 22.04|24.04 LTS&lt;br /&gt;
&lt;br /&gt;
# 1. Open `Extensions` app, turn &amp;quot;Use Extensions&amp;quot;. It is already turned on in Ubuntu 24.04.3 LTS.&lt;br /&gt;
# 2. Open Browse tab &amp;gt; search for 'system-monitor-next' by mgalgs, click &amp;quot;Install&amp;quot;.&lt;br /&gt;
# 3. &amp;quot;cpu/mem/net&amp;quot; indicators will appear in the system tray.&lt;br /&gt;
&lt;br /&gt;
# Additional steps for Ubuntu &amp;lt; 24.04&lt;br /&gt;
sudo apt install gnome-tweaks                         # GUI to manage gnome-extensions&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
sudo apt install gnome-shell-extension-system-monitor # requires logging out afterwards&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Download the extension from&lt;br /&gt;
## https://extensions.gnome.org/extension/120/system-monitor/&lt;br /&gt;
&lt;br /&gt;
# Never worked out how to use this direct download and install via 'gnome-extensions install &amp;lt;extension_name&amp;gt;'&lt;br /&gt;
## wget https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/archive/v38.zip&lt;br /&gt;
## gnome-extensions install &amp;lt;system-monitor@paradoxxx.zero.gmail.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Enable extension using cli&lt;br /&gt;
gnome-extensions enable system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
gnome-extensions list --user&lt;br /&gt;
clipboard-indicator@tudmotu.com&lt;br /&gt;
system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-210105-084527.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/issues/737#issuecomment-1230654455 Ubuntu 22.04 workaround for the OUTDATED extension] ===&lt;br /&gt;
{{Note|Workaround still needed in August 2022}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
git clone https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet.git&lt;br /&gt;
cd gnome-shell-system-monitor-applet # commit b359d88 verified&lt;br /&gt;
vi system-monitor@paradoxxx.zero.gmail.com/metadata.json &lt;br /&gt;
# | change &amp;quot;version&amp;quot;: -1 to &amp;quot;version&amp;quot;: 42&lt;br /&gt;
make install&lt;br /&gt;
# log out and back in (required)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Snapd - Chromium =&lt;br /&gt;
Since Ubuntu 19+, Chromium gets installed via a snapd package. This is a confined installation that has access limited to only certain directories. When working with AWS we need access to the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder to get an ec2 machine password. Access to this folder is denied, but we can bind mount the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder into the snap container directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ snap list chromium &lt;br /&gt;
Name      Version        Rev   Tracking       Publisher   Notes&lt;br /&gt;
chromium  86.0.4240.111  1373  latest/stable  canonical✓  -&lt;br /&gt;
&lt;br /&gt;
# Create a mount point in the chromium snap $HOME dir&lt;br /&gt;
mkdir ~/snap/chromium/current/.ssh&lt;br /&gt;
sudo mount --bind ~/.ssh/ ~/snap/chromium/current/.ssh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
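&lt;br /&gt;
The bind mount above does not survive a reboot. As a sketch, an &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt; entry of the following shape can make it persistent (the home path &amp;lt;code&amp;gt;/home/USER&amp;lt;/code&amp;gt; is an example placeholder):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# /etc/fstab - bind ~/.ssh into the Chromium snap home at boot&lt;br /&gt;
/home/USER/.ssh  /home/USER/snap/chromium/current/.ssh  none  bind  0  0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;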
&lt;br /&gt;
= Screen shooting =&lt;br /&gt;
In Ubuntu 20.04 Shutter is not part of the default repositories. It can be added via a PPA:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo add-apt-repository -y ppa:linuxuprising/shutter&lt;br /&gt;
sudo apt-get install shutter&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Audio - [https://rastating.github.io/setting-default-audio-device-in-ubuntu-18-04/ set defaults] =&lt;br /&gt;
To preserve settings using a GUI you can install [https://freedesktop.org/software/pulseaudio/pavucontrol/ PulseAudio Volume Control] &amp;lt;code&amp;gt;pavucontrol&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# install&lt;br /&gt;
sudo apt install pavucontrol # Ubuntu 20.04&lt;br /&gt;
# run&lt;br /&gt;
pavucontrol&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default output/input device. In Ubuntu, PulseAudio is used to control audio devices. It uses the following configuration files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/etc/pulse/default.pa # system wide&lt;br /&gt;
~/.config/pulse       # user configuration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set defaults&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List devices: modules, sinks, sources, sink-inputs, source-outputs, clients, samples, cards&lt;br /&gt;
# sinks - outputs, sink-inputs, sources - all input/output including RUNNING and SUSPENDED devices&lt;br /&gt;
$ pactl list short sources | column -t&lt;br /&gt;
5   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_5__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
6   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_4__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
7   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_3__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
8   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__sink.monitor    module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
9   alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__source           module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
10  alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_6__source         module-alsa-card.c  s16le  4ch  48000Hz  SUSPENDED&lt;br /&gt;
15  alsa_output.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.analog-stereo.monitor     module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  SUSPENDED&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  SUSPENDED&lt;br /&gt;
20  alsa_input.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.iec958-stereo              module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
&lt;br /&gt;
# Set default output device. Tab autocompletion should work (U20.04)&lt;br /&gt;
pactl set-default-sink alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
# Set default input device&lt;br /&gt;
pactl set-default-source alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Test: play some audio, then run. IDLE means in use&lt;br /&gt;
pactl list short sources | column -t | grep -e RUNNING -e IDLE&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  IDLE&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  RUNNING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make it permanent by setting the default devices in the PulseAudio system configuration file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Output device&lt;br /&gt;
OUTPUT_DEVICE=alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-sink\) output/\1 ${OUTPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa # remove '-i' to test before apply&lt;br /&gt;
# Input device&lt;br /&gt;
INPUT_DEVICE=alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-source\) input/\1 ${INPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa&lt;br /&gt;
&lt;br /&gt;
vi /etc/pulse/default.pa # make sure lines below are in place&lt;br /&gt;
### Make some devices default&lt;br /&gt;
set-default-sink   alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
set-default-source  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Delete local user profile and restart system, after boot new defaults should be set&lt;br /&gt;
rm -r ~/.config/pulse&lt;br /&gt;
&lt;br /&gt;
# After reboot, defaults should be set&lt;br /&gt;
cat ~/.config/pulse/*default*&lt;br /&gt;
alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
PulseAudio cli&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pacmd&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; help # lists all available commands&lt;br /&gt;
&lt;br /&gt;
pulseaudio --check # Check if any pulseaudio instance is running. It normally prints no output, just exit code. 0 means running&lt;br /&gt;
pulseaudio --kill  # kill, then --start&lt;br /&gt;
pulseaudio -D      # start pulseaudio as a daemon&lt;br /&gt;
# | using /etc/pulse/daemon.conf&lt;br /&gt;
&lt;br /&gt;
# Pulseaudio is a user service&lt;br /&gt;
systemctl --user restart pulseaudio.service&lt;br /&gt;
systemctl --user restart pulseaudio.socket&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have a Dell D6000 port replicator that gets randomly disconnected, causing audio to switch to the newly connected device, i.e. itself. As a workaround, commenting out the lines below stops this behaviour.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /etc/pulse/default.pa&lt;br /&gt;
### Use hot-plugged devices like Bluetooth or USB automatically (LP: #1702794)&lt;br /&gt;
# .ifexists module-switch-on-connect.so&lt;br /&gt;
# load-module module-switch-on-connect&lt;br /&gt;
# .endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Input devices =&lt;br /&gt;
The motivation is to enable horizontal scrolling in Ubuntu 20.04 using a Perixx Gaming Mouse Mx2000&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
xinput list&lt;br /&gt;
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]&lt;br /&gt;
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ Holtek USB Gaming Mouse                 	id=11	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Mouse             	id=14	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Touchpad          	id=15	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ TPPS/2 Elan TrackPoint                  	id=19	[slave  pointer  (2)]&lt;br /&gt;
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]&lt;br /&gt;
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Power Button                            	id=6	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]&lt;br /&gt;
    ↳ CHICONY HP Basic USB Keyboard           	id=9	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=10	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated C         	id=12	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated I         	id=13	[slave  keyboard (3)]&lt;br /&gt;
    ↳ sof-hda-dsp Headset Jack                	id=16	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Intel HID events                        	id=17	[slave  keyboard (3)]&lt;br /&gt;
    ↳ AT Translated Set 2 keyboard            	id=18	[slave  keyboard (3)]&lt;br /&gt;
    ↳ ThinkPad Extra Buttons                  	id=20	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=21	[slave  keyboard (3)]&lt;br /&gt;
&lt;br /&gt;
# test mouse aka Virtual core pointer&lt;br /&gt;
xinput test 11&lt;br /&gt;
motion a[0]=2023  # &amp;lt;- cursor moving&lt;br /&gt;
motion a[0]=2024 a[1]=1411 &lt;br /&gt;
motion a[3]=19545 # &amp;lt;- scroll down &lt;br /&gt;
button press   5 &lt;br /&gt;
button release 5 &lt;br /&gt;
&lt;br /&gt;
# test 'virtual core keyboard' aka additional programmable buttons&lt;br /&gt;
## '10' - this virtual keyboard for all buttons except the scrolling wheel&lt;br /&gt;
xinput test 10&lt;br /&gt;
key press   37&lt;br /&gt;
key press   38&lt;br /&gt;
&lt;br /&gt;
## '21' - this is scrolling wheel buttons left/right, not scrolling itself&lt;br /&gt;
xinput test 21&lt;br /&gt;
key press   248 &lt;br /&gt;
key release 248 &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List the properties of a device. We want to see the 'horizontal scrolling wheel buttons'&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ xinput list-props  21&lt;br /&gt;
Device 'Holtek USB Gaming Mouse':&lt;br /&gt;
	Device Enabled (169):	1&lt;br /&gt;
	Coordinate Transformation Matrix (171):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000&lt;br /&gt;
	libinput Send Events Modes Available (291):	1, 0&lt;br /&gt;
	libinput Send Events Mode Enabled (292):	0, 0&lt;br /&gt;
	libinput Send Events Mode Enabled Default (293):	0, 0&lt;br /&gt;
	Device Node (294):	&amp;quot;/dev/input/event10&amp;quot;&lt;br /&gt;
	Device Product ID (295):	1241, 41063&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
[[Category:linux]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7050</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7050"/>
		<updated>2025-08-23T11:34:39Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Images aka box management */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each of these projects has its own Vagrantfile. The Vagrantfile is a text file that Vagrant reads to set up our environment: it describes what OS to use, how much RAM, what software should be installed, etc. You can version control this file.&lt;br /&gt;
&lt;br /&gt;
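A minimal Vagrantfile, as a sketch (the box name and resource values are example choices):&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/bionic64&amp;quot;          # which OS image to use&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
    vb.memory = 1024                          # how much RAM&lt;br /&gt;
  end&lt;br /&gt;
  # what software to install&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;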
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:-2.2.18}&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in the Ruby language.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; (image) management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs from providers in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp, box image: precise64; this is a preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Boxes can be specified via URLs or local file paths; VirtualBox can only nest 32-bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; (image) migration =&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
The default path of box images; it can be changed via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
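A minimal sketch of relocating the box cache; both paths below are throw-away examples created under temp directories, not real defaults:&lt;br /&gt;

```shell
# Sketch: relocate the Vagrant home (box cache) to a new path.
# Both paths here are temporary examples, not real defaults.
OLD_HOME="$(mktemp -d)"
NEW_HOME="$(mktemp -d)/vagrant.d"

# pretend an existing box cache lives in OLD_HOME
mkdir -p "$OLD_HOME/boxes/ubuntu-VAGRANTSLASH-bionic64"

# copy the cache across and point Vagrant at the new location
mkdir -p "$NEW_HOME"
cp -a "$OLD_HOME/." "$NEW_HOME/"
export VAGRANT_HOME="$NEW_HOME"

ls "$VAGRANT_HOME/boxes"   # the box directories travelled with the cache
```

Exporting the variable in your shell profile makes the move permanent.&lt;br /&gt;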
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
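Since a &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is just a gzipped tarball, the layout can be inspected with plain tar; a sketch using a synthetic box built from empty placeholder files:&lt;br /&gt;

```shell
# Build a synthetic .box (a gzipped tarball) containing the 4 files
# listed above, then list and unpack it with plain tar.
work="$(mktemp -d)" && cd "$work"
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar czf synthetic.box Vagrantfile box-disk1.vmdk box.ovf metadata.json

tar tzf synthetic.box                 # lists the 4 files
mkdir unpacked && tar xzf synthetic.box -C unpacked
```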
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the software changes we made; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Copy the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file and restore&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore: create or re-use a Vagrantfile using the box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot; # the box name as added above, not the .box file name&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create a Vagrant project by creating a ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    #initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below, an example of nested virtualisation: a 64-bit VM (host) runs a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and the hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. This can be mounted manually from within the VM as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update to [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stop it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and power down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Managing snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in a multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrant&lt;br /&gt;
 /home/peter/projects/Vagrant&lt;br /&gt;
 /home/peter/Vagrant&lt;br /&gt;
 /home/Vagrant&lt;br /&gt;
 /Vagrant&lt;br /&gt;
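The climb above can be emulated with a small shell function; the function name and the demo tree are purely illustrative:&lt;br /&gt;

```shell
# Sketch: emulate Vagrant's lookup by climbing from a directory
# towards / until a Vagrantfile is found. The demo tree is throw-away.
find_vagrantfile() {
  dir="$1"
  while [ "$dir" != "/" ]; do
    [ -f "$dir/Vagrantfile" ] && { echo "$dir/Vagrantfile"; return 0; }
    dir="$(dirname "$dir")"   # climb one level up
  done
  [ -f /Vagrantfile ] && echo /Vagrantfile
}

root="$(mktemp -d)"
mkdir -p "$root/projects/la"
touch "$root/Vagrantfile"              # Vagrantfile two levels up
find_vagrantfile "$root/projects/la"   # prints the path to $root/Vagrantfile
```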
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; block.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assigned&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;, auto_config: false  #optionally disable auto-configuration&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks are accessible from outside the host machine, including from the Internet, and are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during ''vagrant up''.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile, eg. ~/git/vagrant/Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
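A complete minimal Vagrantfile carrying the rule above; it is written via a shell heredoc here purely so the fragment is easy to verify, and the box name is just an example:&lt;br /&gt;

```shell
# Sketch: a minimal Vagrantfile with the forwarding rule above.
work="$(mktemp -d)" && cd "$work"
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  # host (hypervisor) port 4567 -> guest port 80
  config.vm.network :forwarded_port, guest: 80, host: 4567
end
EOF
grep forwarded_port Vagrantfile
```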
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow where you edit files with an IDE installed on the host machine but access them within the GuestOS. The files sync in both directions, as it is a mount on the GuestOS. Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path that exists on the '''host machine'''; if relative, it is relative to the project root folder (where the Vagrantfile lives). The 2nd argument is the full path of the mount point on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. By default VirtualBox disables symlinks in shared folders as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                       path on the host   mount on the guestOS&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;, &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
#    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
     # symlinks should be active in root of vm by default&lt;br /&gt;
#    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It is enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*the WSL and Windows Vagrant versions must match&lt;br /&gt;
*the terminal you run WSL or PowerShell in must have elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
*use native Bash.exe, not eg. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant to &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt; in WSL, because the Microsoft filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose the vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name it vagrant-external, and press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run the Vagrantfile&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell commands defined in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run the shell script above when setting up the machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant. There are two provisioners:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in the Vagrantfile&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
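The &amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt; counterpart differs only in the provisioner name; Vagrant then installs and runs Ansible inside the guest. A sketch, again written via a heredoc so the fragment is easy to verify:&lt;br /&gt;

```shell
# Sketch: the ansible_local variant; only the provisioner name
# changes, and Ansible runs inside the guest instead of the host.
work="$(mktemp -d)" && cd "$work"
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  # Run Ansible from the Vagrant guest
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
EOF
grep ansible_local Vagrantfile
```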
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, eg. for a recipe named vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here the host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrantfile&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More can be found at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
To find out what version you are running, execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the Guest Additions ISO; you can browse available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# you need to mount the ISO or extract its contents, then run the installer inside the VM.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*dashes are equally invalid in block variable names; underscores are fine, so &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt; is valid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer, and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
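&lt;br /&gt;
The loop above derives each web node's IP (&amp;lt;code&amp;gt;10.0.15.2#{i}&amp;lt;/code&amp;gt;) and forwarded host port (&amp;lt;code&amp;gt;808#{i}&amp;lt;/code&amp;gt;) from the index. A quick shell sanity-check of that addressing scheme (illustrative only):&lt;br /&gt;

```bash
# Reproduce the Vagrantfile's string interpolation for web nodes 1..2:
# private IP 10.0.15.2<i> and forwarded host port 808<i>.
for i in 1 2; do
  echo "web$i -> ip 10.0.15.2$i, host port 808$i"
done
```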
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
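&lt;br /&gt;
The hosts entries above follow a fixed &amp;lt;code&amp;gt;10.0.15.2X&amp;lt;/code&amp;gt; pattern, so the heredoc could equally be generated in a loop; a minimal sketch producing the same output:&lt;br /&gt;

```bash
# Emit the same web1..web9 hosts entries as the heredoc above.
for i in $(seq 1 9); do
  printf '10.0.15.2%s  web%s\n' "$i" "$i"
done
```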
&lt;br /&gt;
&lt;br /&gt;
Git Bash path to VBoxManage - &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Choose the bootstrap script for a proxy or no-proxy system, then bring the environment up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt; to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
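&lt;br /&gt;
To script the check, the backend name can be parsed out of the response headers; a sketch, assuming HAProxy is configured to add an &amp;lt;code&amp;gt;X-Backend-Server&amp;lt;/code&amp;gt; header as in the screenshot:&lt;br /&gt;

```bash
# Fetch response headers and extract the backend that served the request.
# Falls back to "unknown" if the header is absent or the LB is down.
response=$(curl -sI http://localhost:8080 || true)
backend=$(printf '%s' "$response" | tr -d '\r' \
  | awk -F': ' 'tolower($1)=="x-backend-server" {print $2}')
echo "served by: ${backend:-unknown}"
```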
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 'hey' is a modern alternative to Apache's 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
Multi-machine setup requires three ingredients:&lt;br /&gt;
* each machine must have a different hostname&lt;br /&gt;
* a way of getting the IP address for a hostname (eg. mDNS)&lt;br /&gt;
* the VMs must be connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of getting the IP address for a hostname. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server. Resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install the &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; daemon on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
These commands did not work reliably&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and &amp;lt;code&amp;gt;vagrant ssh&amp;lt;/code&amp;gt; into it.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in the Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root, because you want to stay the ''vagrant'' user. To do this, permit anyone to start the GUI: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config # set allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can run the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows=&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with an ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users ubuntu &amp;amp; vagrant exist. Only vagrant has the insecure public key installed OOB. I am copying the vagrant user's pub key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 Vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7049</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7049"/>
		<updated>2025-08-23T11:32:23Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Box images advanced */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each of these projects has its own Vagrantfile. The Vagrantfile is a text file that Vagrant reads to set up your environment. It describes what OS to use, how much RAM, what software to install, and so on. You can version control this file.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip &amp;amp;&amp;amp; sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
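&lt;br /&gt;
The &amp;lt;code&amp;gt;VERSION=${LATEST:=2.2.18}&amp;lt;/code&amp;gt; line above relies on bash default-value expansion: &amp;lt;code&amp;gt;${var:=default}&amp;lt;/code&amp;gt; assigns the fallback only when the variable is unset or empty, so the pinned version is used when the GitHub API call fails. A minimal illustration:&lt;br /&gt;

```bash
# ${var:=default} assigns and returns the default only when var is unset/empty.
LATEST=""                      # e.g. the curl | jq pipeline returned nothing
VERSION=${LATEST:=2.2.18}
echo "$VERSION"                # -> 2.2.18

LATEST="2.3.0"                 # pipeline succeeded
VERSION=${LATEST:=2.2.18}
echo "$VERSION"                # -> 2.3.0
```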
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in the Ruby language. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp boximage: precise64, this is preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    #initialises a project &lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ssh to the box. Below is an example of nested virtualisation: a 64-bit VM (host) runs a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and the hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt; with the directory on the host containing your Vagrantfile. This can be manually mounted from within the VM as long as the shared directory is set up in the GUI. &lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update: [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stop it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and power down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Managing snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' it's machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrantfile&lt;br /&gt;
 /home/peter/projects/Vagrantfile&lt;br /&gt;
 /home/peter/Vagrantfile&lt;br /&gt;
 /home/Vagrantfile&lt;br /&gt;
 /Vagrantfile&lt;br /&gt;
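&lt;br /&gt;
The climb can be mimicked in shell; a minimal sketch (the function name is made up for illustration):&lt;br /&gt;

```bash
# Walk up from a starting directory until a Vagrantfile is found,
# the same way Vagrant locates the project file.
find_vagrantfile() {
  local dir=$1
  while :; do
    if [ -f "$dir/Vagrantfile" ]; then
      echo "$dir/Vagrantfile"
      return 0
    fi
    [ "$dir" = "/" ] && return 1   # reached the root without finding one
    dir=$(dirname "$dir")
  done
}

find_vagrantfile "$PWD" || echo "no Vagrantfile found"
```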
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
'''Private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assigned&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;, auto_config: false #optionally disable auto-configure&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks are accessible from outside of the host machine, including the Internet, and are usually '''Bridged Networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from available interfaces during the ''vagrant up'' process.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile (eg. &amp;lt;tt&amp;gt;~/git/vagrant/Vagrantfile&amp;lt;/tt&amp;gt;)&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload the virtual machine with &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and browse from the hypervisor to http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Synced folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the GuestOS. The files sync in both directions (it is a mount on the GuestOS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Synced folder''' content, as it's just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path existing on the '''host machine'''. If relative, it's relative to the project root folder (where the Vagrantfile exists); the 2nd argument is a full path to the mounted dir on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it's applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. In general, symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                       path on the host  mount on the guestOS&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;,      &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
 &lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     # ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
 &lt;br /&gt;
     # symlinks should be active in root of vm by default&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It's enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can be bound to the internal NAT vSwitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*the terminal you run WSL or PowerShell in must have elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt; set&lt;br /&gt;
*use native Bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Windows filesystem does not support Unix-style file permissions, until WSL2 is released.&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
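SSH refuses private keys with loose permissions, which is why the key must live on the WSL filesystem. A minimal sketch of the permission fix, using a temp file as a stand-in for the real key:&lt;br /&gt;

```shell
# Sketch: SSH requires private keys to be readable only by their owner.
# The temp file stands in for ~/.vagrant_key/private_key.
keydir=$(mktemp -d)
keyfile="$keydir/private_key"
printf 'dummy key material\n' > "$keyfile"   # placeholder, not a real key
chmod 600 "$keyfile"                         # owner read/write only
stat -c '%a' "$keyfile"                      # prints 600 (GNU coreutils)
```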
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vSwitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vSwitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name it vagrant-external, and press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell commands defined directly in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
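The &amp;lt;code&amp;gt;-L&amp;lt;/code&amp;gt; guard above makes the provisioner idempotent: the directory is only replaced with a symlink when the symlink does not already exist. A standalone sketch, with temp paths standing in for /vagrant and /var/www:&lt;br /&gt;

```shell
# Sketch of the symlink guard from bootstrap.sh, using temp stand-in paths.
workdir=$(mktemp -d)
mkdir "$workdir/vagrant" "$workdir/www"   # stand-ins for /vagrant, /var/www
if ! [ -L "$workdir/www" ]; then          # only replace a real directory
  rm -rf "$workdir/www"
  ln -sf "$workdir/vagrant" "$workdir/www"
fi
readlink "$workdir/www"                   # now points at the shared dir
```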
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run the shell script above when setting up the machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision :shell, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
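The escaped backticks (&amp;lt;code&amp;gt;\\`&amp;lt;/code&amp;gt;) in the heredoc survive into &amp;lt;code&amp;gt;newfile&amp;lt;/code&amp;gt; as a literal command substitution, evaluated each time the script runs. A sketch that generates and runs such a script once, with a temp dir standing in for /home/vagrant and the cron step left out:&lt;br /&gt;

```shell
# Sketch: generate a small script the way the inline provisioner does,
# then run it once instead of installing it into cron.
home=$(mktemp -d)                       # stands in for /home/vagrant
printf 'touch %s/test_`date +%%s`.txt\n' "$home" > "$home/newfile"
chmod +x "$home/newfile"
sh "$home/newfile"                      # backticks expand now: test_<epoch>.txt
ls "$home" | grep -c '^test_'           # prints 1
```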
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant, &lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create the recipe; the following directory structure is required, e.g. for a recipe named vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here the host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
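How a &amp;lt;code&amp;gt;no_proxy&amp;lt;/code&amp;gt; list is consulted can be sketched with a simplified matcher; this is an illustration only, not vagrant-proxyconf's actual logic:&lt;br /&gt;

```shell
# Simplified no_proxy matcher (illustrative; real clients also match
# domain suffixes and CIDR ranges, which this sketch does not).
no_proxy="localhost,127.0.0.1"
host="127.0.0.1"
case ",$no_proxy," in
  *",$host,"*) decision="direct" ;;   # exact match: bypass the proxy
  *)           decision="proxy"  ;;
esac
echo "$decision"                      # prints direct
```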
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify the current version, run on the host (hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrant file&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More can be found at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out which version you are running; execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
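The version number can be pulled out of that output programmatically, e.g. for comparing the guest and host versions. A small sketch on canned &amp;lt;code&amp;gt;modinfo&amp;lt;/code&amp;gt;-style output:&lt;br /&gt;

```shell
# Sketch: extract just the version number from modinfo-style output.
line='version:        6.0.10 r132072'
ver=$(echo "$line" | awk '{print $2}')   # second whitespace-separated field
echo "$ver"                              # prints 6.0.10
```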
&lt;br /&gt;
&lt;br /&gt;
Download the extension, you can explore [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount the ISO or extract its contents and run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
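The &amp;lt;code&amp;gt;cut&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;tr&amp;lt;/code&amp;gt; steps can be tried on canned &amp;lt;code&amp;gt;vboxmanage list vms&amp;lt;/code&amp;gt;-style output (the VM names and UUIDs below are made up):&lt;br /&gt;

```shell
# Sketch: the cut/sed/tr steps applied to canned `vboxmanage list vms` output.
sample='"web1" {1111-aaaa}
"web2" {2222-bbbb}'
names=$(printf '%s\n' "$sample" | cut -d ' ' -f 1 | sed 's/"//g')
printf '%s\n' "$names"                  # quotes stripped: web1, web2
oneline=$(printf '%s\n' "$names" | tr -d '\n')
printf '%s\n' "$oneline"                # newlines deleted: web1web2
```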
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*don't use symbols other than underscore in variable names; underscores are fine, '''do''': &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer, and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
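The &amp;lt;code&amp;gt;#{i}&amp;lt;/code&amp;gt; interpolation above derives the hostname, private IP, and forwarded port from the loop index. The same derivation in a plain shell loop:&lt;br /&gt;

```shell
# Sketch: hostname/IP/port derivation from the web-node loop index.
out=$(for i in 1 2; do echo "web$i 10.0.15.2$i 808$i"; done)
echo "$out"
# web1 10.0.15.21 8081
# web2 10.0.15.22 8082
```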
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
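The heredoc append can be exercised against a scratch file instead of &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; (a shortened node list is used here):&lt;br /&gt;

```shell
# Sketch: append a heredoc block of host entries to a scratch hosts file.
hosts=$(mktemp)
cat >> "$hosts" <<EOL
# vagrant environment nodes
10.0.15.10  mgmt
10.0.15.11  lb
EOL
grep -c '^10\.0\.15\.' "$hosts"   # prints 2 (the comment line is not counted)
```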
&lt;br /&gt;
&lt;br /&gt;
Gitbash path -  &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the bootstrap script for a proxy or no-proxy specific system, then bring the environment up and run the playbooks&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use the following to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/   # ApacheBench; 'hey' is a newer alternative&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires three ingredients:&lt;br /&gt;
* each machine has a different hostname&lt;br /&gt;
* a way of getting the IP address for a hostname (eg. mDNS)&lt;br /&gt;
* the VMs are connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of getting the IP address for a hostname. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server. Resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install the &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; daemon on all machines to facilitate service discovery on a local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
This approach is not working&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
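The &amp;lt;code&amp;gt;File.basename(Dir.pwd)&amp;lt;/code&amp;gt; naming trick above has a direct shell equivalent, sketched here with a temp directory standing in for the project dir:&lt;br /&gt;

```shell
# Sketch: name the machine after the current working directory,
# the shell equivalent of Ruby's File.basename(Dir.pwd).
workdir=$(mktemp -d)/myproject
mkdir -p "$workdir"
cd "$workdir"
machine_name=$(basename "$PWD")
echo "${machine_name}_vagrant"    # prints myproject_vagrant
```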
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root because you really want to stay the vagrant user. To do this you need to permit anyone to start the GUI: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config   # set: allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can replace it with the equivalent:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows=&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done]python pip: awscli, boto, boto3, etc..&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with an ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users ubuntu &amp;amp; vagrant exist. Only vagrant has the insecure public key installed OOB. I am copying the vagrant user's public key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7048</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7048"/>
		<updated>2025-08-22T11:29:03Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Create box from current project (package a box) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each project has its own Vagrantfile: a text file that Vagrant reads to set up the environment, describing which OS to use, how much RAM, what software to install, and so on. You can keep this file under version control.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip &amp;amp;&amp;amp; sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
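The &amp;lt;code&amp;gt;VERSION=${LATEST:=2.2.18}&amp;lt;/code&amp;gt; line above relies on shell parameter expansion as a version fallback; a minimal sketch of how the &amp;lt;code&amp;gt;:=&amp;lt;/code&amp;gt; operator behaves:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# ${VAR:=default} assigns the default when VAR is unset or empty,
# which keeps VERSION usable even if the GitHub API call fails.
unset LATEST
VERSION="${LATEST:=2.2.18}"
echo "$VERSION $LATEST"    # both now hold the fallback 2.2.18

LATEST="2.3.0"
VERSION="${LATEST:=2.2.18}"
echo "$VERSION"            # existing value wins: 2.3.0
```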
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in the Ruby language.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs from providers in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp boximage: precise64, this is preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    # initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises the official Ubuntu 16.04 LTS (Xenial Xerus) daily build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. The example below shows nested virtualisation: a 64-bit VM (host) running a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and the hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the host directory containing your Vagrantfile. It can be mounted manually from within the VM, as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant # first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update: [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system: it stops the guest machine, powers it down, and removes all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Managing snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrantfile&lt;br /&gt;
 /home/peter/projects/Vagrantfile&lt;br /&gt;
 /home/peter/Vagrantfile&lt;br /&gt;
 /home/Vagrantfile&lt;br /&gt;
 /Vagrantfile&lt;br /&gt;
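The climb described above can be sketched in plain shell; &amp;lt;code&amp;gt;find_vagrantfile&amp;lt;/code&amp;gt; is a hypothetical helper mimicking the lookup order, not part of Vagrant itself:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Walk from a starting directory up to / looking for a Vagrantfile,
# in the same order Vagrant searches for the project file.
find_vagrantfile() {
  local dir="$1"
  while [ "$dir" != "/" ]; do
    [ -f "$dir/Vagrantfile" ] && { echo "$dir/Vagrantfile"; return 0; }
    dir=$(dirname "$dir")
  done
  [ -f /Vagrantfile ] && { echo /Vagrantfile; return 0; }
  return 1
}
```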
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP-assigned IP address&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;, auto_config: false  # 'auto_config: false' is optional and disables auto-configuration&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks are accessible from outside the host machine, including from the Internet, and are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during ''vagrant up''.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying in the Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload the virtual machine with &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and browse from the hypervisor to http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Synced folders''' (the &amp;lt;code&amp;gt;config.vm.synced_folder&amp;lt;/code&amp;gt; setting). This feature mounts a host OS directory into the guest OS, allowing a workflow where files are edited with an IDE installed on the host machine but accessed within the guest OS. Files sync in both directions, as it is a mount on the guest OS. Remember that taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the synced folder content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path on the '''host machine'''; if relative, it is resolved against the project root folder (where the Vagrantfile lives). The 2nd argument is the full path of the mount point on the guest OS.&lt;br /&gt;
&lt;br /&gt;
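That resolution rule can be illustrated in shell; &amp;lt;code&amp;gt;resolve_host_path&amp;lt;/code&amp;gt; is a hypothetical helper showing how a relative first argument is anchored at the Vagrantfile's directory:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Relative host paths resolve against the project root (the
# directory holding the Vagrantfile); absolute paths pass through.
resolve_host_path() {
  local project_root="$1" host_path="$2"
  case "$host_path" in
    /*) echo "$host_path" ;;                # already absolute
    *)  echo "$project_root/$host_path" ;;  # relative to project root
  esac
}
```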
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. Symlinks are disabled by VirtualBox by default, as they are considered insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  #                         path on the host   mount on the guest OS&lt;br /&gt;
  config.vm.synced_folder &amp;quot;git-host/&amp;quot;, &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
    vb.name = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
    # ...&lt;br /&gt;
    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
#   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
    # symlinks should be active in the root of the vm by default&lt;br /&gt;
#   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It is enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*the WSL and Windows Vagrant versions must match&lt;br /&gt;
*make sure the terminal you run WSL or PowerShell in has elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt; set&lt;br /&gt;
*use the native bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Windows filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
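The copy is needed because OpenSSH refuses private keys with loose permissions, and pre-WSL2 Windows mounts cannot hold Unix modes. A small sketch of locking a key down; the helper name is illustrative:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Restrict a private key to owner read/write (mode 600) and echo
# the resulting octal mode so it can be verified.
secure_key() {
  chmod 600 "$1"
  stat -c '%a' "$1"    # GNU stat: prints the octal mode
}
```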
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name: vagrant-external, press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant's shell provisioner can run a script from a shared location or inline commands placed directly in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, e.g. for a recipe named vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In Vagrant file add following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the required folder structure for the Puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced=&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path of box images; it can be overridden with the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
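The effective boxes directory can be computed with the usual shell fallback idiom; &amp;lt;code&amp;gt;boxes_dir&amp;lt;/code&amp;gt; is just an illustration, not a Vagrant command:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Boxes live under $VAGRANT_HOME/boxes; when VAGRANT_HOME is unset,
# the default home is ~/.vagrant.d
boxes_dir() {
  echo "${VAGRANT_HOME:-$HOME/.vagrant.d}/boxes"
}
```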
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
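A &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is an ordinary gzipped tar archive, so standard tools can inspect it; a self-contained sketch using dummy files in a temporary directory:&lt;br /&gt;

```shell
#!/usr/bin/env bash
set -e
# Build a dummy .box to show it is just a tar.gz of the four files
workdir=$(mktemp -d)
cd "$workdir"
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -czf demo.box Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -tzf demo.box    # lists the four member files back
```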
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the changes made to the software; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Copy the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file and restore&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore. Create/re-use Vagrantfile using box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot;  # the box name as added, not the .box file&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config| &lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
    config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify the current version; run on the host (hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrantfile&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More can be found at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out which version you are running; execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the Guest Additions ISO; you can explore the available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount it or extract the contents, then run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
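The quoting and newline handling above can be tried on canned output; a minimal sketch, assuming the usual &amp;lt;code&amp;gt;&amp;quot;name&amp;quot; {uuid}&amp;lt;/code&amp;gt; format of &amp;lt;code&amp;gt;vboxmanage list vms&amp;lt;/code&amp;gt; (the VM names are hypothetical):&lt;br /&gt;

```shell
# Canned sample in the `vboxmanage list vms` output format (hypothetical VMs)
printf '"web1" {11111111-aaaa}\n"web2" {22222222-bbbb}\n' > /tmp/vms.sample

# Field 1 is the quoted name; sed strips the double quotes
cut -d ' ' -f 1 /tmp/vms.sample | sed 's/"//g'

# tr --delete '\n' drops the EOL so the next output lands on the same line
printf 'host port = 2222' | tr --delete '\n'; echo ' web1'
```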
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, so you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*don't use symbols (here underscore) in variable names, '''don't''': &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
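A safe naming pattern in plain Ruby (a sketch; the &amp;lt;code&amp;gt;node_names&amp;lt;/code&amp;gt; helper is hypothetical):&lt;br /&gt;

```ruby
# Build per-node names with string interpolation;
# identifiers use letters and digits only, no dashes
def node_names(count)
  (1..count).map { |i| "web#{i}" }
end

puts node_names(2)   # prints web1 and web2 on separate lines
```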
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
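The &amp;lt;code&amp;gt;cat &amp;gt;&amp;gt; file &amp;lt;&amp;lt;EOL&amp;lt;/code&amp;gt; idiom above appends the whole here-document in one write; a standalone sketch against a temp file instead of &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;:&lt;br /&gt;

```shell
# Append a block of host entries to a file with a here-document
hosts=$(mktemp)
echo '127.0.0.1 localhost' > "$hosts"

cat >> "$hosts" <<EOL
10.0.15.10  mgmt
10.0.15.11  lb
EOL

# The original line survives and the two new entries are appended
grep -c '10.0.15' "$hosts"
```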
&lt;br /&gt;
&lt;br /&gt;
Git Bash path - &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the bootstrap script for a proxy or no-proxy specific system, then bring up the environment&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use this to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires three ingredients:&lt;br /&gt;
* each machine has a different hostname&lt;br /&gt;
* a way of resolving a hostname to an IP address (eg. mDNS)&lt;br /&gt;
* the VMs are connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of resolving a hostname to an IP address; we use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, configuring an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
These attempts did not work&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as a taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root, because you want to remain the vagrant user. To do this, permit anyone to start the GUI: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config   # edit it to: allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can replace it with the equivalent:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows=&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done]python pip: awscli, boto, boto3, etc..&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*The official Ubuntu 16.04 box does not come with a default ''vagrant'' user but instead with an ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users, ubuntu &amp;amp; vagrant, exist. Only vagrant has insecure_pub installed OOB. I am copying the vagrant user pub key into the ubuntu user authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7047</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7047"/>
		<updated>2025-08-22T11:23:52Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Snapshots */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each of these projects has its own Vagrantfile. The Vagrantfile is a text file that Vagrant reads to set up the environment: it describes the OS, how much RAM, what software is to be installed, etc. You can version control this file.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}; &lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip &amp;amp;&amp;amp; sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
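The &amp;lt;code&amp;gt;${LATEST:=2.2.18}&amp;lt;/code&amp;gt; expansion above falls back to a pinned version when the GitHub lookup returns nothing; a standalone sketch of the &amp;lt;code&amp;gt;:=&amp;lt;/code&amp;gt; operator:&lt;br /&gt;

```shell
# := assigns the default to the variable itself when it is unset or empty
unset LATEST
VERSION=${LATEST:=2.2.18}
echo "$VERSION"   # 2.2.18 (and LATEST is now set to 2.2.18 too)

# When the variable already has a value, the default is ignored
LATEST=2.3.0
VERSION=${LATEST:=2.2.18}
echo "$VERSION"   # 2.3.0
```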
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrant''' file is written in the Ruby language. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp, box image: precise64; from the preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    #initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below is an example of nested virtualisation: a 64-bit VM (host) running a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and a hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. This can be mounted manually from within the VM as long as the shared directory is set up in the GUI. &lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update: [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Managing snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrantfile&lt;br /&gt;
 /home/peter/projects/Vagrantfile&lt;br /&gt;
 /home/peter/Vagrantfile&lt;br /&gt;
 /home/Vagrantfile&lt;br /&gt;
 /Vagrantfile&lt;br /&gt;
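The climb described above can be sketched as a small shell helper (the function name and layout are ours, not Vagrant's):&lt;br /&gt;

```shell
# Hypothetical helper mimicking Vagrant's lookup: walk up from a starting
# directory until a Vagrantfile is found, or give up at the filesystem root.
find_vagrantfile() {
  dir=$(cd "$1" && pwd)
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/Vagrantfile" ]; then
      echo "$dir/Vagrantfile"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  # last chance: the filesystem root itself
  [ -f /Vagrantfile ] && { echo /Vagrantfile; return 0; }
  return 1
}
```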
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; block.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assigned&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;,&lt;br /&gt;
  auto_config: false     # optional, disables auto-configuration&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks can be accessible from outside of the host machine, including the Internet, and are usually '''Bridged Networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during ''vagrant up''.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile, e.g. ~/git/vagrant/Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload the virtual machine with &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt;, then browse from the hypervisor to http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the GuestOS. The files sync in both directions (it is a mount on the GuestOS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path on the '''host machine'''; if relative, it is relative to the project root folder (where the Vagrantfile lives). The 2nd argument is the full path of the mount point on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. Symlinks are disabled by VirtualBox by default as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  #                       path on the host  mount on the guestOS&lt;br /&gt;
  config.vm.synced_folder &amp;quot;git-host/&amp;quot;,     &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
    vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
    ...&lt;br /&gt;
    vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
#   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
    # symlinks should be active in root of vm by default&lt;br /&gt;
#   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It is enough to specify the provider and Vagrant does the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the Internal NAT vswitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*run the WSL or PowerShell terminal with elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
*use native Bash.exe, not e.g. a ConEmu terminal, which was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Microsoft filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name it vagrant-external, and press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell commands defined directly in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant, &lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required. E.g. the recipe name is vagrant_la:&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
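The skeleton above can be created in one go (run from the project root; the recipe name vagrant_la follows the example):&lt;br /&gt;

```shell
# Create the chef_solo cookbook skeleton; a scratch directory stands in
# for the project root in this demo.
cd "$(mktemp -d)"
mkdir -p cookbooks/vagrant_la/recipes
touch cookbooks/vagrant_la/recipes/default.rb
find cookbooks -type f
```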
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced=&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path of box images; it can be overridden via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
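Since a &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is just a (usually gzipped) tar archive, its contents can be inspected with plain tar. The sketch below builds a tiny stand-in box rather than using a real one:&lt;br /&gt;

```shell
# Simulate a .box: it is only a tar archive, so build a minimal one and unpack it.
workdir=$(mktemp -d) && cd "$workdir"
echo '{"provider":"virtualbox"}' > metadata.json
touch Vagrantfile box.ovf
tar -czf package.box metadata.json Vagrantfile box.ovf
mkdir box-contents && tar -xzf package.box -C box-contents
ls box-contents
```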
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the software changes we made; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Redistribute the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file, then restore it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore. Create/re-use Vagrantfile using box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrantfile&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will find more at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out what version you are running, execute on a guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the extension, you can explore [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount it or extract the contents, then run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
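The two text-processing tricks above can be exercised in isolation:&lt;br /&gt;

```shell
# sed strips the double quotes vboxmanage puts around VM names
printf '"vm1"\n' | sed 's/"//g'          # -> vm1
# tr deletes the trailing newline so the next output lands on the same line
printf 'ssh-rule\n' | tr --delete '\n'; echo ' vm1'
```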
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt; - dashes are not valid in Ruby identifiers&lt;br /&gt;
*underscores, however, are fine in variable names: &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt; is valid&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates ''Ansible'' mgmt server, Load Balancer and Web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot strap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
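The heredoc-append technique from the bootstrap script can be tried in isolation (a temp file stands in for /etc/hosts):&lt;br /&gt;

```shell
# Append a block of host entries to a file via a heredoc, as bootstrap-mgmt.sh does
hosts_file=$(mktemp)
cat >> "$hosts_file" <<EOL
# vagrant environment nodes
10.0.15.10  mgmt
10.0.15.11  lb
EOL
grep -c '^10\.0\.15' "$hosts_file"       # -> 2
```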
&lt;br /&gt;
&lt;br /&gt;
Gitbash path -  &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Pick the bootstrap script variant for your proxy or no-proxy environment, then bring the environment up and run Ansible from the mgmt node&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use this to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # ApacheBench: 1000 requests, concurrency 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
Multi-machine setup requires 3 ingredients:&lt;br /&gt;
* each machine to have a different hostname&lt;br /&gt;
* a way of getting the IP address for a hostname (eg. mDNS)&lt;br /&gt;
* connect the VMs through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of getting the IP address for a hostname. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; on all machines to facilitate service discovery on a local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
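Whether `.local` lookups actually go through mDNS is controlled by the `hosts:` line that `libnss-mdns` adds to `/etc/nsswitch.conf`. A quick sanity check; the sample line below stands in for the real file contents (on a provisioned guest, grep the file itself):

```shell
# Real check on a guest:  grep '^hosts:' /etc/nsswitch.conf
# A typical hosts line after installing libnss-mdns looks like this:
hosts_line="hosts: files mdns4_minimal [NOTFOUND=return] dns"

# mdns4_minimal must appear before dns for .local names to resolve via mDNS
if printf '%s\n' "$hosts_line" | grep -q 'mdns4_minimal'; then
  echo "mDNS lookups enabled for .local names"
fi
```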
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
Note: the following approach was not working reliably&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally maps to a bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
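The `machineName = File.basename(Dir.pwd)` line above names the machine after the project directory; `File.basename` just returns the last path component (the `/home/dev/u18gui-1` path below is illustrative):

```ruby
# File.basename keeps only the last component of a path, so a Vagrant
# project checked out at /home/dev/u18gui-1 is named "u18gui-1".
machine_name = File.basename("/home/dev/u18gui-1")  # stands in for Dir.pwd
puts machine_name                 # u18gui-1
puts machine_name + "_vagrant"    # u18gui-1_vagrant  (VirtualBox VM name)
```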
&lt;br /&gt;
&lt;br /&gt;
Bring the VM up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root; you want to remain the ''vagrant'' user. To do this, permit anyone to start the GUI:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config # set: allowed_users=anybody&lt;br /&gt;
startxfce4 &amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can replace it with the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows =&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with an ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users, ubuntu &amp;amp; vagrant, exist. Only vagrant has the insecure public key installed OOB. I am copying the vagrant user's public key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Virtualbox&amp;diff=7046</id>
		<title>Virtualbox</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Virtualbox&amp;diff=7046"/>
		<updated>2025-08-22T09:15:38Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Install =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# 2025 onwards Ubuntu 24.04.3&lt;br /&gt;
wget -O- https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg --dearmor&lt;br /&gt;
echo &amp;quot;deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian noble contrib&amp;quot; | sudo tee /etc/apt/sources.list.d/virtualbox.list&lt;br /&gt;
sudo apt install virtualbox-7.2&lt;br /&gt;
&lt;br /&gt;
# Install extension pack&lt;br /&gt;
## Open VirtualBox Manager, Go to File &amp;gt; Preferences &amp;gt; Extensions &amp;gt; Add ...&lt;br /&gt;
&lt;br /&gt;
# Before 2025&lt;br /&gt;
echo &amp;quot;deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install virtualbox-6.1&lt;br /&gt;
sudo apt-mark hold virtualbox-6.1&lt;br /&gt;
sudo apt-mark showhold&lt;br /&gt;
sudo apt-mark unhold virtualbox-6.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resize disks in VirtualBox with Snapshots =&lt;br /&gt;
It is quite straightforward to resize a disk in VirtualBox, as stated here and there. It becomes tricky though if the virtual machine, aka VM, has snapshots attached. The virtual disk is then persisted across multiple VHD files, and the old trick will generally take no effect. This is also a [https://www.virtualbox.org/ticket/9103 known bug] that has been open for more than three years.&lt;br /&gt;
&lt;br /&gt;
The suggested approach is to delete all snapshots and wait patiently for VirtualBox Manager to merge all the VHD files for you. It is a painfully lengthy process, so I decided to take a shortcut.&lt;br /&gt;
&lt;br /&gt;
# First, shutdown the VM and backup the whole virtual machine folder.&lt;br /&gt;
# Then modify the size of all .vdi files in the root of the VM and Snapshots subdirectory.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
VBoxManage modifyhd &amp;quot;Windows 8.1.vdi&amp;quot; --resize 81920&lt;br /&gt;
for x in Snapshots/*.vdi ; do VBoxManage modifyhd $x --resize 81920 ; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Start up the VM, and you will see the unallocated space in the Disk Management utility.&lt;br /&gt;
&lt;br /&gt;
= Resize .vmdk disk on Linux =&lt;br /&gt;
Convert .vmdk format to .vdi and then resize. You can change format back after the resizing.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
VBoxManage clonehd &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot; &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot;  --format vdi  #.vmdk -&amp;gt; .vdi&lt;br /&gt;
VBoxManage clonehd &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot;  &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot; --format vmdk #.vdi  -&amp;gt; .vmdk&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resize .vdi disk on Windows =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
cd &amp;quot;C:\Program Files\Oracle\VirtualBox&amp;quot;&lt;br /&gt;
VBoxManage.exe modifyhd &amp;quot;C:\Users\piotr\VirtualBox VMs\vm-ubuntu64\vm-ubuntu64.vdi&amp;quot; --resize 20480&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Note that it can also resize the VHD (Hyper-V) file format.&lt;br /&gt;
&lt;br /&gt;
= Vagrant note =&lt;br /&gt;
* OS: Ubuntu 16.04 LTS&lt;br /&gt;
* Vagrant version 2.1.1&lt;br /&gt;
* VirtualBox: 5.1.34_Ubuntu&lt;br /&gt;
&lt;br /&gt;
Steps I have taken to resize Vagrant Ubuntu disk&lt;br /&gt;
# Stopped VM&lt;br /&gt;
# In settings removed attached drive &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot;&lt;br /&gt;
# Converted .vmdk into .vdi&lt;br /&gt;
# Attached &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot; making sure that &lt;br /&gt;
#* Controller: SCSI Controller&lt;br /&gt;
#* Hard disk is attached to: SCSI Port 0, otherwise may throw error &amp;quot;no bootable medium found&amp;quot;&lt;br /&gt;
&lt;br /&gt;
= Shrink unused space on virtual drive =&lt;br /&gt;
Hypervisor: Virtualbox, VMware&lt;br /&gt;
&lt;br /&gt;
Virtual disks can be shrunk as long as they contain ext3 or ext4 file systems.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ vagrant ssh&lt;br /&gt;
$ sudo dd if=/dev/zero of=wipefile bs=1024x1024; rm wipefile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above command simply writes zero bytes to the wipefile in chunks of 1 MiB (&amp;lt;code&amp;gt;bs=1024x1024&amp;lt;/code&amp;gt;) until there is no disk space left on your VM’s disk, then removes the wipefile. This leaves all the excess bytes zeroed out.&lt;br /&gt;
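The zero-fill trick can be demonstrated safely on a small scratch file rather than the whole disk (the `/tmp/wipefile_demo` path and the single 1 MiB block are illustrative):

```shell
# Write one 1 MiB block of zeros (bs=1024x1024 is dd's multiply syntax)
dd if=/dev/zero of=/tmp/wipefile_demo bs=1024x1024 count=1 2>/dev/null

# Count bytes that are NOT NUL; for a freshly zeroed file this is 0,
# which is exactly what the hypervisor's compaction tool looks for.
nonzero=$(tr -d '\0' < /tmp/wipefile_demo | wc -c)
echo "non-zero bytes: $nonzero"   # non-zero bytes: 0

rm /tmp/wipefile_demo             # same cleanup as 'rm wipefile' above
```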
&lt;br /&gt;
&lt;br /&gt;
This is necessary because the shrink/compaction tools provided by VMware and VirtualBox have no way of identifying space they can free up in the disks unless it is zeroed out.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With VirtualBox the only way I was able to shrink the disk image was to clone it to a smaller copy using the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ VBoxManage clonehd name-of-original-vm.vdi name-of-clone-vm.vdi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have cloned the vdi you can then import it into the VM through VirtualBox and get rid of the original vdi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With VMware you can shrink the vmdk disk by doing the following:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ vmware-vdiskmanager -d /path/to/main.vmdk&lt;br /&gt;
$ vmware-vdiskmanager -k /path/to/main.vmdk&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Installing Virtualbox guest additions =&lt;br /&gt;
Be sure to install DKMS (Dynamic Kernel Module Support) before installing the Linux Guest Additions:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install dkms&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional packages if ''dkms'' was not enough&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install linux-headers-$(uname -r) build-essential dkms # if the above was not enough&lt;br /&gt;
sudo apt-get install perl make gcc #was required for Ubuntu 18.04 LTS&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the &amp;quot;Devices&amp;quot; menu in the virtual machine's menu bar, VirtualBox has a handy menu item named &amp;quot;Insert Guest Additions CD image&amp;quot;, which mounts the Guest Additions ISO file inside your virtual machine. Then change directory to your CD-ROM and issue the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sh ./VBoxLinuxAdditions.run&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Generalize Windows =&lt;br /&gt;
If you wish to reuse your Windows VM image, it needs to be generalized:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
C:\Windows\System32\Sysprep&lt;br /&gt;
sysprep.exe /oobe /generalize /shutdown /mode:vm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Virtualbox&amp;diff=7045</id>
		<title>Virtualbox</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Virtualbox&amp;diff=7045"/>
		<updated>2025-08-22T08:28:10Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Install =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# 2025 onwards Ubuntu 24.04.3&lt;br /&gt;
wget -O- https://www.virtualbox.org/download/oracle_vbox_2016.asc | sudo gpg --yes --output /usr/share/keyrings/oracle-virtualbox-2016.gpg --dearmor&lt;br /&gt;
echo &amp;quot;deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian noble contrib&amp;quot; | sudo tee /etc/apt/sources.list.d/virtualbox.list&lt;br /&gt;
sudo apt install virtualbox-7.2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Before 2025&lt;br /&gt;
echo &amp;quot;deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install virtualbox-6.1&lt;br /&gt;
sudo apt-mark hold virtualbox-6.1&lt;br /&gt;
sudo apt-mark showhold&lt;br /&gt;
sudo apt-mark unhold virtualbox-6.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resize disks in VirtualBox with Snapshots =&lt;br /&gt;
It is quite straightforward to resize a disk in VirtualBox, as stated here and there. It becomes tricky though if the virtual machine, aka VM, has snapshots attached. The virtual disk is then persisted across multiple VHD files, and the old trick will generally take no effect. This is also a [https://www.virtualbox.org/ticket/9103 known bug] that has been open for more than three years.&lt;br /&gt;
&lt;br /&gt;
The suggested approach is to delete all snapshots and wait patiently for VirtualBox Manager to merge all the VHD files for you. It is a painfully lengthy process, so I decided to take a shortcut.&lt;br /&gt;
&lt;br /&gt;
# First, shutdown the VM and backup the whole virtual machine folder.&lt;br /&gt;
# Then modify the size of all .vdi files in the root of the VM and Snapshots subdirectory.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
VBoxManage modifyhd &amp;quot;Windows 8.1.vdi&amp;quot; --resize 81920&lt;br /&gt;
for x in Snapshots/*.vdi ; do VBoxManage modifyhd $x --resize 81920 ; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Start up the VM, and you will see the unallocated space in the Disk Management utility.&lt;br /&gt;
&lt;br /&gt;
= Resize .vmdk disk on Linux =&lt;br /&gt;
Convert .vmdk format to .vdi and then resize. You can change format back after the resizing.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
VBoxManage clonehd &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot; &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot;  --format vdi  #.vmdk -&amp;gt; .vdi&lt;br /&gt;
VBoxManage clonehd &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot;  &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot; --format vmdk #.vdi  -&amp;gt; .vmdk&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Resize .vdi disk on Windows =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
cd &amp;quot;C:\Program Files\Oracle\VirtualBox&amp;quot;&lt;br /&gt;
VBoxManage.exe modifyhd &amp;quot;C:\Users\piotr\VirtualBox VMs\vm-ubuntu64\vm-ubuntu64.vdi&amp;quot; --resize 20480&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Note that it can also resize the VHD (Hyper-V) file format.&lt;br /&gt;
&lt;br /&gt;
= Vagrant note =&lt;br /&gt;
* OS: Ubuntu 16.04 LTS&lt;br /&gt;
* Vagrant version 2.1.1&lt;br /&gt;
* VirtualBox: 5.1.34_Ubuntu&lt;br /&gt;
&lt;br /&gt;
Steps I have taken to resize Vagrant Ubuntu disk&lt;br /&gt;
# Stopped VM&lt;br /&gt;
# In settings removed attached drive &amp;quot;ubuntu-xenial-16.04-cloudimg.vmdk&amp;quot;&lt;br /&gt;
# Converted .vmdk into .vdi&lt;br /&gt;
# Attached &amp;quot;ubuntu-xenial-16.04-cloudimg.vdi&amp;quot; making sure that &lt;br /&gt;
#* Controller: SCSI Controller&lt;br /&gt;
#* Hard disk is attached to: SCSI Port 0, otherwise may throw error &amp;quot;no bootable medium found&amp;quot;&lt;br /&gt;
&lt;br /&gt;
= Shrink unused space on virtual drive =&lt;br /&gt;
Hypervisor: Virtualbox, VMware&lt;br /&gt;
&lt;br /&gt;
Virtual disks can be shrunk as long as they contain ext3 or ext4 file systems.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ vagrant ssh&lt;br /&gt;
$ sudo dd if=/dev/zero of=wipefile bs=1024x1024; rm wipefile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above command simply writes zero bytes to the wipefile in chunks of 1 MiB (&amp;lt;code&amp;gt;bs=1024x1024&amp;lt;/code&amp;gt;) until there is no disk space left on your VM’s disk, then removes the wipefile. This leaves all the excess bytes zeroed out.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is necessary because the shrink/compaction tools provided by VMware and VirtualBox have no way of identifying space they can free up in the disks unless it is zeroed out.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With VirtualBox the only way I was able to shrink the disk image was to clone it to a smaller copy using the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ VBoxManage clonehd name-of-original-vm.vdi name-of-clone-vm.vdi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you have cloned the vdi you can then import it into the VM through VirtualBox and get rid of the original vdi.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With VMware you can shrink the vmdk disk by doing the following:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ vmware-vdiskmanager -d /path/to/main.vmdk&lt;br /&gt;
$ vmware-vdiskmanager -k /path/to/main.vmdk&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Installing Virtualbox guest additions =&lt;br /&gt;
Be sure to install DKMS (Dynamic Kernel Module Support) before installing the Linux Guest Additions:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install dkms&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional packages if ''dkms'' was not enough&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install linux-headers-$(uname -r) build-essential dkms # if the above was not enough&lt;br /&gt;
sudo apt-get install perl make gcc #was required for Ubuntu 18.04 LTS&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the &amp;quot;Devices&amp;quot; menu in the virtual machine's menu bar, VirtualBox has a handy menu item named &amp;quot;Insert Guest Additions CD image&amp;quot;, which mounts the Guest Additions ISO file inside your virtual machine. Then change directory to your CD-ROM and issue the following command:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sh ./VBoxLinuxAdditions.run&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Generalize Windows =&lt;br /&gt;
If you wish to reuse your Windows VM image, it needs to be generalized:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
C:\Windows\System32\Sysprep&lt;br /&gt;
sysprep.exe /oobe /generalize /shutdown /mode:vm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7044</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7044"/>
		<updated>2025-04-22T07:20:20Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Sync using vagrant-vbguest plugin */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each project has its own Vagrantfile, a text file that Vagrant reads to set up the environment. It describes which OS to use, how much RAM, what software to install, etc. You can version-control this file.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install by downloading a release package (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}  # fall back to a pinned version if the lookup failed&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip &amp;amp;&amp;amp; sudo install vagrant /usr/local/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
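The `${LATEST:=2.2.18}` expansion in the snippet above pins a fallback version: the `:=` operator substitutes the default when `LATEST` is empty and also assigns it back to the variable. A small illustration (2.4.1 is an arbitrary example value):

```shell
LATEST=""                   # simulate the curl/jq tag lookup failing
VERSION=${LATEST:=2.2.18}   # ':=' substitutes the default AND assigns it
echo "$VERSION"             # 2.2.18

LATEST="2.4.1"              # simulate a successful lookup
VERSION=${LATEST:=2.2.18}   # LATEST is non-empty, so it is used unchanged
echo "$VERSION"             # 2.4.1
```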
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in Ruby.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp boximage: precise64, this is preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
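The same ''config'' block can also tune the provider; a minimal sketch for VirtualBox (the memory and CPU values are examples only, not Vagrant defaults):&lt;br /&gt;
&lt;br /&gt;
```ruby
# Hypothetical provider tuning; the values are illustrative examples.
Vagrant.configure("2") do |config|
  config.vm.box      = "ubuntu/bionic64"
  config.vm.hostname = "ubuntu"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024   # RAM in MB
    vb.cpus   = 2      # number of virtual CPUs
  end
end
```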
&lt;br /&gt;
&lt;br /&gt;
;Create a Vagrant project by creating a ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    #initialises a project by creating a generic Vagrantfile&lt;br /&gt;
vagrant init ubuntu/xenial64    #initialises the official Ubuntu 16.04 LTS (Xenial Xerus) daily build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only the VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports a variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below is an example of nested virtualisation: a 64-bit VM (host) runs a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and a hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. It can also be mounted manually from within the VM, as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM: spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. The fix is to update to [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system: it stops the guest machine, powers it down, and removes all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in a multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree, starting in the current directory and moving upward until it finds a Vagrantfile. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrant&lt;br /&gt;
 /home/peter/projects/Vagrant&lt;br /&gt;
 /home/peter/Vagrant&lt;br /&gt;
 /home/Vagrant&lt;br /&gt;
 /Vagrant&lt;br /&gt;
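The climb above can be sketched in plain shell; a hypothetical helper (the path is an example) that prints every location Vagrant would check, deepest first:&lt;br /&gt;
&lt;br /&gt;
```shell
# Print each candidate Vagrantfile path, walking up from a given directory.
dir=/home/peter/projects/la
while [ "$dir" != "/" ]; do
  echo "$dir/Vagrantfile"
  dir=$(dirname "$dir")   # step one directory up
done
echo "/Vagrantfile"        # the filesystem root is checked last
```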
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment, with optional &amp;lt;tt&amp;gt;auto_config: false&amp;lt;/tt&amp;gt; to disable auto-configuration&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;, auto_config: false&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks can be accessible from outside the host machine, including from the Internet, and are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of DHCP and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match your system's interface name, otherwise Vagrant will prompt you to choose from the available interfaces during ''vagrant up''.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
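Putting the variants together, a sketch of a Vagrantfile networking section (the IP address and interface name are examples):&lt;br /&gt;
&lt;br /&gt;
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  # host-only private network with a fixed address
  config.vm.network "private_network", ip: "192.168.80.5"
  # bridged public network; Vagrant prompts if 'eth1' does not exist
  config.vm.network "public_network", bridge: "eth1"
end
```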
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
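Several forwards can be declared side by side; &amp;lt;tt&amp;gt;auto_correct: true&amp;lt;/tt&amp;gt; lets Vagrant pick another host port on a collision. A sketch (the port numbers are examples):&lt;br /&gt;
&lt;br /&gt;
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network :forwarded_port, guest: 80,  host: 4567
  # auto_correct resolves host-port collisions with other VMs
  config.vm.network :forwarded_port, guest: 443, host: 4568, auto_correct: true
end
```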
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the GuestOS. The files sync in both directions (it is a mount on the GuestOS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path on the '''host machine'''; if relative, it is relative to the project root folder (where the Vagrantfile lives). The 2nd argument is the full path of the mount point on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. By default symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                       path on the host  mount on the guestOS&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;,      &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
     # symlinks should be active in the root of the vm by default&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It is enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application at a time can bind to the Internal NAT vswitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*the terminal you run WSL or PowerShell in must run with elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt; set&lt;br /&gt;
*use native Bash.exe, not eg. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Microsoft filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name: vagrant-external, press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell commands from the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
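The &amp;lt;code&amp;gt;! [ -L /var/www ]&amp;lt;/code&amp;gt; guard above is a generic idempotency idiom: replace a real directory with a symlink only on the first run. The same pattern against throwaway paths (the directory names here are just for illustration):&lt;br /&gt;
&lt;br /&gt;
```shell
# Replace a real directory with a symlink only if it is not one already.
tmp=$(mktemp -d)
mkdir "$tmp/vagrant" "$tmp/www"
if ! [ -L "$tmp/www" ]; then
  rm -rf "$tmp/www"                 # first run: drop the real directory
  ln -sf "$tmp/vagrant" "$tmp/www"  # ...and link it to the shared dir
fi
readlink "$tmp/www"                 # points at the vagrant dir
```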
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
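The provisioner accepts further options; a sketch (the playbook path and inventory group names are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```ruby
# Run Ansible from the Vagrant host with extra options.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.verbose  = "v"                            # up to "vvvv" for more detail
  ansible.groups   = { "web" => ["web1", "web2"] }  # generated inventory groups
end
```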
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, eg. for a recipe named vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the required folder structure for Puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced=&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path for box images; it can be changed via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
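A &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; is just a gzipped tarball, which you can reproduce with stub files (the file names follow the listing above, contents are empty placeholders):&lt;br /&gt;
&lt;br /&gt;
```shell
# Build a stub .box and list its contents; only the file layout is real.
work=$(mktemp -d) && cd "$work"
mkdir box && cd box
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -czf ../demo.box .        # a .box is a gzipped tar archive
cd .. && tar -tzf demo.box    # inspect it like any tarball
```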
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the changes to the software we made; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# An optional '--vagrantfile NAME' can be included that automatically restores '--include' files&lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Redistribute the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file, then restore it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore. Create/re-use Vagrantfile using box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify it is installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; in this example host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
    config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# In case of dependency issues you can temporarily disable the check&lt;br /&gt;
VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1 vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrant file&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host's VirtualBox version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More you will find at [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out which version you are running; execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the extension; you can explore the available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
#you need to mount it or extract the contents, then run the installer inside the VM.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
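The two text-processing helpers from the one-liner, shown in isolation on throwaway input:&lt;br /&gt;
&lt;br /&gt;
```shell
# sed 's/"//g' strips double quotes; tr -d '\n' joins lines together.
echo '"u18cli-1"' | sed 's/"//g'
printf 'line1\nline2\n' | tr -d '\n'; echo
```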
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*underscores, however, are valid in variable names, eg. &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt; works fine&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer, and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Gitbash path -  &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
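&lt;br /&gt;
If &amp;lt;code&amp;gt;VBoxManage&amp;lt;/code&amp;gt; is not on the Git Bash PATH, it can be added for the current session (assuming the default VirtualBox install location):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;$PATH:/c/Program Files/Oracle/VirtualBox&amp;quot;&lt;br /&gt;
VBoxManage.exe --version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;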
&lt;br /&gt;
Set the bootstrap script for a proxy or no-proxy specific system, then bring up the environment and run the playbooks:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
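&lt;br /&gt;
Before running the playbooks it is worth checking that Ansible can reach every node; a minimal sketch using the standard &amp;lt;code&amp;gt;ping&amp;lt;/code&amp;gt; module (assumes the inventory set up by the bootstrap script):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ansible all -m ping          # SSH + Python check on every inventory host&lt;br /&gt;
ansible all -a &amp;quot;uptime&amp;quot;      # run an ad-hoc command everywhere&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;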
&lt;br /&gt;
&lt;br /&gt;
Once set up, you can browse from your host machine to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt; to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
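&lt;br /&gt;
To watch HAProxy round-robin across the web nodes, repeat the header check a few times; the &amp;lt;code&amp;gt;X-Backend-Server&amp;lt;/code&amp;gt; header name is an assumption based on the screenshot above and must match the header set in the HAProxy config:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
for i in 1 2 3 4; do&lt;br /&gt;
  curl -sI http://localhost:8080 | grep -i x-backend-server&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;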
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
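&lt;br /&gt;
[https://github.com/rakyll/hey hey] is a modern load-generation alternative to &amp;lt;code&amp;gt;ab&amp;lt;/code&amp;gt; with similar flags; a minimal sketch:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
hey -n 1000 -c 1 http://10.0.2.15:80/  # -n total requests, -c concurrency&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;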
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires three ingredients:&lt;br /&gt;
* each machine must have a unique hostname&lt;br /&gt;
* there must be a way of resolving a hostname to an IP address (e.g. mDNS)&lt;br /&gt;
* the VMs must be connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of resolving a hostname to an IP address. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
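&lt;br /&gt;
Once &amp;lt;code&amp;gt;avahi-daemon&amp;lt;/code&amp;gt; is running on all machines, mDNS resolution can be verified from any VM (hostnames assumed from the Vagrantfile above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ping -c 1 web1.local&lt;br /&gt;
avahi-resolve -n web1.local  # from the avahi-utils package&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;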
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
These commands did not reliably fix the locale problem&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and &amp;lt;code&amp;gt;vagrant ssh&amp;lt;/code&amp;gt; into it.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in the Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root; you want to remain the vagrant user. To do this, permit any user to start the GUI:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo sed -i 's/^allowed_users=.*/allowed_users=anybody/' /etc/X11/Xwrapper.config&lt;br /&gt;
startxfce4 &amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can run the equivalent commands individually:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows =&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users (ubuntu and vagrant) exist; only vagrant has the insecure public key installed OOB. I am copying the vagrant user's public key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
***[https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7043</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7043"/>
		<updated>2025-02-11T09:41:29Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install kubectl */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install kubectl ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.32.1&lt;br /&gt;
v1.31.5&lt;br /&gt;
v1.30.9&lt;br /&gt;
v1.29.13&lt;br /&gt;
v1.28.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.29.13; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://dl.k8s.io/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: 'sudo install' is equivalent to: chmod +x ./kubectl; sudo mv ./kubectl /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/-1 minor version of the api-server&lt;br /&gt;
kubectl version &lt;br /&gt;
Client Version: v1.29.13&lt;br /&gt;
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3&lt;br /&gt;
Server Version: v1.29.12-gke.1120001&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
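&lt;br /&gt;
Optionally verify the downloaded binary against its published SHA-256 checksum (same &amp;lt;code&amp;gt;VERSION&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;ARCH&amp;lt;/code&amp;gt; as above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -LO &amp;quot;https://dl.k8s.io/release/$VERSION/bin/linux/$ARCH/kubectl.sha256&amp;quot;&lt;br /&gt;
echo &amp;quot;$(cat kubectl.sha256)  kubectl&amp;quot; | sha256sum --check&lt;br /&gt;
# kubectl: OK&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;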
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
kubectl plugin called [https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke gke-gcloud-auth-plugin]&lt;br /&gt;
* [https://cloud.google.com/sdk/docs/install#deb Install Google Cloud SDK]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install apt-transport-https ca-certificates gnupg curl&lt;br /&gt;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list&lt;br /&gt;
sudo apt-get update &lt;br /&gt;
sudo apt-get install google-cloud-cli # required to authenticate with GCP&lt;br /&gt;
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin&lt;br /&gt;
gcloud init&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
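&lt;br /&gt;
To confirm which context and default namespace are currently active:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl config current-context&lt;br /&gt;
kubectl config view --minify | grep namespace:&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;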
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
The minimum &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; version required is v2.x (the python-yq jq wrapper, hence the &amp;lt;code&amp;gt;-y&amp;lt;/code&amp;gt; flag); tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
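&lt;br /&gt;
To check the key was added (python-yq/jq syntax, matching the command above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -r '.clusters[0].cluster.&amp;quot;proxy-url&amp;quot;' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;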
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, an almost clean manifest. The '--export' flag was deprecated and later removed in kubectl v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c sleep&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod may be prefixed with a namespace; the destination file name (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
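&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; uses &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; under the hood, so the container must have &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; installed; for recursive copies an explicit tar pipeline is often more predictable (the path here is illustrative):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Copy a whole directory out of the pod via tar&lt;br /&gt;
kubectl exec -n vegeta vegeta-5847d879d8-p9kqw -c vegeta -- tar cf - /var/log | tar xf -&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;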
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
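&lt;br /&gt;
The same &amp;lt;code&amp;gt;/dev/tcp&amp;lt;/code&amp;gt; trick wrapped in a small reusable function (bash-only, not POSIX sh):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
port_open() {  # usage: port_open HOST PORT&lt;br /&gt;
  if timeout 1 bash -c &amp;quot;&amp;lt;/dev/tcp/$1/$2&amp;quot; 2&amp;gt;/dev/null; then echo OPEN; else echo CLOSED; fi&lt;br /&gt;
}&lt;br /&gt;
port_open 127.0.0.1 22&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;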
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML manifests&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If you are ever missing a command, you can use a Docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check the kubelet non-secure metrics endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
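&lt;br /&gt;
The exit status of &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt; can drive scripts: 0 means no differences, 1 means differences were found, greater than 1 means an error occurred. A sketch:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
if kubectl diff -f webfront-deploy.yaml &amp;gt;/dev/null; then&lt;br /&gt;
  echo &amp;quot;live objects match the manifest&amp;quot;&lt;br /&gt;
else&lt;br /&gt;
  echo &amp;quot;drift detected (or kubectl error)&amp;quot;&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;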
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install the [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins; it requires K8s v1.12+.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
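&lt;br /&gt;
As the article above describes, kubectl discovers a plugin simply as an executable named &amp;lt;code&amp;gt;kubectl-&amp;lt;name&amp;gt;&amp;lt;/code&amp;gt; somewhere on &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt;. A minimal sketch of a hello-world plugin (the plugin name &amp;lt;code&amp;gt;hello&amp;lt;/code&amp;gt; is just an example):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Create an executable called kubectl-hello on the PATH&lt;br /&gt;
cat &amp;lt;&amp;lt;'EOF' | sudo tee /usr/local/bin/kubectl-hello&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
echo &amp;quot;Current context: $(kubectl config current-context)&amp;quot;&lt;br /&gt;
EOF&lt;br /&gt;
sudo chmod +x /usr/local/bin/kubectl-hello&lt;br /&gt;
&lt;br /&gt;
kubectl hello        # kubectl dispatches to kubectl-hello&lt;br /&gt;
kubectl plugin list  # the new plugin is listed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;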
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
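&lt;br /&gt;
Usage sketch; the context and namespace names below are examples:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl ctx             # list contexts&lt;br /&gt;
kubectl ctx dev-eks     # switch to the 'dev-eks' context&lt;br /&gt;
kubectl ctx -           # switch back to the previous context&lt;br /&gt;
kubectl ns              # list namespaces&lt;br /&gt;
kubectl ns kube-system  # make 'kube-system' the default namespace&lt;br /&gt;
kubectl ns -            # switch back to the previous namespace&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;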
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading the cluster. It uses the swagger.json available in the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report; the script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above opens Wireshark. Interesting articles to follow:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print a sanitized Kubernetes manifest, stripping server-side clutter such as status and managed fields.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| The https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the new community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt;  }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using this external tool against clusters with hundreds of containers can put significant load on the kube-apiserver.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It re-uses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # the pod-query is a regex, so 'busybox' matches busybox1, busybox2, etc.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
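&lt;br /&gt;
A usage sketch; the pod name patterns, container and namespace below are examples:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubetail webfront                # tail all pods whose name matches 'webfront'&lt;br /&gt;
kubetail webfront -c nginx       # only the 'nginx' container&lt;br /&gt;
kubetail webfront,gateway        # multiple pod name patterns&lt;br /&gt;
kubetail -l app=webfront -n dev  # select pods by label in namespace 'dev'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;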
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
Kubernetes client; unlike a dashboard, it does not need installing on a cluster. Similar to KUI but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenience steps below into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and optionally symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
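&lt;br /&gt;
A sketch of installing the controller and its kubectl plugin, following the release-download pattern used elsewhere on this page; the rollout name in the last command is an example:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the controller into its own namespace&lt;br /&gt;
kubectl create namespace argo-rollouts&lt;br /&gt;
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml&lt;br /&gt;
&lt;br /&gt;
# Install the kubectl plugin&lt;br /&gt;
REPO=argoproj/argo-rollouts&lt;br /&gt;
curl -LO https://github.com/$REPO/releases/latest/download/kubectl-argo-rollouts-linux-amd64&lt;br /&gt;
sudo install kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts&lt;br /&gt;
&lt;br /&gt;
# Watch a Rollout progressing through its steps&lt;br /&gt;
kubectl argo rollouts get rollout my-rollout --watch&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;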
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. No dependencies, and nothing to install on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts allow you to troubleshoot and check the health status of the cluster and deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
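&lt;br /&gt;
Several of these checks can also be reproduced with plain &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; one-liners, for example (the NODE column position in the second command may vary between kubectl versions):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pods not in running or completed status&lt;br /&gt;
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded&lt;br /&gt;
&lt;br /&gt;
# Pods per node&lt;br /&gt;
kubectl get pods -A -o wide --no-headers | awk '{print $8}' | sort | uniq -c | sort -rn&lt;br /&gt;
&lt;br /&gt;
# Nodes conditions&lt;br /&gt;
kubectl describe nodes | grep -A8 'Conditions:'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;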
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
= [https://github.com/weaveworks/kubediff kubediff] show diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS (Secrets OPerationS) is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops # repo moved from mozilla/sops to getsops/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
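Creation rules usually live in a &amp;lt;code&amp;gt;.sops.yaml&amp;lt;/code&amp;gt; file at the repository root; a minimal sketch (the KMS ARN and PGP fingerprint below are placeholders, not real values):&lt;br /&gt;

```yaml
# .sops.yaml - maps file paths to encryption keys (all values are placeholders)
creation_rules:
  # secrets under k8s/ are encrypted with an AWS KMS key
  - path_regex: k8s/.*\.yaml
    kms: arn:aws:kms:eu-west-1:111111111111:key/00000000-0000-0000-0000-000000000000
  # everything else falls back to a PGP key fingerprint
  - pgp: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
```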
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a tool that takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Lets you view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s-node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and publishes matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
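A minimal Canary resource sketch; the target name, port and thresholds below are illustrative assumptions, not values from this page:&lt;br /&gt;

```yaml
# Canary sketch: Flagger shifts traffic to the new version in steps,
# promoting only while the success-rate metric stays above the threshold.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m        # how often metrics are checked
    threshold: 5        # failed checks before rollback
    maxWeight: 50       # max traffic percentage sent to the canary
    stepWeight: 10      # traffic increase per step
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```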
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Repeat the command below once per CPU core&lt;br /&gt;
grep -c ^processor /proc/cpuinfo # count processors (or: nproc)&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
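To saturate every core rather than a single one, the same trick can be driven by the processor count; a throwaway sketch using xargs so no manual job cleanup is needed (nproc, xargs and timeout are assumed present, as on most Linux images):&lt;br /&gt;

```shell
# Saturate every core: one 'yes' busy loop per logical CPU, stopped after 2s.
CPUS=$(nproc)   # same count as grepping /proc/cpuinfo
# xargs -P runs CPUS busy loops in parallel; timeout kills each after 2s,
# so '|| true' swallows the expected non-zero exit status.
seq "$CPUS" | xargs -P "$CPUS" -I_ timeout 2 sh -c 'yes > /dev/null' || true
echo "loaded $CPUS cores for 2 seconds"
```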
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switchers for changing context and setting the default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b managing different kubectl versions] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7042</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7042"/>
		<updated>2025-02-11T09:40:31Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install kubectl */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install kubectl ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.32.1&lt;br /&gt;
v1.31.5&lt;br /&gt;
v1.30.9&lt;br /&gt;
v1.29.13&lt;br /&gt;
v1.28.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://dl.k8s.io/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: 'sudo install' is equivalent to: chmod +x ./kubectl &amp;amp;&amp;amp; sudo mv ./kubectl /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/- one minor version of the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
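The long jq pipeline above (keep only the newest patch release of each minor) can be sanity-checked offline by feeding it canned release JSON instead of the GitHub API:&lt;br /&gt;

```shell
# Feed the pipeline canned release data instead of the GitHub API:
printf '%s' '[{"prerelease":false,"tag_name":"v1.30.2"},
             {"prerelease":true, "tag_name":"v1.31.0-rc.1"},
             {"prerelease":false,"tag_name":"v1.30.9"},
             {"prerelease":false,"tag_name":"v1.29.13"}]' |
jq -r '[.[] | select(.prerelease == false) | .tag_name]
       | map(sub("^v";"")) | map(split("."))
       | group_by(.[0:2]) | map(max_by(.[2]|tonumber))
       | map(join(".")) | map("v" + .) | sort | reverse | .[]'
# prints v1.30.9 then v1.29.13: pre-releases dropped, older patches collapsed
```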
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
kubectl plugin called [https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke gke-gcloud-auth-plugin]&lt;br /&gt;
* [https://cloud.google.com/sdk/docs/install#deb Install Google Cloud SDK]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install apt-transport-https ca-certificates gnupg curl&lt;br /&gt;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list&lt;br /&gt;
sudo apt-get update &lt;br /&gt;
sudo apt-get install google-cloud-cli # required to authenticate with GCP&lt;br /&gt;
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin&lt;br /&gt;
gcloud init&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
Minimum yq version required is v2.x; tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
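After the update, the first cluster entry gains a &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; key; the resulting fragment looks roughly like this (server address illustrative):&lt;br /&gt;

```yaml
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s-api.acme.com:6443
    proxy-url: http://proxy.acme.com:8080   # added by the yq command above
  name: kubernetes
```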
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
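The eval/brace-expansion trick above works because the shell first expands the quoted fragments into one command string per context/namespace combination, and eval then executes them; demonstrated safely with echo:&lt;br /&gt;

```shell
# Brace expansion builds the cross-product of contexts and namespaces;
# eval then executes one command per combination (echo used here for safety).
eval 'echo kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'
# -> kubectl --context=context1 --namespace=ns1 get pod
#    kubectl --context=context1 --namespace=ns2 get pod
#    kubectl --context=context2 --namespace=ns1 get pod
#    kubectl --context=context2 --namespace=ns2 get pod
```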
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, almost a clean manifest. '--export' was deprecated in v1.14 and removed in v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c 'sleep 7200'&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep 7200&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod path should be prefixed with its namespace, and the destination file name (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
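The /dev/tcp probe can be wrapped in a small reusable function (&amp;lt;code&amp;gt;port_open&amp;lt;/code&amp;gt; is a name made up here, not a standard command):&lt;br /&gt;

```shell
# Pure-bash TCP probe - works in containers that lack ping, nc and telnet.
port_open() {                     # usage: port_open HOST PORT
  # opening the pseudo-device /dev/tcp/HOST/PORT succeeds only if the port accepts
  timeout 1 bash -c "</dev/tcp/$1/$2" 2>/dev/null
}

if port_open 127.0.0.1 22; then echo "PORT OPEN"; else echo "PORT CLOSED"; fi
```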
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML manifests&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If you are ever missing a command, you can use a docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl -- http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check kubelet-metrics non secure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all the deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading. It uses the swagger.json available in the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report; the script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above will open Wireshark. Interesting articles to follow:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print a sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140], the new community maintain repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt;  }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using this external tool against clusters with hundreds of containers can put significant load on the kube-apiserver.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It re-uses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
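The pod-query is an ordinary extended regular expression, so an alternation like the one above can be sanity-checked locally with grep -E before pointing stern at a cluster (the pod names below are made up):&lt;br /&gt;

```shell
# Check which sample pod names the '(proxy|gateway)' pod-query would match
printf '%s\n' proxy-7d4f8 gateway-abc12 webapp-x1 | grep -E '(proxy|gateway)'
# -> proxy-7d4f8
# -> gateway-abc12
```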
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; #this is RegEx that matches busybox1|2|etc&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A standalone Kubernetes client; it is not a dashboard that needs installing on a cluster. Similar to Kui but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenient install script into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt;, then symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Kubernetes plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
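A minimal sketch of a canary Rollout manifest, following the shapes in the upstream docs (all names, the image, and the step weights are hypothetical):&lt;br /&gt;

```yaml
# Hypothetical canary Rollout: 20% -> manual pause -> 60% -> 30s pause -> full
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-rollout        # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.21  # hypothetical image
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {}          # waits for manual promotion
        - setWeight: 60
        - pause: {duration: 30s}
```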
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and requires nothing to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts allow you to troubleshoot and check the health status of the cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
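With kind, a multi-node layout is just a small config file; a sketch with one control-plane node and two workers (the filename is arbitrary):&lt;br /&gt;

```yaml
# kind-multinode.yaml - one control-plane node plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create the cluster with &amp;lt;code&amp;gt;kind create cluster --config kind-multinode.yaml&amp;lt;/code&amp;gt;&lt;br /&gt;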
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
= [https://github.com/weaveworks/kubediff kubediff] show diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS: Secrets OPerationS, sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
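Once installed, sops picks up encryption rules from a &amp;lt;code&amp;gt;.sops.yaml&amp;lt;/code&amp;gt; file next to your secrets. A sketch (the KMS ARN and path regex are made-up placeholders):&lt;br /&gt;

```yaml
# Hypothetical .sops.yaml: encrypt only the data/stringData keys of secret manifests
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$
    kms: arn:aws:kms:eu-west-1:111122223333:key/11111111-2222-3333-4444-555555555555
```

Encrypt in place with &amp;lt;code&amp;gt;sops --encrypt --in-place secret.yaml&amp;lt;/code&amp;gt;&lt;br /&gt;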
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a tool that takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Allows you to view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys a &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset that watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s-node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and publishes matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
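Outside the cluster, the prefix filtering the tailer performs can be sketched with a plain grep over a captured log line (the sample line below is made up for illustration, reusing the &amp;lt;code&amp;gt;my-prefix:&amp;lt;/code&amp;gt; value from the daemonset env above):&lt;br /&gt;

```shell
# Hypothetical iptables log line carrying the "my-prefix:" log prefix
line='Apr  5 10:00:00 node1 kernel: my-prefix: IN=eth0 OUT= SRC=10.0.0.5 DST=10.0.1.7 PROTO=TCP DPT=443'

# Keep only lines carrying the prefix, then pull out source and destination
echo "$line" | grep 'my-prefix:' | grep -oE 'SRC=[^ ]+|DST=[^ ]+'
```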
= [https://github.com/eldadru/ksniff ksniff] - pipe pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the already existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run 'yes' once per CPU core to saturate all cores&lt;br /&gt;
grep -c processor /proc/cpuinfo # count processors&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
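A loop version of the above that starts one &lt;code&gt;yes&lt;/code&gt; per core and cleans up after itself (a sketch; &lt;code&gt;nproc&lt;/code&gt; from coreutils is assumed to be available):&lt;br /&gt;

```shell
cores=$(nproc)          # same count as the /proc/cpuinfo pipeline above
for i in $(seq "$cores"); do
  yes > /dev/null &     # one busy process per core
done
sleep 1                 # hypothetical observation window
kill $(jobs -p)         # stop the load generators again
```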
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b manages different ver kubectl] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7041</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7041"/>
		<updated>2024-11-07T23:00:12Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Syntax Terraform ~0.11 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about utilising a tool from HashiCorp called Terraform to build infrastructure as code - IaC.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| most of the paragraphs show examples in the pre-0.12 Terraform syntax that uses HCLv1. HCLv2, introduced with v0.12+, contains significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv&lt;br /&gt;
echo &amp;quot;[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/&amp;quot; &amp;gt;&amp;gt; ~/.bashrc # or ~/.bash_profile&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install 1.0.6&lt;br /&gt;
tfenv use 1.0.6&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode with 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with enabled Language Server; open the command pallet with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs it looks for .tf files where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory it runs from. Therefore, if you wish to reference a common file, a symbolic link needs to be created inside the directory holding your .tf files.&lt;br /&gt;
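The symbolic-link workaround can be sketched like this (hypothetical directory names):&lt;br /&gt;

```shell
workdir=$(mktemp -d) && cd "$workdir"

# A shared file living outside the stack directory...
mkdir -p common stack-a
echo 'variable "region" { default = "eu-west-1" }' > common/variables.tf

# ...linked into the directory Terraform actually runs from,
# because the .tf lookup never leaves that directory
ln -s ../common/variables.tf stack-a/variables.tf
cat stack-a/variables.tf
```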
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x major changes and features have been introduced, including the split of the providers binary: each provider is now a separate binary. See the example below for the Azure provider and other providers developed by HashiCorp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example, how to source credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
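The JSON that &lt;code&gt;vault read -format=json&lt;/code&gt; prints can then be turned into the ARM_* exports. A sketch using a canned response so it runs without Vault; the sed extraction stands in for the jq calls above, and the field values are made up:&lt;br /&gt;

```shell
# Canned stand-in for: vault read -format=json ... secret/azure/subscription
json='{"data":{"subscription_id":"sub-123","tenant_id":"ten-456"}}'

# Extract the fields and export them the way the Azure provider expects
export ARM_SUBSCRIPTION_ID=$(echo "$json" | sed -n 's/.*"subscription_id":"\([^"]*\)".*/\1/p')
export ARM_TENANT_ID=$(echo "$json" | sed -n 's/.*"tenant_id":"\([^"]*\)".*/\1/p')
echo "$ARM_SUBSCRIPTION_ID $ARM_TENANT_ID"
```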
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many combinations for setting up the backend and AWS credentials. It is important to understand that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it only uses the backend's own configuration. Options:&lt;br /&gt;
* exporting credentials, which allows working with assume roles that differ between the backend and terraform blocks&lt;br /&gt;
* specifying a different &amp;lt;code&amp;gt;profile = &amp;lt;/code&amp;gt; in each block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## profile allows assumes roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume roles (eg. ) in other accounts:&lt;br /&gt;
#          | * arn:aws:iam::111111111111:role/terraform-s3state              - save state in s3 bucket&lt;br /&gt;
#          | * arn:aws:iam::222222222222:role/terraform-crossaccount-admin   - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# unset all of them if needed&lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
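The &lt;code&gt;unset ${!AWS@}&lt;/code&gt; trick works because bash expands &lt;code&gt;${!PREFIX@}&lt;/code&gt; to every shell variable name starting with that prefix; a quick sketch with dummy values:&lt;br /&gt;

```shell
export AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=us-east-1

echo "${!AWS@}"     # expands to all variable names beginning with AWS

unset "${!AWS@}"    # unsets them all in one go
echo "after unset: '${AWS_ACCESS_KEY_ID-}'"
```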
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
# profile &amp;quot;dev-us&amp;quot; # we use 'role_arn' but could specify aws profile instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; { # NB: backend blocks cannot interpolate variables - shown for readability, in practice pass these via 'terraform init -backend-config=...'&lt;br /&gt;
    bucket  = &amp;quot;tfstate-${var.project}-${var.account-id}&amp;quot; # must exist beforehand&lt;br /&gt;
    key     = &amp;quot;terraform/aws/${var.project}/tfstate&amp;quot;     # this could be much simpler when working with terraform workspaces&lt;br /&gt;
    region  = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in an infra account that the s3 state exists&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use profiles but instead we use 'assume_role' option. Also on your laptop &lt;br /&gt;
## it should be your creds profile eg. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role {&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot;       # assume role in target account&lt;br /&gt;
  # role_arn  = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # or build the ARN from a variable&lt;br /&gt;
  }&lt;br /&gt;
  region  = &amp;quot;${var.aws_region}&amp;quot;&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
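To roll the same change through both accounts, the select-and-apply pair can be looped over the workspaces. A sketch; the terraform invocations are commented out so the snippet runs stand-alone:&lt;br /&gt;

```shell
for ws in dev prod; do
  varfile="$ws.tfvars"      # dev.tfvars / prod.tfvars from above
  echo "workspace $ws -> $varfile"
  # terraform workspace select "$ws"
  # terraform apply --var-file "$varfile"
done
```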
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
The symbols that appear in a plan output mean:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
The apply stage, when run for the first time, creates terraform.tfstate once all changes are done. This file should not be modified manually. It records what already exists in the cloud, so the next time the apply stage runs it compares against the file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated the terraform.tfstate looks like below:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= AWS credentials profiles and variable files=&lt;br /&gt;
Instead of referencing secret access keys within the .tf file directly, we can use an AWS profile file. This file is consulted for the profile variable we specify in the variables.tf file. Note: there are '''no double quotes'''.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can now remove the secret access keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version    =   &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and create a variable file to reference it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} #a variable without a default value will prompt you to type in the value; here it should be 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #this way value can be set&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
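Besides &lt;code&gt;-var&lt;/code&gt;, an input variable can also be supplied through a &lt;code&gt;TF_VAR_&amp;lt;name&amp;gt;&lt;/code&gt; environment variable, which is handy in CI. A sketch; the terraform invocation is commented out so the snippet runs stand-alone:&lt;br /&gt;

```shell
# Equivalent to: terraform plan -var 'profile=terraform-profile1'
export TF_VAR_profile=terraform-profile1
# terraform plan   # Terraform maps TF_VAR_profile onto variable "profile"
echo "$TF_VAR_profile"
```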
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*the ~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove the &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt; also known as a variable file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the ssh private key file; ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;...omitted...&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run the destroy command to delete all resources that were created.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It is not possible to taint a whole module.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint 'module.MODULE_NAME.aws_instance.main[0]'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Use Ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, because it is not obvious how to store the state of a local-exec Ansible run. It could be set to always run, as Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
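A minimal sketch of the pattern, assuming a hypothetical &amp;lt;code&amp;gt;playbook.yml&amp;lt;/code&amp;gt; next to the configuration; the &amp;lt;code&amp;gt;timestamp()&amp;lt;/code&amp;gt; trigger forces the provisioner to re-run on every apply:&lt;br /&gt;

```terraform
# Sketch only: run an Ansible playbook against the instance after creation.
# "playbook.yml" and the one-host inventory string are hypothetical.
resource "null_resource" "ansible" {
  triggers = {
    always_run = timestamp() # re-run on every apply; rely on playbook idempotency
  }
  provisioner "local-exec" {
    command = "ansible-playbook -i '${aws_instance.webserver.public_ip},' playbook.yml"
  }
}
```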
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often you need to inspect a data structure that is the output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, a &amp;lt;tt&amp;gt;data&amp;lt;/tt&amp;gt; source, or simply a template whose computation is hidden and not always displayed on screen. You can use the following techniques to inspect your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - an empty virtual resource that can run arbitrary commands&lt;br /&gt;
* '''Problem statement:''' display a computed Terraform &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to render the template; the rendered template is then shown in the &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt; output. If the template is a JSON policy, an invalid policy makes the resource fail without showing why. The plan shows the object being constructed; after &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; it can be saved into the state file as an output variable, and the object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and ouput&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
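A quick way to confirm a rendered template is valid JSON, before Terraform sends it to AWS, is to parse a saved copy with &amp;lt;code&amp;gt;python3 -m json.tool&amp;lt;/code&amp;gt;. A sketch; the file name and the stand-in content are hypothetical:&lt;br /&gt;

```shell
# Write a minimal stand-in for the rendered policy, then parse it.
printf '%s' '{"Version": "2012-10-17", "Statement": []}' > /tmp/rendered.json
python3 -m json.tool /tmp/rendered.json > /dev/null && echo "valid JSON"
```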
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable logging to a file in Terraform, convert the log file to PDF, and use sheri.ai to analyse it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
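Before converting, the trace log can be triaged with &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt;. A self-contained sketch; the log lines below are a made-up stand-in, as real trace output varies by Terraform version:&lt;br /&gt;

```shell
# Create a tiny stand-in log so the example runs anywhere.
printf '%s\n' \
  '2023-01-01T00:00:00Z [TRACE] provider: start' \
  '2023-01-01T00:00:01Z [ERROR] provider: AccessDenied calling kms:CreateKey' \
  > /tmp/tflogs.log
# Count error lines - a quick signal of where to look in the full log.
grep -c '\[ERROR\]' /tmp/tflogs.log   # prints 1
```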
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt; ==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. It is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. Terraform console will read the configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
In Linux, add the line below after the shebang&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now you can go and open System Logs in AWS Console to view user-data script logs.&lt;br /&gt;
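The idea behind that line can be illustrated with the &amp;lt;code&amp;gt;tee&amp;lt;/code&amp;gt; part alone; the &amp;lt;code&amp;gt;logger&amp;lt;/code&amp;gt; syslog/console forwarding is omitted so this sketch runs anywhere:&lt;br /&gt;

```shell
#!/bin/bash
# Duplicate script output: it goes to stdout AND into a log file via tee.
LOG=/tmp/user-data-demo.log
{ echo "installing nginx"; echo "starting nginx"; } | tee "$LOG"
grep -c "nginx" "$LOG"   # prints 2 - both lines were captured in the log
```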
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visual graph file. You may need to install Graphviz (&amp;lt;code&amp;gt;sudo apt-get install graphviz&amp;lt;/code&amp;gt;) if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; flag causes Terraform to mark the arrows that are part of the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other colour you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your configuration to avoid cross-module references. Instead of directly accessing an output of one module from inside another, set it up as an input parameter and wire everything together at the top level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
With a cyclic dependency issue, study the graph, then decide which resource to remove from the state - one that will be generated again later. If the graph is not clear or is too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create s3 bucket with unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure terraform:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
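Note that &amp;lt;code&amp;gt;terraform remote config&amp;lt;/code&amp;gt; is the pre-0.9 CLI; from Terraform 0.9 onwards the backend is declared in configuration and initialised with &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;. A sketch using the same placeholder names as above:&lt;br /&gt;

```terraform
# Terraform 0.9+ equivalent: declare the backend, then run `terraform init`.
terraform {
  backend "s3" {
    bucket  = "YOUR_BUCKET_NAME"
    key     = "terraform.tfstate"
    region  = "YOUR_BUCKET_REGION"
    encrypt = true
  }
}
```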
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add a &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to the backend configuration.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with index &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. When a lock is taken, an entry similar to the one below is created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform's automatic state upgrading process, so it is best to do this with the same Terraform CLI version you have most recently been using against this workspace, so that the state is not implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via the CLI&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definitions files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
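&lt;br /&gt;
For illustration, a minimal &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; file (reusing the variable names from the CLI examples above) uses plain &amp;lt;code&amp;gt;name = value&amp;lt;/code&amp;gt; assignments and is loaded automatically, with no &amp;lt;code&amp;gt;-var-file&amp;lt;/code&amp;gt; flag needed:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# terraform.tfvars&lt;br /&gt;
image_id      = &amp;quot;ami-abc123&amp;quot;&lt;br /&gt;
image_id_list = [&amp;quot;ami-abc123&amp;quot;, &amp;quot;ami-def456&amp;quot;]&lt;br /&gt;
image_id_map  = {&lt;br /&gt;
  us-east-1 = &amp;quot;ami-abc123&amp;quot;&lt;br /&gt;
  us-east-2 = &amp;quot;ami-def456&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;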
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute usually has user-defined keys, as in the tags example&lt;br /&gt;
* a nested block always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ for_each, and the new syntax that no longer requires the &amp;quot;${var.vpc_cidr}&amp;quot; interpolation wrapper; plain &amp;lt;code&amp;gt;var.vpc_cidr&amp;lt;/code&amp;gt; is allowed&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags =  {           #&amp;lt;-note of '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = list(object({ from_port = number, to_port = number }))&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    for_each = { for user in local.users: user.name =&amp;gt; user.is_enabled }&lt;br /&gt;
    triggers = {    # null_resource has no 'connection' attributes; triggers stores the values&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = each.value&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way as the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, such as &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules, with an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluating to an object value whose attributes correspond to the module's named outputs.&lt;br /&gt;
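&lt;br /&gt;
A minimal sketch (module path and names are illustrative): the called module returns the whole resource, and the caller reads its attributes:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# modules/example/outputs.tf&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# root module&lt;br /&gt;
module &amp;quot;example&amp;quot; {&lt;br /&gt;
  source = &amp;quot;./modules/example&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;vpc_cidr&amp;quot; {&lt;br /&gt;
  value = module.example.vpc.cidr_block   # whole-resource output, then its attribute&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;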
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
This is mostly used for transforming pre-existing lists and maps rather than generating new ones. For example, we can convert all elements in a list of strings to upper case using the expression below.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The for expression iterates over each element of the list and returns upper(i) for each element, in the form of a new list. We can also use a for expression to generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, we can use ''if'' as a filter in a ''for'' expression (this filter form is separate from the ternary conditional used earlier). The expression below returns a list of all non-empty elements in their uppercase form, so each element of the result corresponds to the uppercase version of an original element.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
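&lt;br /&gt;
Evaluated in &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt; with illustrative values, the filter drops the empty string:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&amp;gt; [for i in [&amp;quot;a&amp;quot;, &amp;quot;&amp;quot;, &amp;quot;b&amp;quot;] : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
[&lt;br /&gt;
  &amp;quot;A&amp;quot;,&lt;br /&gt;
  &amp;quot;B&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;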
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing items whose string value does not match a regex expression.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of array the map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for k, v' syntax builds a new list 'validation_domains' by iterating over the array of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options' and keeps an entry only if its domain name,&lt;br /&gt;
# with any &amp;quot;*.&amp;quot; prefix stripped, matches &amp;quot;dev.example.com&amp;quot;. tomap(v) is required to persist the type&lt;br /&gt;
# across the for expression. Note that contains() expects a list as its first argument.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options : tomap(v) if contains(&lt;br /&gt;
      [&amp;quot;dev.example.com&amp;quot;], replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for domain' expression builds a new list containing only the domains that match the regexall pattern.&lt;br /&gt;
# It checks that the regexall length (the number of matches) is &amp;gt; 0, which yields true or false, so&lt;br /&gt;
# the 'for domain : ... if' statement conditionally adds the item to the new list&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above but iterating over array of maps (k,v - key, value pairs)&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the array of maps 'aws_acm_certificate.main.domain_validation_options' to build&lt;br /&gt;
# a list of fqdns that are stored under the '.resource_record_name' key.&lt;br /&gt;
# On each iteration of the 'for fqdn' syntax, 'fqdn = aws_acm_certificate.main.domain_validation_options[index]';&lt;br /&gt;
# anything after ':' is the value the result element is set to, i.e. fqdn.resource_record_name&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== function: replace, regex ==&lt;br /&gt;
The snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match any line that starts with '#' or 'whitespace(s) + #'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the search pattern:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
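&lt;br /&gt;
Such a pattern can be checked quickly in &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt; (the input string is illustrative); &amp;lt;code&amp;gt;replace()&amp;lt;/code&amp;gt; treats a &amp;lt;code&amp;gt;/.../&amp;lt;/code&amp;gt;-delimited first argument as a regex:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
&amp;gt; replace(&amp;quot;key: value\n  # a comment&amp;quot;, &amp;quot;/(?m)^[[:space:]]*#.*$/&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;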
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file you pass values for the variables defined in the module, to create resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory will be created that contains symlinks to the module.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables are a way to return important data when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. These values can also be recalled, once the .tfstate file has been populated, using the &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
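&lt;br /&gt;
For context, a variable and &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; call that could drive the loop above (the names and template path are illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
variable &amp;quot;names&amp;quot; {&lt;br /&gt;
  default = [&amp;quot;Adam&amp;quot;, &amp;quot;Mary&amp;quot;, &amp;quot;Jane&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;rendered&amp;quot; {&lt;br /&gt;
  # with the template above, only &amp;quot;Mary&amp;quot; survives the %{ if } filter&lt;br /&gt;
  value = templatefile(&amp;quot;${path.module}/names.tpl&amp;quot;, { names = var.names })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;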
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview correctness of interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #this causes this resource to always run&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of creating instances with a uniquely rendered user-data template per instance:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; resource is deprecated in favour of the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; data source&lt;br /&gt;
* Terraform 0.12+ offers the new &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function without the need for a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
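A sketch of the same user_data wiring using the 0.12+ &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function instead of a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object (names taken from the example above):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count     = var.instance_count&lt;br /&gt;
  user_data = templatefile(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;, {&lt;br /&gt;
    vmname = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;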
== template json files ==&lt;br /&gt;
For working with JSON structures it is [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use the &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function, which simplifies escaping and delimiters and returns validated JSON.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explain&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
    indicates that the result is an array []                  closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
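&lt;br /&gt;
As the linked recommendation also suggests, the whole policy can be generated with &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; directly, with no template file at all (a sketch reusing the bucket variable above):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
  name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
  policy = jsonencode({&lt;br /&gt;
    Version = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
    Statement = [{&lt;br /&gt;
      Effect   = &amp;quot;Allow&amp;quot;&lt;br /&gt;
      Action   = [&amp;quot;s3:ListBucket&amp;quot;, &amp;quot;s3:GetBucketLocation&amp;quot;]&lt;br /&gt;
      Resource = [for b in var.s3_buckets : &amp;quot;arn:aws:s3:::${b}&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;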
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The null_resource allows you to create a Terraform-managed resource, also saved in the state file, that runs provisioners such as local-exec and remote-exec, allowing arbitrary code execution. This should only be used when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up a dependency. So it depends on completion of another resource &lt;br /&gt;
  #and it won't run if the resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers save computed strings in tfstate file, if value changes on the next run it triggers a resource to be created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destroy&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: By default the local-exec provisioner runs your script with &amp;lt;code&amp;gt;/bin/sh -c &amp;quot;your&amp;lt;&amp;lt;EOF script&amp;quot;&amp;lt;/code&amp;gt;, so it will not strip meta-characters such as &amp;quot;double quotes&amp;quot;, which would cause the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; to fail. Therefore the output has been forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
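&lt;br /&gt;
If a different shell is needed, local-exec also accepts an &amp;lt;code&amp;gt;interpreter&amp;lt;/code&amp;gt; argument (a sketch; the command is illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
  interpreter = [&amp;quot;/bin/bash&amp;quot;, &amp;quot;-c&amp;quot;]&lt;br /&gt;
  command     = &amp;quot;echo lock acquired&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;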
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
Create a &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in the $HOME directory and specify the cache directory, or set an environment variable. Then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; to save providers into the shared (cache) directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Option 1.&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Option 2.&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Create the cache directory&lt;br /&gt;
mkdir $HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Delete per-root-module providers kept in each .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache directory is shared by all Terraform projects, provider versioning still works and the normal version constraints apply. To be sure which version is locked for use with your current project, you can inspect the SHA256 hashes recorded in one of the files in the &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI Engine name] 'aurora-mysql' refers to engine version 5.7.x; for version 5.6.10a the engine name is 'aurora'.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
# https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## tflint must be executed in the Terraform root path, hence `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs . &amp;gt; README.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
inframap usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# The most important subcommands are:&lt;br /&gt;
# * generate: generates the graph from STDIN or file; STDIN can be .tf files/modules or .tfstate&lt;br /&gt;
# * prune: removes all unnecessary information from the state or HCL (not supported yet) so it can be shared without any security concerns&lt;br /&gt;
&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure-as-code coverage and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform; cloud providers: AWS and GitHub (Azure and GCP on the roadmap for 2021). driftctl is a free and open-source CLI that spots discrepancies as they happen, warning about infrastructure drift and filling in a missing piece in the DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
sudo install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown on live infra&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes &amp;lt;code&amp;gt;moved&amp;lt;/code&amp;gt; blocks for you, so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
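As a quick sanity check without the web tool, the split of a /22 into /24s can be sketched in shell; the 192.168.0.0/22 network below matches the linked example:&lt;br /&gt;

```shell
# A /22 network contains 2^(24-22) = 4 /24 subnets.
# Enumerate the /24 subnets of 192.168.0.0/22 (third octet 0..3).
prefix=22
count=$(( 1 << (24 - prefix) ))
subnets=''
i=0
while [ "$i" -lt "$count" ]; do
  subnets="$subnets 192.168.$i.0/24"
  i=$(( i + 1 ))
done
echo "$subnets"
```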
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc.&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7040</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7040"/>
		<updated>2024-11-07T22:59:20Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Terraform Merge on Wildcard Tuple */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about utilising a tool from HashiCorp called Terraform to build infrastructure as code (IaC).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| most of the paragraphs contain examples written in pre-0.12 Terraform syntax (HCLv1). HCLv2 was introduced with v0.12+ and brings significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv&lt;br /&gt;
echo '[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/' &amp;gt;&amp;gt; ~/.bashrc # single quotes defer $HOME/$PATH expansion; or use ~/.bash_profile&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install 1.0.6&lt;br /&gt;
tfenv use 1.0.6&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with the Language Server enabled; open the command palette with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs, it looks for .tf files where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory Terraform runs from. Therefore, if you wish to reference a common file, create a symbolic link to it within the directory that holds your .tf file.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
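The symbolic-link workaround for sharing a common file can be sketched as below; the directory and file names are hypothetical examples, not from this article:&lt;br /&gt;

```shell
# Hypothetical layout: a shared variables file linked into a project directory
# so that Terraform, which only reads .tf files from its working directory,
# picks it up as well.
mkdir -p /tmp/tf-demo/common /tmp/tf-demo/project
touch /tmp/tf-demo/common/variables.tf
ln -sf ../common/variables.tf /tmp/tf-demo/project/variables.tf
ls -l /tmp/tf-demo/project/variables.tf
```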
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x major changes and features have been introduced, including the split of the providers binary: each provider is now a separate binary. See the example below for the Azure provider and other providers maintained by HashiCorp.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example, how to source credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  version          = &amp;quot;~&amp;gt; 1.0&amp;quot;&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many possible combinations when setting up the backend and AWS credentials. The important point is that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it only uses the backend's own settings. Options include:&lt;br /&gt;
* exporting credentials, which allows assuming different roles in the backend and provider blocks&lt;br /&gt;
* specifying a different &amp;lt;code&amp;gt;profile = &amp;lt;/code&amp;gt; in each block&lt;br /&gt;
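Because a backend block cannot interpolate variables, a common workaround is a partial backend configuration passed at init time. A minimal sketch (the file name and bucket/role values are illustrative):&lt;br /&gt;

```shell
# Generate a partial backend config file; values must be literals here,
# so they are written out rather than taken from ${var...}
printf '%s\n' \
  'bucket   = "tfstate-myproject-111111111111"' \
  'key      = "terraform/aws/myproject/tfstate"' \
  'region   = "eu-west-1"' \
  'role_arn = "arn:aws:iam::111111111111:role/terraform-s3state"' > backend.hcl
grep -c '=' backend.hcl   # prints: 4
```

Running &amp;lt;code&amp;gt;terraform init -backend-config=backend.hcl&amp;lt;/code&amp;gt; would then merge these values into an empty &amp;lt;code&amp;gt;backend &amp;quot;s3&amp;quot; {}&amp;lt;/code&amp;gt; block.&lt;br /&gt;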
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## a profile allows assuming roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume the following roles in other accounts:&lt;br /&gt;
#          | * arn:aws:iam::111111111111:role/terraform-s3state              - save state in s3 bucket&lt;br /&gt;
#          | * arn:aws:iam::222222222222:role/terraform-crossaccount-admin   - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# unset all of them if needed&lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
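The &amp;lt;code&amp;gt;${!AWS@}&amp;lt;/code&amp;gt; expansion above is bash prefix matching: it expands to the names of all variables starting with AWS, which is what lets one &amp;lt;code&amp;gt;unset&amp;lt;/code&amp;gt; clear them all. A quick demo:&lt;br /&gt;

```shell
# Set a couple of AWS_* variables (dummy values)
export AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=eu-west-1
echo "${!AWS@}"                     # lists the names of all AWS_* variables
unset "${!AWS@}"                    # unsets every one of them
echo "${AWS_ACCESS_KEY_ID:-unset}"  # prints: unset
```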
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;  # note: 'version' is not a valid argument in a terraform block&lt;br /&gt;
# profile &amp;quot;dev-us&amp;quot; # we use 'role_arn' but could specify aws profile instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    # NOTE: a backend block cannot interpolate ${var...}; values must be&lt;br /&gt;
    # literals or passed via -backend-config&lt;br /&gt;
    bucket   = &amp;quot;tfstate-PROJECT-ACCOUNT_ID&amp;quot;    # must exist beforehand&lt;br /&gt;
    key      = &amp;quot;terraform/aws/PROJECT/tfstate&amp;quot;  # could be much simpler when working with terraform workspaces&lt;br /&gt;
    region   = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
    role_arn = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in the infra account that holds the state bucket&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use a profile instead of the 'assume_role' option; on your laptop&lt;br /&gt;
## that would be your credentials profile, e.g. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role {  # a block, not an assignment&lt;br /&gt;
  # role_arn = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # a variable can be used instead&lt;br /&gt;
    role_arn = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot; # assume role in the target account&lt;br /&gt;
  }&lt;br /&gt;
  region  = var.aws_region&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
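An alternative worth noting (a hedged sketch, not part of the setup above): a second, aliased provider can target another account directly, so one configuration can manage both accounts at once; the role ARN and bucket name are illustrative:&lt;br /&gt;

```terraform
# Illustrative: an aliased provider assuming a role in a second account
provider "aws" {
  alias  = "prod"
  region = var.aws_region
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform-crossaccount-admin"
  }
}

# Resources opt in to the aliased provider explicitly
resource "aws_s3_bucket" "prod_logs" {
  provider = aws.prod
  bucket   = "prod-logs-example"
}
```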
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
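The &amp;lt;code&amp;gt;$(terraform workspace show).tfvars&amp;lt;/code&amp;gt; trick above works because each tfvars file is named after its workspace; a small illustration ('dev' stands in for the command output):&lt;br /&gt;

```shell
# Derive the tfvars file name from the current workspace name
ws=dev                    # stand-in for $(terraform workspace show)
varfile="${ws}.tfvars"
echo "$varfile"           # prints: dev.tfvars
```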
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
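With ADC saved, the google provider needs no explicit credentials in configuration; a minimal hedged sketch (the region is illustrative, the project matches the quota project above):&lt;br /&gt;

```terraform
# The google provider picks up Application Default Credentials automatically
provider "google" {
  project = "test-devops-candidate1"
  region  = "europe-west1"
}
```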
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
The markers in a &amp;lt;code&amp;gt;terraform plan&amp;lt;/code&amp;gt; output have the following meanings:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
The apply stage, when run for the first time, creates terraform.tfstate once all changes are done. This file should not be modified manually. It records what is already deployed in the cloud, so the next time apply runs it compares against the file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated the terraform.tfstate looks like below:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= AWS credentials profiles and variable files =&lt;br /&gt;
Instead of referencing secret access keys directly within a .tf file, we can use an AWS profile file. This file is looked up for the profile variable we specify in the variables.tf file. Note: there are '''no double quotes'''.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can now remove the secret access keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;  # note: 'version' and 'region' are not valid arguments here&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version    =   &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and create a variable file to reference it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} # a variable without a default value will prompt for input; here it should be 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #this way value can be set&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
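Variables can also be supplied through the environment, avoiding both the prompt and the &amp;lt;code&amp;gt;-var&amp;lt;/code&amp;gt; flag: Terraform reads any &amp;lt;code&amp;gt;TF_VAR_&amp;lt;name&amp;gt;&amp;lt;/code&amp;gt; environment variable. A small demo:&lt;br /&gt;

```shell
# Terraform maps TF_VAR_profile onto var.profile automatically
export TF_VAR_profile=terraform-profile1
# terraform plan          # would now pick up var.profile from the environment
echo "$TF_VAR_profile"    # prints: terraform-profile1
```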
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove the &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt;, also known as a variable file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the SSH private key file; ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;...omitted...&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show the current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run destroy command to delete all resources that were created&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
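A sketch of picking the address to taint out of &amp;lt;code&amp;gt;terraform state list&amp;lt;/code&amp;gt; output (the sample addresses below are illustrative, not from a real state file):&lt;br /&gt;

```shell
# Simulated `terraform state list` output, saved for filtering
printf '%s\n' \
  'module.web.aws_instance.main[0]' \
  'module.web.aws_instance.main[1]' \
  'module.web.aws_security_group.default' > state_list.txt
grep 'aws_instance' state_list.txt   # shows the two instance addresses
```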
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It is not possible to taint a whole module&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint module.MODULE_NAME.aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Use Ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, given the difficulty of storing the state of a local-exec Ansible run. It could be set to always run, since Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often it is necessary to inspect a data structure that is the output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, a &amp;lt;tt&amp;gt;data.resource&amp;lt;/tt&amp;gt;, or simply a template whose computation is hidden and not always displayed on screen. You can use the following techniques to examine your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - empty virtual container that can run any arbitrary commands&lt;br /&gt;
* '''Problem statement:''' Display a computed Terraform &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' Use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to render the template, so it is shown in a &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt;. This matters because if the template is a JSON policy, an invalid policy fails to apply and you cannot see why. The plan shows the object being constructed; running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; saves it into the state file as an output variable. The object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and output&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length(var.crossAccountIamUsers_arns) &amp;gt; 0&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
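The conditional section of the template can be sketched outside Terraform as well. The following Python analogue (function and variable names such as `render_policy` and `cross_account_arns` are illustrative, not part of the original) builds the same policy document and omits the cross-account statement when the ARN list is empty, mirroring the `%{ if crossAccountAccessEnabled }` directive:

```python
import json

def render_policy(account_id, cross_account_arns):
    # Always emit the root-account statement, mirroring the first
    # statement of kms_secretmanager.policy.json.tpl.
    statements = [{
        "Sid": "Enable IAM User Permissions",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
        "Action": "kms:*",
        "Resource": "*",
    }]
    # Mirrors the %{ if crossAccountAccessEnabled } template section:
    # the second statement only appears when ARNs were supplied.
    if cross_account_arns:
        statements.append({
            "Sid": "Allow cross-accounts retrieve secrets",
            "Effect": "Allow",
            "Principal": {"AWS": cross_account_arns},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        })
    return json.dumps({"Version": "2012-10-17",
                       "Id": "key-consolepolicy-1",
                       "Statement": statements}, indent=4)
```

This makes it easy to see, before ever running a plan, which inputs produce one statement and which produce two.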
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable Terraform logging to a file, convert the log file to PDF, and use sheri.ai to analyse it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt;==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. It is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. The Terraform console will read the configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
In Linux, add the line below after the shebang&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now you can go and open System Logs in AWS Console to view user-data script logs.&lt;br /&gt;
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visual graph file. You may need to install Graphviz (&amp;lt;code&amp;gt;sudo apt-get install graphviz&amp;lt;/code&amp;gt;) if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; option causes Terraform to mark the arrows that are part of the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other color you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your configuration to avoid cross-module references. Instead of directly accessing an output of one module from inside another, set it up as an input parameter and wire everything together at the top level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
With a cyclic dependency issue, study the graph, then decide on removing from the state a resource that should be re-created later. If the graph is not clear or is too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create an S3 bucket with a unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure terraform:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to backend configuration. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with the index &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. In the event of taking a lock, an entry similar to the one below gets created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
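The lock record is plain JSON, so it is easy to inspect programmatically; a small Python sketch (the variable names are illustrative) pulls out who holds the lock and for which operation:

```python
import json

# The DynamoDB lock entry as written by Terraform (example record from above).
lock_entry = ('{"ID":"62a453e8-7fbc-cfa2-e07f-be1381b82af3",'
              '"Operation":"OperationTypePlan","Info":"","Who":"piotr@laptop1",'
              '"Version":"0.11.11","Created":"2019-03-07T08:49:33.3078722Z",'
              '"Path":"tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state"}')

lock = json.loads(lock_entry)
# Summarise who is holding the lock and why, e.g. for an ops alert.
holder = f"{lock['Who']} ({lock['Operation']})"
```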
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform’s automatic state upgrading process, so it is best to do this with the same Terraform CLI version that you have most recently been using against this workspace, so that the state will not be implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via the CLI&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definitions files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
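The auto-load rules above are purely filename-based; as a hedged illustration, this Python sketch (the function name `is_auto_loaded` is made up for the example) classifies filenames the same way:

```python
def is_auto_loaded(filename: str) -> bool:
    # Terraform auto-loads terraform.tfvars(.json) and any *.auto.tfvars(.json);
    # every other .tfvars file must be passed explicitly with -var-file.
    return (filename in ("terraform.tfvars", "terraform.tfvars.json")
            or filename.endswith((".auto.tfvars", ".auto.tfvars.json")))
```

For example, `prod.auto.tfvars.json` is picked up automatically, while `test.tfvars` is not.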
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute - usually has user-defined keys, as in the tags example&lt;br /&gt;
* a nested block - always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ For_each, and the new formatting that no longer requires the &amp;quot;${var.vpc_cidr}&amp;quot; syntax - plain var.vpc_cidr is allowed&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags = {            # &amp;lt;- note the '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = &amp;quot;list&amp;quot;&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
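The `ingress_port_mapping` output above is an ordinary key/value projection. A rough Python equivalent of the same `for` expression (with the ingress data hard-coded purely for illustration) would be:

```python
# Stand-in for aws_security_group.tf_public_sg.ingress as seen in state.
ingress = [{"from_port": 22, "to_port": 22},
           {"from_port": 80, "to_port": 80}]

# Same shape as:
#   { for ingress in ...: format("From %d", ingress.from_port) => format("To %d", ingress.to_port) }
ingress_port_mapping = {f"From {i['from_port']}": f"To {i['to_port']}" for i in ingress}
```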
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    for_each = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
    triggers = {&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = each.value&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
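The `for_each` argument above hinges on turning the list of objects into a map keyed by `name`; the equivalent transformation, sketched in Python purely for illustration, is:

```python
# Stand-in for local.users from the Terraform example above.
users = [{"name": "foo", "is_enabled": True},
         {"name": "bar", "is_enabled": False}]

# Same shape as: { for name in local.users : name.name => name.is_enabled }
users_map = {u["name"]: u["is_enabled"] for u in users}
```

Each map key becomes an instance address (`null_resource.this["foo"]`), which is why `for_each` requires a map or set rather than a plain list.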
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way as the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, such as &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules, with an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluating to an object value with attributes corresponding to the module's named outputs.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
This is mostly used for transforming pre-existing lists and maps rather than generating new ones. For example, we can convert all elements in a list of strings to upper case using this expression.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; expression iterates over each element of the list and returns the value of upper(i) for each element, in the form of a list. We can also use this expression to generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use ''if'' as a filter in ''for'' expression&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The filtered expression above returns a list of all non-empty elements converted to upper case. Note that ''if'' can be used as a filter in ''for'' expressions, but not in logical operations such as the ternary operators used earlier.&lt;br /&gt;
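The three `for`-expression forms above (list, map, and filtered) map directly onto comprehensions in most languages; as a hedged illustration, the same transformations in Python:

```python
src = ["dev", "", "prod"]  # stand-in for var.list

upper_list = [s.upper() for s in src]            # [for i in var.list : upper(i)]
upper_map = {s: s.upper() for s in src}          # {for i in var.list : i => upper(i)}
non_empty = [s.upper() for s in src if s != ""]  # [for i in var.list : upper(i) if i != ""]
```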
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing items whose string value does not match a given expression&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of array the map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for k, v' syntax builds a new list 'validation_domains' by iterating over the list of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options', keeping only items whose domain name&lt;br /&gt;
# (with the &amp;quot;*.&amp;quot; prefix stripped) matches. tomap(v) is required to preserve the type across the for expression.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options : tomap(v) if contains(&lt;br /&gt;
      [&amp;quot;dev.example.com&amp;quot;], replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for domain' expression builds a new list only when a domain matches the regexall pattern.&lt;br /&gt;
# It checks that regexall returns captured matches (length &amp;gt; 0), yielding true or false, so&lt;br /&gt;
# the 'for domain : if' statement conditionally adds the item to the new list&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above but iterating over array of maps (k,v - key, value pairs)&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the list of maps 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
# to build a list of FQDNs stored under the '.resource_record_name' key of each map.&lt;br /&gt;
# On each iteration 'fqdn' is set to the current element 'aws_acm_certificate.main.domain_validation_options[index]',&lt;br /&gt;
# and the expression after ':' yields the value 'fqdn.resource_record_name' that is added to the new list&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== function: replace, regex ==&lt;br /&gt;
Snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match anyline that starts with '#' or any 'whitespace(s) + #'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the pattern:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
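&lt;br /&gt;
The patterns can be tried out in &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt; before wiring them into the release (assuming the locals above are already defined in the configuration; the input strings are purely illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform console&lt;br /&gt;
&amp;gt; replace(&amp;quot;# comment\nkey: value\n&amp;quot;, local.match_comment, &amp;quot;&amp;quot;)&lt;br /&gt;
&amp;gt; replace(&amp;quot;key: value\n\n&amp;quot;, local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;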
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Syntax Terraform ~0.11 =&lt;br /&gt;
== &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statements ==&lt;br /&gt;
;Terraform ~&amp;lt; 0.9&lt;br /&gt;
Old versions of Terraform don't support an if or if-else statement, but we can take advantage of the ''count'' attribute that most resources have, because a boolean interpolates to a number.&lt;br /&gt;
 boolean true  = 1&lt;br /&gt;
 boolean false = 0&lt;br /&gt;
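&lt;br /&gt;
The ''count'' trick can be sketched as below (resource and variable names are illustrative, not from this page):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
variable &amp;quot;create_eip&amp;quot; { default = true }&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_eip&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  # true interpolates to 1, false to 0, so the resource is created only when the variable is true&lt;br /&gt;
  count = &amp;quot;${var.create_eip}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;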
&lt;br /&gt;
;Terraform ~0.11+&lt;br /&gt;
Newer versions support conditional expressions; the syntax is the well-known ternary operation:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 CONDITION ? TRUEVAL  : FALSEVAL&lt;br /&gt;
 CONDITION ? caseTrue : caseFalse&lt;br /&gt;
 domain = &amp;quot;${var.frontend_domain != &amp;quot;&amp;quot; ? var.frontend_domain : var.domain}&amp;quot; # tf &amp;lt;0.12 syntax&lt;br /&gt;
 count = var.image_publisher == &amp;quot;MicrosoftWindowsServer&amp;quot; ? 0 : 3            # tf 0.12+ syntax&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The supported operators are:&lt;br /&gt;
*Equality: == and !=&lt;br /&gt;
*Numerical comparison: &amp;gt;, &amp;lt;, &amp;gt;=, &amp;lt;=&lt;br /&gt;
*Boolean logic: &amp;amp;&amp;amp;, ||, unary !  (|| is  logical OR; “short-circuit” OR)&lt;br /&gt;
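&lt;br /&gt;
These operators can be combined inside a ternary condition, e.g. (an illustrative snippet, not from this page):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
count = var.env == &amp;quot;prod&amp;quot; &amp;amp;&amp;amp; var.instance_count &amp;gt; 0 ? var.instance_count : 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;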
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file you pass values for variables that are defined in the module, to create resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory will be created that contains symlinks to the module.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables are a way to output important data back when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. These variables can also be recalled, once the .tfstate file has been populated, using the &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview correctness of interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #this causes to always run this resource&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of creating per-instance &amp;lt;code&amp;gt;user_data&amp;lt;/code&amp;gt; by combining ''count'' with a templated data source&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; resource is deprecated in favour of the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; data source&lt;br /&gt;
* Terraform 0.12+ offers the new &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function without the need for a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
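&lt;br /&gt;
With Terraform 0.12+ the &amp;lt;code&amp;gt;data template_file&amp;lt;/code&amp;gt; block above can be replaced by a direct &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; call; a minimal sketch, reusing the variable names from the example above:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count     = var.instance_count&lt;br /&gt;
  # render the template per instance, no data object needed&lt;br /&gt;
  user_data = templatefile(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;, {&lt;br /&gt;
    vmname = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  })&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;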
== template json files ==&lt;br /&gt;
For working with JSON structures it's [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function to simplify escaping, delimiters and get validated json in return.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation of the interpolation&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
     indicates that the result is an array []                 closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
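&lt;br /&gt;
An alternative, assuming the same &amp;lt;code&amp;gt;var.s3_buckets&amp;lt;/code&amp;gt;, is to skip the template file entirely and build the whole policy with &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; (a sketch, not the approach used above):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
  name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
  policy = jsonencode({&lt;br /&gt;
    Version = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
    Statement = [&lt;br /&gt;
      {&lt;br /&gt;
        Effect   = &amp;quot;Allow&amp;quot;&lt;br /&gt;
        Action   = &amp;quot;s3:ListAllMyBuckets&amp;quot;&lt;br /&gt;
        Resource = &amp;quot;*&amp;quot;&lt;br /&gt;
      },&lt;br /&gt;
      {&lt;br /&gt;
        Effect   = &amp;quot;Allow&amp;quot;&lt;br /&gt;
        Action   = [&amp;quot;s3:ListBucket&amp;quot;, &amp;quot;s3:GetBucketLocation&amp;quot;]&lt;br /&gt;
        # same list comprehension as in the template, now inline&lt;br /&gt;
        Resource = [for b in var.s3_buckets : &amp;quot;arn:aws:s3:::${b}&amp;quot;]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;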
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The null_resource allows you to create a Terraform-managed resource, also saved in the state file, that runs 3rd-party provisioners like local-exec, remote-exec, etc., allowing arbitrary code execution. This should only be used when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up a dependency. So it depends on completion of another resource &lt;br /&gt;
  #and it won't run if the resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers save computed strings in tfstate file, if value changes on the next run it triggers a resource to be created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destruct&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: By default the local-exec provisioner runs your &amp;lt;code&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/code&amp;gt; script with &amp;lt;code&amp;gt;/bin/sh -c&amp;lt;/code&amp;gt;, so it will not strip meta-characters such as &amp;quot;double quotes&amp;quot;, which can cause the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; to fail. Therefore the output has been forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
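&lt;br /&gt;
If shell quoting still gets in the way, the provisioner's &amp;lt;code&amp;gt;interpreter&amp;lt;/code&amp;gt; argument can be set explicitly; a minimal sketch (the command is illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
  # run the command through bash instead of the default /bin/sh&lt;br /&gt;
  interpreter = [&amp;quot;/bin/bash&amp;quot;, &amp;quot;-c&amp;quot;]&lt;br /&gt;
  command     = &amp;quot;aws elbv2 describe-load-balancers --output text&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;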
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
Create &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in $HOME directory and specify the cache directory. Or set an environment variable. Then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; to save providers into shared (cache) directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Option 1.&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Option 2.&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Create the cache directory&lt;br /&gt;
mkdir $HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Delete per-root-module providers in the .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache directory is used by all Terraform projects, provider versioning still works and normal version constraints apply. If you want to be sure which version is locked for use with your current project, you can inspect the SHA256 hashes saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file inside the &amp;lt;tt&amp;gt;.terraform&amp;lt;/tt&amp;gt; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI Engine name] 'aurora-mysql' refers to engine versions 5.7.x; for version 5.6.10a the engine name is 'aurora'.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## It is important that tflint is executed in the Terraform root path, hence the `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs . &amp;gt; README.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage of inframap. The most important subcommands are:&lt;br /&gt;
* generate: generates the graph from STDIN or a file; input can be .tf files/modules or a .tfstate&lt;br /&gt;
* prune: removes all unnecessary information from the state or HCL (HCL not supported yet) so it can be shared without security concerns&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
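A minimal input to try the converter with; the bucket name is illustrative, the JSON shape is the standard IAM policy document format:

```shell
# Create a small IAM policy JSON to feed into the converter
cat > some-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF
# Then: iam-policy-json-to-terraform < some-policy.json > policy.tf
cat some-policy.json
```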
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure-as-code coverage and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform. Cloud providers: AWS, GitHub (Azure and GCP were on the roadmap for 2021). driftctl is a free, open-source CLI that spots discrepancies as they happen, warning about infrastructure drift and filling a missing piece in your DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
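The "ignore resources" feature is driven by a `.driftignore` file in the scan directory; a sketch (the resource addresses are illustrative):

```shell
# Write a .driftignore so driftctl skips known-unmanaged resources;
# the addresses below are examples only.
cat > .driftignore <<'EOF'
# ignore a single resource
aws_iam_user.terraform-ci
# wildcards are supported
aws_cloudwatch_log_group.*
EOF
cat .driftignore
```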
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
sudo install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown directly from a Terraform project directory&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
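The usage-based costs flagged in the output below can be supplied via a usage file passed with `--usage-file`; a sketch (the resource address and quantities are illustrative):

```shell
# Write a minimal infracost usage file; resource address and numbers
# are examples only, see https://infracost.io/usage-file for the schema.
cat > infracost-usage.yml <<'EOF'
version: 0.1
resource_usage:
  aws_lambda_function.hello_world:
    monthly_requests: 100000
    request_duration_ms: 250
EOF
# Then: infracost breakdown --path plan.json --usage-file infracost-usage.yml
cat infracost-usage.yml
```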
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes moved blocks for you so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
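For context, what tfautomv writes for you are ordinary Terraform `moved` blocks (Terraform &gt;= 1.1); a sketch with illustrative resource addresses:

```shell
# Example of the kind of moved block tfautomv generates when a resource
# is relocated into a module; addresses are illustrative.
cat > moves.tf <<'EOF'
moved {
  from = aws_instance.webserver
  to   = module.web.aws_instance.this
}
EOF
cat moves.tf
```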
&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc..&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7039</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7039"/>
		<updated>2024-11-07T22:55:45Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* terraform plugins cache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about utilising Terraform, a tool from HashiCorp, to build infrastructure as code (IaC).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| most paragraphs show examples using pre-0.12 Terraform syntax (HCLv1). HCLv2, introduced in v0.12+, contains significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv&lt;br /&gt;
echo &amp;quot;[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/&amp;quot; &amp;gt;&amp;gt; ~/.bashrc # or ~/.bash_profile&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install 1.0.6&lt;br /&gt;
tfenv use 1.0.6&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
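tfenv also honours a `.terraform-version` file in the project directory, so the right version is selected automatically; a sketch (the pinned version is illustrative):

```shell
# Pin the Terraform version for this directory; tfenv picks it up
# on the next `terraform` invocation.
echo "1.0.6" > .terraform-version
cat .terraform-version
```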
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode with 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with the Language Server enabled; open the command palette with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs it looks for .tf files, where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory it runs from. Therefore, if you wish to reference a common file, create a symbolic link to it inside the directory containing your .tf files.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
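The symbolic-link approach described above can be sketched like this (directory and file names are illustrative):

```shell
# Share common.tf between flat Terraform directories via a symlink,
# since Terraform only reads .tf files from the directory it runs in.
mkdir -p common stacks/web
echo 'variable "region" { default = "eu-west-1" }' > common/common.tf
ln -sf ../../common/common.tf stacks/web/common.tf
ls -l stacks/web/
```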
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x, major changes and features have been introduced, including splitting providers out of the core binary; each provider is now a separate binary. See the examples below for the Azure provider and other HashiCorp-maintained providers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example, how to source credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many combinations for setting up the backend and AWS credentials. The important thing to understand is that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it only uses the backend's own configuration. Options:&lt;br /&gt;
* exporting credentials allows working with assume roles that differ between the backend and provider blocks&lt;br /&gt;
* specifying a different &amp;lt;code&amp;gt;profile = &amp;lt;/code&amp;gt; in each block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## profile allows assumes roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume roles (eg. ) in other accounts:&lt;br /&gt;
#          | * arn:aws:iam::111111111111:role/terraform-s3state              - save state in s3 bucket&lt;br /&gt;
#          | * arn:aws:iam::222222222222:role/terraform-crossaccount-admin   - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# unset all of them if need to &lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot;&lt;br /&gt;
# profile &amp;quot;dev-us&amp;quot; # we use 'role_arn' but could specify aws profile instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; { # NB: backend blocks cannot interpolate variables; the ${var.*} values below are illustrative only&lt;br /&gt;
    bucket  = &amp;quot;tfstate-${var.project}-${var.account-id}&amp;quot; # must exist beforehand&lt;br /&gt;
    key     = &amp;quot;terraform/aws/${var.project}/tfstate&amp;quot;     # this could be much simpler when working with terraform workspaces&lt;br /&gt;
    region  = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in an infra account that the s3 state exists&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use profiles but instead we use 'assume_role' option. Also on your laptop &lt;br /&gt;
## it should be your creds profile eg. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role {&lt;br /&gt;
  # role_arn = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot;    # assume role in target account&lt;br /&gt;
    role_arn = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # can use variables&lt;br /&gt;
  }&lt;br /&gt;
  region  = &amp;quot;${var.aws_region}&amp;quot;&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
For now, here they are, until they are better covered in the docs:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
The apply stage, on its first run, creates terraform.tfstate once all changes are done. This file should not be modified manually. It records what already exists in the cloud, so the next time apply runs it compares against the file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated, the terraform.tfstate file looks like this:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
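Because the state file is plain JSON, it can be queried with ordinary JSON tooling when &amp;lt;code&amp;gt;terraform show&amp;lt;/code&amp;gt; is not enough; a minimal sketch (assuming python3 is available, using an inline stand-in for the empty post-destroy state above):&lt;br /&gt;

```shell
# Count resources left in a version-3 state document - after destroy,
# every module's "resources" map is empty, so the count is 0.
state='{"version":3,"modules":[{"path":["root"],"outputs":{},"resources":{},"depends_on":[]}]}'
printf '%s' "$state" | python3 -c 'import json,sys; s=json.load(sys.stdin); print(sum(len(m["resources"]) for m in s["modules"]))'
```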
&lt;br /&gt;
= AWS credentials profiles and variable files=&lt;br /&gt;
Instead of referencing the secret access keys directly in a .tf file, we can use the AWS credentials file with named profiles. Terraform looks in this file for the profile name we set via the profile variable in variables.tf. Note: values in this file take '''no double quotes'''.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
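A minimal sketch of creating the same profile non-interactively (assuming a POSIX shell; &amp;lt;code&amp;gt;AWS_SHARED_CREDENTIALS_FILE&amp;lt;/code&amp;gt; is the standard variable for pointing AWS tooling at a non-default credentials location, and a temp file stands in for ~/.aws/credentials here):&lt;br /&gt;

```shell
# Write the named profile to a temp credentials file and verify the
# profile header is present before pointing Terraform at it.
export AWS_SHARED_CREDENTIALS_FILE="$(mktemp)"
printf '[terraform-profile1]\naws_access_key_id     = AAAAAAAAAAA\naws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB\n' > "$AWS_SHARED_CREDENTIALS_FILE"
grep -c '^\[terraform-profile1\]' "$AWS_SHARED_CREDENTIALS_FILE"
```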
&lt;br /&gt;
&lt;br /&gt;
Then we can remove the secret access keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot; # the terraform block only accepts required_version and backend settings&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
  region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
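Because the &amp;lt;code&amp;gt;backend &amp;quot;s3&amp;quot;&amp;lt;/code&amp;gt; block above is empty, the S3 details must come from elsewhere. Besides environment variables, one common pattern is a partial backend configuration file passed at init time with &amp;lt;code&amp;gt;terraform init -backend-config=backend.hcl&amp;lt;/code&amp;gt; (a sketch with hypothetical bucket and key names):&lt;br /&gt;

```hcl
# backend.hcl - hypothetical partial backend configuration; Terraform merges
# these values into the empty backend "s3" {} block during init
bucket = "my-terraform-state"
key    = "example/terraform.tfstate"
region = "eu-west-1"
```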
&lt;br /&gt;
&lt;br /&gt;
and create a variables file to reference it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} # a variable without a default value will prompt for input; here enter 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #this way value can be set&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
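As an alternative to &amp;lt;code&amp;gt;-var&amp;lt;/code&amp;gt;, any input variable can also be supplied through the environment using the &amp;lt;code&amp;gt;TF_VAR_&amp;lt;/code&amp;gt; prefix, which Terraform picks up automatically (a sketch):&lt;br /&gt;

```shell
# Set var.profile via the environment; terraform plan/apply will read it
# without prompting and without the value appearing in the command line.
export TF_VAR_profile=terraform-profile1
echo "$TF_VAR_profile"
```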
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove the &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt; also known as a variable file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the SSH private key file; ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
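Prompting can also be avoided with a &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; file, which Terraform loads automatically; a sketch using the variable names above:&lt;br /&gt;

```hcl
# terraform.tfvars - loaded automatically; values here override the defaults
# declared in inputs.tf
profile         = "terraform-profile"
key_name        = "id_rsa"
public_key_path = "~/.ssh/id_rsa.pub"
```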
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
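To surface the instance's address after apply, an outputs file can be added (a hypothetical sketch in the same 0.11-era interpolation syntax; the values are printed at the end of &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; and shown by &amp;lt;code&amp;gt;terraform output&amp;lt;/code&amp;gt;):&lt;br /&gt;

```hcl
# outputs.tf - hypothetical additions exposing the webserver's address
output "webserver_public_ip" {
  value = "${aws_instance.webserver.public_ip}"
}
output "webserver_public_dns" {
  value = "${aws_instance.webserver.public_dns}"
}
```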
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...omitted...&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run destroy command to delete all resources that were created&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It is not possible to taint a whole module, only single resources within it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint 'module.MODULE_NAME.aws_instance.main[0]'&lt;br /&gt;
# Note: newer releases (0.15.2+) deprecate taint in favour of:&lt;br /&gt;
# terraform apply -replace='module.MODULE_NAME.aws_instance.main[0]'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Use Ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, because it is not obvious how to store the state of a local-exec Ansible run. It could be set to always run, since Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often you need to inspect a data structure that is the output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;data.resource&amp;lt;/tt&amp;gt; or simply a template, where the computation is hidden and not always displayed on your screen. You can use the following techniques to inspect your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - an empty virtual resource that can run arbitrary commands&lt;br /&gt;
* '''Problem statement:''' display a computed Terraform &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to render the template; the rendered result is shown in a &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt;. This matters because if the template is a JSON policy, an invalid policy simply fails and you cannot see why. The plan shows the object being constructed; after running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; it can also be saved into the state file as an output variable, and the object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and ouput&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
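If a rendered policy is rejected with an opaque error, it can help to sanity-check the JSON itself; a quick local check (a sketch, assuming python3 is available, run here against a trivial stand-in document):&lt;br /&gt;

```shell
# Pipe a rendered policy through Python's JSON parser; a non-zero exit means
# the template emitted invalid JSON, e.g. a trailing comma left behind by
# the %{ if } directive.
printf '{"Version": "2012-10-17", "Statement": []}' | python3 -m json.tool
```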
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable logging to a file in Terraform, convert the log file to PDF, and use sheri.ai to give us the answers.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt;==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. This is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. Terraform console will read configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
On Linux, add the line below after the shebang:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You can then open the instance's System Log in the AWS Console to view the user-data script logs.&lt;br /&gt;
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visual graph file. You may need to install Graphviz (&amp;lt;code&amp;gt;sudo apt-get install graphviz&amp;lt;/code&amp;gt;) if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; option causes Terraform to mark the arrows that are part of the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other colour you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your config to avoid cross-module references: instead of directly accessing an output of one module from inside another, pass it in as an input variable and wire everything together at the top level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
For a cyclic dependency issue, study the graph, then decide which resource to remove from the state so it can be regenerated later. If the graph is not clear or is too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create s3 bucket with unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure Terraform (note: &amp;lt;code&amp;gt;terraform remote config&amp;lt;/code&amp;gt; is the pre-0.9 syntax; newer versions declare a &amp;lt;code&amp;gt;backend&amp;lt;/code&amp;gt; block and run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to backend configuration. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with the index &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. When a lock is taken, an entry similar to the one below gets created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform’s automatic state upgrading process, so it is best to do this with the same Terraform CLI version you have most recently used against this workspace, so that the state won’t be implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via the CLI:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definitions files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute - usually has user-defined keys, as we see in the tags example&lt;br /&gt;
* a nested block - always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ &amp;lt;code&amp;gt;for_each&amp;lt;/code&amp;gt; and the new expression syntax: &amp;quot;${var.vpc_cidr}&amp;quot; can now be written as &amp;lt;code&amp;gt;var.vpc_cidr&amp;lt;/code&amp;gt;&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags =  {           #&amp;lt;-note of '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = list(object({ from_port = number, to_port = number }))&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    for_each = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
    # 'triggers' is the argument null_resource supports (a map of strings)&lt;br /&gt;
    triggers = {&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = each.value&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
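The map built by the &amp;lt;code&amp;gt;for_each&amp;lt;/code&amp;gt; expression above works like a Python dict comprehension; a minimal sketch of the equivalent transformation (the data mirrors &amp;lt;code&amp;gt;local.users&amp;lt;/code&amp;gt;; nothing here is a real Terraform API):&lt;br /&gt;

```python
# Mirrors the Terraform 'local.users' list of objects
users = [
    {"name": "foo", "is_enabled": True},
    {"name": "bar", "is_enabled": False},
]

# Equivalent of: { for name in local.users : name.name => name.is_enabled }
users_map = {u["name"]: u["is_enabled"] for u in users}

print(users_map)  # {'foo': True, 'bar': False}
```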
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, such as &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules: an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluates to an object value with attributes corresponding to the module's named outputs.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
This is mostly used for parsing pre-existing lists and maps rather than generating new ones. For example, the expression below converts all elements in a list of strings to upper case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The for expression iterates over each element of the list and returns upper(i) for each element, in the form of a new list. We can also use this expression to generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, we can use ''if'' as a filter in a ''for'' expression. Note that &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; acts only as a filter here; it cannot be combined with logical operations the way the ternary operator can. The expression below returns a list of all non-empty elements in their uppercase form, so each original element corresponds to its uppercase version.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
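For readers more familiar with Python, the three &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; expression shapes above map directly onto list and dict comprehensions; a sketch with illustrative data (the input list is made up):&lt;br /&gt;

```python
# Illustrative input; stands in for Terraform's var.list
items = ["vpc", "subnet", "", "igw"]

# [for i in var.list : upper(i)]  -> a new list
upper_list = [i.upper() for i in items]

# {for i in var.list : i => upper(i)}  -> a map of original => uppercase
upper_map = {i: i.upper() for i in items}

# [for i in var.list : upper(i) if i != ""]  -> filtered list
non_empty_upper = [i.upper() for i in items if i != ""]

print(upper_list)       # ['VPC', 'SUBNET', '', 'IGW']
print(non_empty_upper)  # ['VPC', 'SUBNET', 'IGW']
```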
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing items whose string value does not match a regex expression.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of array the map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for k, v' syntax builds a new list 'validation_domains' by iterating over the array of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options', conditionally keeping 'v' when its domain name&lt;br /&gt;
# (with any '*.' prefix removed) matches &amp;quot;dev.example.com&amp;quot;. tomap(v) is required to persist the type across the for expression.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options : tomap(v) if contains(&lt;br /&gt;
      [&amp;quot;dev.example.com&amp;quot;], replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for domain' expression builds a new list containing only the domains that match the regexall pattern.&lt;br /&gt;
# regexall returns a list of matches, so checking that its length is &amp;gt; 0 yields true or false, and&lt;br /&gt;
# the 'for domain : if' statement conditionally adds the item to the new list&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above but iterating over array of maps (k,v - key, value pairs)&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the array of maps 'aws_acm_certificate.main.domain_validation_options' to build&lt;br /&gt;
# a list of the FQDNs stored in each item's '.resource_record_name' key.&lt;br /&gt;
# On each iteration of the 'for fqdn' expression, 'fqdn' is set to one element of the array, and&lt;br /&gt;
# the part after ':' yields the value fqdn.resource_record_name for that element&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform Merge on Wildcard Tuple ==&lt;br /&gt;
Ideally the solution should be as simple as:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
merge(local.policy_definitions.*.parameters...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* [https://github.com/hashicorp/terraform/issues/24645 Terraform Merge on Wildcard Tuple] TF, GitHub issue&lt;br /&gt;
* [https://stackoverflow.com/questions/62683298/merge-list-of-objects-in-terraform merge-list-of-objects-in-terraform] Stackoverflow&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workaround&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
policy_parameters = [&lt;br /&gt;
    for key,value in data.azurerm_policy_definition.d_policy_definitions:&lt;br /&gt;
      {&lt;br /&gt;
        parameters = jsondecode(value.parameters)&lt;br /&gt;
      }&lt;br /&gt;
  ]&lt;br /&gt;
  ph_parameters = local.policy_parameters[*].parameters&lt;br /&gt;
  input_parameter = [for item in local.ph_parameters: merge(item,local.ph_parameters...)][0]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Break down&lt;br /&gt;
Extracts the parameter values into a list of JSON values&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
policy_parameters = [&lt;br /&gt;
    for key,value in data.azurerm_policy_definition.d_policy_definitions:&lt;br /&gt;
      {&lt;br /&gt;
        parameters = jsondecode(value.parameters)&lt;br /&gt;
      }&lt;br /&gt;
  ]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reference the parameters as a variable&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
ph_parameters = local.policy_parameters[*].parameters&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Merge all item content into each item.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
input_parameter = [for item in local.ph_parameters: merge(item,local.ph_parameters...)]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The 3rd step gives all items in the list the same value, so we can use any index.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
parameters = &amp;quot;${jsonencode(local.input_parameter[n])}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
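As a sanity check of the workaround's logic, the same spread-and-merge can be sketched in Python (the parameter names are made up for illustration; this is not the Azure provider's real data shape):&lt;br /&gt;

```python
# Illustrative stand-in for 'local.ph_parameters': one single-key dict
# per policy definition
ph_parameters = [
    {"location": {"type": "String"}},
    {"retention": {"type": "Int"}},
]

# The union of all keys, i.e. what merge(local.ph_parameters...) produces
union = {k: v for d in ph_parameters for k, v in d.items()}

# merge(item, local.ph_parameters...) gives every item that same union,
# which is why any index (e.g. [0]) works afterwards
input_parameter = [{**item, **union} for item in ph_parameters][0]

print(sorted(input_parameter))  # ['location', 'retention']
```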
&lt;br /&gt;
== function: replace, regex ==&lt;br /&gt;
The snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match any line that starts with '#' or whitespace(s) + '#'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the search:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
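The same two-pass clean-up (strip comment lines, then strip the blank lines they leave behind) can be sketched with Python's &amp;lt;code&amp;gt;re&amp;lt;/code&amp;gt; module, which supports comparable flags; the template text here is made up:&lt;br /&gt;

```python
import re

# Hypothetical template text, standing in for the output of templatefile()
text = "# comment\nkey: value\n\n  # indented comment\nother: thing\n"

# Match any line that is only optional whitespace followed by a '#' comment
match_comment = re.compile(r"(?m)^[ \t]*#.*$")
# Match lines left empty after comment removal
match_empty_line = re.compile(r"(?m)^\n")

cleaned = match_empty_line.sub("", match_comment.sub("", text))
print(cleaned)  # "key: value\nother: thing\n"
```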
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Syntax Terraform ~0.11 =&lt;br /&gt;
== &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statements ==&lt;br /&gt;
;Terraform ~&amp;lt; 0.9&lt;br /&gt;
Old versions of Terraform don't support if or if-else statements, but we can take advantage of the numeric ''count'' attribute that most resources have, treating booleans as counts:&lt;br /&gt;
 boolean true  = 1&lt;br /&gt;
 boolean false = 0&lt;br /&gt;
&lt;br /&gt;
;Terraform ~0.11+&lt;br /&gt;
Newer versions support conditionals; the syntax is the well-known ternary operation:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 CONDITION ? TRUEVAL  : FALSEVAL&lt;br /&gt;
 CONDITION ? caseTrue : caseFalse&lt;br /&gt;
 domain = &amp;quot;${var.frontend_domain != &amp;quot;&amp;quot; ? var.frontend_domain : var.domain}&amp;quot; # tf &amp;lt;0.12 syntax&lt;br /&gt;
 count = var.image_publisher == &amp;quot;MicrosoftWindowsServer&amp;quot; ? 0 : 3            # tf 0.12+ syntax&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The supported operators are:&lt;br /&gt;
*Equality: == and !=&lt;br /&gt;
*Numerical comparison: &amp;gt;, &amp;lt;, &amp;gt;=, &amp;lt;=&lt;br /&gt;
*Boolean logic: &amp;amp;&amp;amp;, ||, unary !  (|| is  logical OR; “short-circuit” OR)&lt;br /&gt;
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file you pass values for variables that are defined in the module, to create resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that a &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory is created that contains symlinks to the module.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables are a way to pass important data back when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. Once the .tfstate file has been populated, they can also be recalled with the &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview correctness of interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #this causes to always run this resource&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
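Stripped of Terraform, the local-exec above is a plain shell heredoc; the pattern can be tried standalone. The rendered text here is a made-up stand-in for the &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; rendered attribute, and the file is written to a temp dir:

```shell
# Stand-in for ${data.template_file.waf-whitelist-policy.rendered};
# the two policy lines are invented for the demo.
tmp=$(mktemp -d)
rendered='allow 10.0.0.0/8
deny  all'

# Same shape as the provisioner command: cat > file <<EOL ... EOL
cat > "$tmp/waf-policy.output.txt" <<EOL
${rendered}
EOL

cat "$tmp/waf-policy.output.txt"
```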
&lt;br /&gt;
&lt;br /&gt;
Example of creating multiple instances, each with its own rendered user-data template&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* resource &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; is deprecated in favour of &amp;lt;code&amp;gt;data template_file&amp;lt;/code&amp;gt;&lt;br /&gt;
* Terraform 0.12+ offers the new &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function, removing the need for a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
== template json files ==&lt;br /&gt;
For working with JSON structures it's [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function to simplify escaping, delimiters and get validated json in return.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation of the interpolation&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
    indicates that the result to be an array[]               closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
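Outside Terraform, the same transformation (list of bucket names into a JSON array of ARNs) can be sketched in shell, which makes the &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; step easier to reason about. The loop construction below is mine, not Terraform's:

```shell
# Mimic ${jsonencode([for BUCKET in S3BUCKETS : "arn:aws:s3:::${BUCKET}"])}
# S3BUCKETS stands in for var.s3_buckets from the example above.
S3BUCKETS="aaa-bucket-111 bbb-bucket-222"

json="["
sep=""
for BUCKET in $S3BUCKETS; do
  json="${json}${sep}\"arn:aws:s3:::${BUCKET}\""
  sep=", "
done
json="${json}]"

echo "$json"
# -> ["arn:aws:s3:::aaa-bucket-111", "arn:aws:s3:::bbb-bucket-222"]
```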
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The null_resource creates a Terraform-managed resource, saved in the state file like any other, but its only job is to run provisioners such as local-exec and remote-exec, allowing arbitrary code execution. Use it only when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up a dependency. So it depends on completion of another resource &lt;br /&gt;
  #and it won't run if the resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers save computed strings in tfstate file, if value changes on the next run it triggers a resource to be created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destroy&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: By default the local-exec provisioner runs the heredoc script via &amp;lt;code&amp;gt;/bin/sh -c &amp;quot;your heredoc script&amp;quot;&amp;lt;/code&amp;gt;, and the shell does not strip meta-characters such as &amp;quot;double quotes&amp;quot; from substituted values like &amp;lt;tt&amp;gt;$ALBARN&amp;lt;/tt&amp;gt;, which would make the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; fail on a JSON-quoted ARN. Therefore the query output is forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
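This quoting behaviour can be reproduced without Terraform: quote removal applies to quotes you type in the command, not to quotes that arrive inside an expanded variable. The ARN value below is invented for the demo:

```shell
# A JSON-quoted ARN, as `aws ... --output json` might return it.
# The account/ARN is made up for this demo.
json_arn='"arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/demo"'

# Quotes typed in the command are consumed by the shell:
printf '%s\n' "typed quotes"     # -> typed quotes

# Quotes carried inside $json_arn pass through verbatim; aws cli would
# reject such a quoted resource-arn:
printf '%s\n' $json_arn          # -> "arn:aws:...": still quoted

# Forcing --output text avoids this; stripping with tr is another option:
clean=$(printf '%s' "$json_arn" | tr -d '"')
printf '%s\n' "$clean"
```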
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
Create a &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in the $HOME directory and specify the cache directory, or set an environment variable. Then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; to save providers into the shared cache directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
# Option 1.&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Option 2.&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugins-cache&lt;br /&gt;
&lt;br /&gt;
# Create the cache directory&lt;br /&gt;
mkdir $HOME/.terraform.d/plugin-cache&lt;br /&gt;
&lt;br /&gt;
# Delete the per-root-module providers in each .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
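Note that Option 1 uses a quoted delimiter (&lt;code&gt;&amp;lt;&amp;lt;'EOF'&lt;/code&gt;), so &lt;code&gt;$HOME&lt;/code&gt; lands in the file literally and Terraform expands it later; with an unquoted delimiter the shell would expand it at write time. A sketch (writing to a temp dir rather than the real &lt;code&gt;~/.terraformrc&lt;/code&gt;):

```shell
# Quoted vs unquoted heredoc delimiters; files go to a temp dir so the
# real ~/.terraformrc is untouched.
tmp=$(mktemp -d)

cat > "$tmp/rc_literal" <<'EOF'
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache/"
EOF

cat > "$tmp/rc_expanded" <<EOF
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache/"
EOF

grep -c '\$HOME' "$tmp/rc_literal"            # 1: $HOME kept literally
grep -c '\$HOME' "$tmp/rc_expanded" || true   # 0: already expanded by the shell
```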
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=terraform&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache dir is shared by all Terraform projects, provider versioning still works and the normal version constraints apply. To be sure which version is locked for your current project, inspect the SHA256 hashes recorded in the &amp;quot;.terraform&amp;quot; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider recorded in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
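This comparison can be automated with &lt;code&gt;sha256sum -c&lt;/code&gt;, which consumes the same &amp;quot;hash  filename&amp;quot; shape. A sketch against a made-up stand-in provider file:

```shell
# Verify a provider binary against a recorded SHA256, as you would
# against lock.json. The provider file below is a made-up stand-in.
tmp=$(mktemp -d)
printf 'fake provider binary' > "$tmp/terraform-provider-demo_v9.9.9_x4"

# Record the hash in the "hash  filename" form sha256sum -c expects:
( cd "$tmp" && sha256sum terraform-provider-demo_v9.9.9_x4 > SHA256SUMS )

# Later, verify the binary has not changed:
( cd "$tmp" && sha256sum -c SHA256SUMS )   # prints: terraform-provider-demo_v9.9.9_x4: OK
```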
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI Engine name] 'aurora-mysql' refers to engine version 5.7.x, while for version 5.6.10a the engine name is 'aurora'.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=yaml&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
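The install snippet above uses &lt;code&gt;jq -r .tag_name&lt;/code&gt; on the GitHub releases API; where jq is unavailable, a sed fallback works on the same JSON. Shown offline against a canned response fragment (the tag value is invented):

```shell
# Canned fragment of a GitHub /releases/latest response; normally this
# would come from curl as in the install snippet above.
response='{"url": "https://api.github.com/...", "tag_name": "v1.28.1", "name": "v1.28.1"}'

# jq -r .tag_name equivalent with sed:
LATEST=$(printf '%s' "$response" | sed -n 's/.*"tag_name": *"\([^"]*\)".*/\1/p')
echo "$LATEST"   # v1.28.1
```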
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## It looks important that tflint is executed in the terraform root path, thus `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs . &amp;gt; README.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage inframap&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# The most important subcommands are:&lt;br /&gt;
#  generate: generates the graph from STDIN or a file; STDIN can be .tf files/modules or a .tfstate&lt;br /&gt;
#  prune: removes all unnecessary information from the state or HCL (not supported yet) so it can be shared without any security concerns&lt;br /&gt;
&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure as code coverage, and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform. Cloud providers: AWS and GitHub (Azure and GCP on the roadmap for 2021). Spot discrepancies as they happen: driftctl is a free and open-source CLI that warns about infrastructure drift and fills a missing piece in your DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown on live infra&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
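&lt;br /&gt;
The long docker invocations above can be wrapped in a small shell function so only the subcommand varies; a minimal sketch (the function name is ours, image tag as used above):&lt;br /&gt;

```shell
# Wrap the repeated docker invocation so the subcommand and its
# arguments are all that changes between calls, e.g.:
#   infracost_docker breakdown --path /src/plan.json
#   infracost_docker diff      --path /src/plan.json
infracost_docker() {
  docker run -it --rm --volume "$(pwd):/src" -u "$(id -u)" \
    -e INFRACOST_API_KEY="$INFRACOST_API_KEY" \
    infracost/infracost:0.9.15 "$@"
}
```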
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes moved blocks for you so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
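&lt;br /&gt;
For reference, tfautomv generates Terraform &amp;lt;code&amp;gt;moved&amp;lt;/code&amp;gt; blocks (Terraform 1.1+). A minimal illustrative example with hypothetical resource addresses:&lt;br /&gt;

```hcl
# Tells Terraform the resource was renamed in code, so it updates the
# state address instead of destroying and recreating the resource.
# Both addresses here are hypothetical examples.
moved {
  from = aws_instance.webserver
  to   = aws_instance.web
}
```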
&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc.&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7038</id>
		<title>Terraform</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Terraform&amp;diff=7038"/>
		<updated>2024-11-07T22:49:58Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* cache terraform plugins */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article is about using HashiCorp's Terraform tool to build infrastructure as code (IaC).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note| Most of the paragraphs have examples using pre-0.12 Terraform syntax (HCL v1). HCL v2, introduced with v0.12+, brings significant syntax and capability improvements. }}&lt;br /&gt;
&lt;br /&gt;
= Install terraform =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
unzip terraform_0.11.11_linux_amd64.zip&lt;br /&gt;
sudo mv ./terraform /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== [https://github.com/kamatama41/tfenv tfenv] - manage multiple versions of Terraform ==&lt;br /&gt;
Install and usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
git clone https://github.com/tfutils/tfenv.git ~/.tfenv&lt;br /&gt;
echo '[ -d $HOME/.tfenv ] &amp;amp;&amp;amp; export PATH=$PATH:$HOME/.tfenv/bin/' &amp;gt;&amp;gt; ~/.bashrc # or ~/.bash_profile; single quotes defer expansion until the file is sourced&lt;br /&gt;
&lt;br /&gt;
# Use&lt;br /&gt;
tfenv install 1.0.6&lt;br /&gt;
tfenv use 1.0.6&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
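&lt;br /&gt;
tfenv also honours a &amp;lt;code&amp;gt;.terraform-version&amp;lt;/code&amp;gt; file in the project directory, so a repository can pin its own Terraform version for everyone; a minimal sketch:&lt;br /&gt;

```shell
# Pin a Terraform version per repo: tfenv reads .terraform-version
# from the project directory and switches to that version automatically.
cd "$(mktemp -d)"                 # scratch directory for the demo
echo "1.0.6" > .terraform-version
cat .terraform-version            # -> 1.0.6
```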
&lt;br /&gt;
== IDE ==&lt;br /&gt;
For development I use:&lt;br /&gt;
* VSCode with 1.41.1+ (for reference) with extensions:&lt;br /&gt;
** Terraform Autocomplete by erd0s&lt;br /&gt;
** Terraform by Mikael Olenfalk with the Language Server enabled; open the command palette with &amp;lt;code&amp;gt;Ctrl+Shift+P&amp;lt;/code&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200202-153128.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Basic configuration =&lt;br /&gt;
When Terraform runs it looks for .tf files, where the configuration is stored. The lookup is limited to a flat directory and never leaves the directory Terraform runs from; therefore, to reference a common file, create a symbolic link to it inside the directory holding your .tf file.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi example.tf &lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  access_key = &amp;quot;AK01234567890OGD6WGA&amp;quot; &lt;br /&gt;
  secret_key = &amp;quot;N8012345678905acCY6XIc1bYjsvvlXHUXMaxOzN&amp;quot;&lt;br /&gt;
  region     = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami           = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since version 0.10.x major changes and features have been introduced, including splitting providers out of the core binary; each provider is now a separate binary. See below for an example with the Azure provider and other HashiCorp-maintained providers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Azure ==&lt;br /&gt;
Terraform credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export ARM_SUBSCRIPTION_ID=&amp;quot;YOUR_SUBSCRIPTION_ID&amp;quot;&lt;br /&gt;
export ARM_TENANT_ID=&amp;quot;TENANT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_ID=&amp;quot;CLIENT_ID&amp;quot;&lt;br /&gt;
export ARM_CLIENT_SECRET=&amp;quot;CLIENT_SECRET&amp;quot;&lt;br /&gt;
export TF_VAR_client_id=${ARM_CLIENT_ID}&lt;br /&gt;
export TF_VAR_client_secret=${ARM_CLIENT_SECRET}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example, how to source credentials&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export VAULT_CLIENT_ADDR=http://10.1.1.1:8200&lt;br /&gt;
export VAULT_TOKEN=11111111-1111-1111-1111-1111111111111&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/subscription   | jq -r '.data | .subscription_id, .tenant_id'&lt;br /&gt;
vault read -format=json -address=$VAULT_CLIENT_ADDR secret/azure/${application} | jq -r '.data | .client_id, .client_secret'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
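&lt;br /&gt;
To wire the two-line jq output above into the ARM_ exports, command substitution is enough. A hedged sketch with the Vault call stubbed out (function name and values are ours, so it runs without a Vault server):&lt;br /&gt;

```shell
#!/usr/bin/env sh
# Stand-in for the 'vault read ... | jq -r' pipeline above, so the
# export wiring can be shown without a Vault server; values are dummies.
fetch_subscription() { printf '%s\n%s\n' "sub-123" "tenant-456"; }

out="$(fetch_subscription)"
ARM_SUBSCRIPTION_ID="$(printf '%s\n' "$out" | sed -n 1p)"  # first jq line
ARM_TENANT_ID="$(printf '%s\n' "$out" | sed -n 2p)"        # second jq line
export ARM_SUBSCRIPTION_ID ARM_TENANT_ID
echo "$ARM_SUBSCRIPTION_ID $ARM_TENANT_ID"   # -> sub-123 tenant-456
```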
&lt;br /&gt;
&lt;br /&gt;
Terraform providers, modules and backend config&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
$ vi providers.tf&lt;br /&gt;
provider &amp;quot;azurerm&amp;quot; {&lt;br /&gt;
  version         = &amp;quot;1.10.0&amp;quot;&lt;br /&gt;
  subscription_id = &amp;quot;${var.subscription_id}&amp;quot;&lt;br /&gt;
  tenant_id       = &amp;quot;${var.tenant_id}&amp;quot;&lt;br /&gt;
  client_id       = &amp;quot;${var.client_id}&amp;quot;&lt;br /&gt;
  client_secret   = &amp;quot;${var.client_secret}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# HashiCorp special providers https://github.com/terraform-providers&lt;br /&gt;
provider &amp;quot;template&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;external&amp;quot; { version = &amp;quot;1.0.0&amp;quot; }&lt;br /&gt;
provider &amp;quot;local&amp;quot;    { version = &amp;quot;1.1.0&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
terraform {&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== AWS ==&lt;br /&gt;
;References&lt;br /&gt;
*[https://www.padok.fr/en/blog/terraform-s3-bucket-aws S3 bucket for all accounts]&lt;br /&gt;
*[https://www.padok.fr/en/blog/authentication-aws-profiles Multi account auth using aws profiles and &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt;]&lt;br /&gt;
=== Local state ===&lt;br /&gt;
Local state configuration&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
vi backend.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot; # the terraform block takes required_version; there is no 'version' argument&lt;br /&gt;
  backend &amp;quot;local&amp;quot; {}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Remote state (single) for multi account deployments ===&lt;br /&gt;
There are many ways to combine backend and AWS credentials configuration. The important thing to understand is that the &amp;lt;code&amp;gt;terraform { backend {} }&amp;lt;/code&amp;gt; block does NOT use the &amp;lt;code&amp;gt;provider &amp;quot;aws&amp;quot; {}&amp;lt;/code&amp;gt; configuration to access the state bucket; it relies only on its own backend settings.&lt;br /&gt;
* exporting credentials allows assuming roles that differ between the backend and provider blocks&lt;br /&gt;
* alternatively, specify a different &amp;lt;code&amp;gt;profile = &amp;lt;/code&amp;gt; in each block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Credentials&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
## profile allows assumes roles in other accounts&lt;br /&gt;
#export AWS_PROFILE=&amp;quot;piotr&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Environment credentials for a user that can assume roles (eg. ) in other accounts:&lt;br /&gt;
#          | * arn:aws:iam::111111111111:role/terraform-s3state              - save state in s3 bucket&lt;br /&gt;
#          | * arn:aws:iam::222222222222:role/terraform-crossaccount-admin   - deploy resources&lt;br /&gt;
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE&lt;br /&gt;
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY&lt;br /&gt;
export AWS_DEFAULT_REGION=us-east-1&lt;br /&gt;
&lt;br /&gt;
# unset all of them if need to &lt;br /&gt;
unset ${!AWS@}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
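&lt;br /&gt;
The &amp;lt;code&amp;gt;${!AWS@}&amp;lt;/code&amp;gt; in the last line is bash's prefix expansion: it expands to the names of all variables starting with AWS, which &amp;lt;code&amp;gt;unset&amp;lt;/code&amp;gt; then removes in one go. A self-contained demonstration with a neutral prefix:&lt;br /&gt;

```shell
# ${!PREFIX@} is a bash feature, so run the demo explicitly under bash.
bash -c '
  DEMO_KEY=abc
  DEMO_REGION=us-east-1
  echo "${!DEMO_@}"          # -> DEMO_KEY DEMO_REGION
  unset "${!DEMO_@}"         # removes every DEMO_* variable at once
  echo "${DEMO_KEY:-gone}"   # -> gone
'
```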
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;terraform {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.12.29&amp;quot; # the terraform block takes required_version; there is no 'version' argument&lt;br /&gt;
# profile &amp;quot;dev-us&amp;quot; # we use 'role_arn' but could specify aws profile instead&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; { # NOTE: backend blocks cannot interpolate variables; the values below are illustrative and must be literal or passed via -backend-config&lt;br /&gt;
    bucket  = &amp;quot;tfstate-${var.project}-${var.account-id}&amp;quot; # must exist beforehand&lt;br /&gt;
    key     = &amp;quot;terraform/aws/${var.project}/tfstate&amp;quot;     # this could be much simpler when working with terraform workspaces&lt;br /&gt;
    region  = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::111111111111:role/terraform-s3state&amp;quot; # role to assume in an infra account that the s3 state exists&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;provider {}&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
## We could use profiles but instead we use 'assume_role' option. Also on your laptop &lt;br /&gt;
## it should be your creds profile eg. 'piotr-xaccount-admin'&lt;br /&gt;
#profile = &amp;quot;terraform-crossaccount-admin&amp;quot;&lt;br /&gt;
#shared_credentials_file = &amp;quot;/home/piotr/.aws/credentials&amp;quot;&lt;br /&gt;
  assume_role {  # block syntax - not 'assume_role = {'&lt;br /&gt;
    role_arn  = &amp;quot;arn:aws:iam::&amp;lt;MY_PROD_ACCOUNT&amp;gt;:role/terraform-crossaccount-admin&amp;quot;       # assume role in target account&lt;br /&gt;
#   role_arn  = &amp;quot;arn:aws:iam::${var.aws_account}:role/terraform-crossaccount-admin&amp;quot; # or via a variable; only one role_arn may be set&lt;br /&gt;
  }&lt;br /&gt;
  region  = var.aws_region&lt;br /&gt;
  allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ] # safety net&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspace configuration&lt;br /&gt;
Dev configuration in &amp;lt;code&amp;gt;dev.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_DEV_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Prod configuration in &amp;lt;code&amp;gt;prod.tfvars&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
aws_region  = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
aws_account = &amp;quot;&amp;lt;MY_PROD_ACCOUNT&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workspaces&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform init&lt;br /&gt;
terraform workspace new dev&lt;br /&gt;
terraform workspace new prod&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Apply on one account&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
terraform workspace select dev&lt;br /&gt;
terraform apply --var-file $(terraform workspace show).tfvars&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
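&lt;br /&gt;
The &amp;lt;code&amp;gt;$(terraform workspace show).tfvars&amp;lt;/code&amp;gt; trick uses command substitution to pick the var-file matching the active workspace. The naming logic is plain shell, sketched here with &amp;lt;code&amp;gt;terraform&amp;lt;/code&amp;gt; stubbed out so it runs without Terraform installed:&lt;br /&gt;

```shell
#!/usr/bin/env sh
# Stub 'terraform workspace show' so the var-file naming logic
# can be exercised without Terraform installed.
terraform() { echo "dev"; }

var_file="$(terraform workspace show).tfvars"
echo "$var_file"    # -> dev.tfvars
# terraform apply --var-file "$var_file"   # real usage, with the stub removed
```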
&lt;br /&gt;
== GCP Google Cloud Platform ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Generate default app credentials&lt;br /&gt;
&lt;br /&gt;
gcloud auth application-default login&lt;br /&gt;
Go to the following link in your browser:&lt;br /&gt;
https://accounts.google.com/o/oauth2/auth?response_type=code&amp;amp;client_id=****_challenge_method=S256&lt;br /&gt;
Enter verification code: ***&lt;br /&gt;
Credentials saved to file: [/home/piotr/.config/gcloud/application_default_credentials.json]&lt;br /&gt;
&lt;br /&gt;
These credentials will be used by any library that requests Application Default Credentials (ADC).&lt;br /&gt;
Quota project &amp;quot;test-devops-candidate1&amp;quot; was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Plan / apply =&lt;br /&gt;
== Meaning of markings in a plan output ==&lt;br /&gt;
For now, here they are, until they are covered better in the docs:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; create&lt;br /&gt;
* &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; destroy&lt;br /&gt;
* &amp;lt;code&amp;gt;-/+&amp;lt;/code&amp;gt; replace (destroy and then create, or vice-versa if create-before-destroy is used)&lt;br /&gt;
* &amp;lt;code&amp;gt;~&amp;lt;/code&amp;gt; update in-place&lt;br /&gt;
* &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt; applies only to data resources. You won't see this one often, because whenever possible Terraform does reads during the refresh phase. You will see it, though, if you have a data resource whose configuration depends on something that we don't know yet, such as an attribute of a resource that isn't yet created. In that case, it's necessary to wait until apply time to find out the final configuration before doing the read.&lt;br /&gt;
&lt;br /&gt;
== Plan and apply ==&lt;br /&gt;
On its first run, the apply stage creates terraform.tfstate once all changes are done. This file should not be modified manually. It records what already exists in the cloud, so the next time apply runs it compares against this file and executes only the necessary changes.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Terraform plan and apply&lt;br /&gt;
|- &lt;br /&gt;
! terraform plan&lt;br /&gt;
! terraform apply&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform plan&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
   ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
   associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
   ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   key_name:                    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
   subnet_id:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
   vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source&amp;gt;$ terraform apply&lt;br /&gt;
aws_instance.webserver: Creating...&lt;br /&gt;
 ami:                         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
 associate_public_ip_address: &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 availability_zone:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ebs_block_device.#:          &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 ephemeral_block_device.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_state:              &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 instance_type:               &amp;quot;&amp;quot; =&amp;gt; &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
 ipv6_addresses.#:            &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 key_name:                    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 network_interface_id:        &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 placement_group:             &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_dns:                 &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 private_ip:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_dns:                  &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 public_ip:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 root_block_device.#:         &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 security_groups.#:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 source_dest_check:           &amp;quot;&amp;quot; =&amp;gt; &amp;quot;true&amp;quot;&lt;br /&gt;
 subnet_id:                   &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 tenancy:                     &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
 vpc_security_group_ids.#:    &amp;quot;&amp;quot; =&amp;gt; &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
aws_instance.webserver: Still creating... (10s elapsed)&lt;br /&gt;
aws_instance.webserver: Creation complete (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
The state of your infrastructure has been saved to the path&lt;br /&gt;
below. This state is required to modify and destroy your&lt;br /&gt;
infrastructure, so keep it safe. To inspect the complete state&lt;br /&gt;
use the `terraform show` command.&lt;br /&gt;
&lt;br /&gt;
State path:  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Show and destroy ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform show&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-0eb33af34b94d1a78&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
 associate_public_ip_address = true&lt;br /&gt;
 availability_zone = eu-west-1c&lt;br /&gt;
 disable_api_termination = false&lt;br /&gt;
(...)&lt;br /&gt;
 source_dest_check = true&lt;br /&gt;
 subnet_id = subnet-92a4bbf6&lt;br /&gt;
 tags.% = 0&lt;br /&gt;
 tenancy = default&lt;br /&gt;
 vpc_security_group_ids.# = 1&lt;br /&gt;
 vpc_security_group_ids.1039819662 = sg-5201fb2b&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
Do you really want to destroy?&lt;br /&gt;
 Terraform will delete all your managed infrastructure.&lt;br /&gt;
 There is no undo. Only 'yes' will be accepted to confirm.&lt;br /&gt;
 Enter a value: yes&lt;br /&gt;
aws_instance.webserver: Refreshing state... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Destroying... (ID: i-0eb33af34b94d1a78)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 10s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 20s elapsed)&lt;br /&gt;
aws_instance.webserver: Still destroying... (ID: i-0eb33af34b94d1a78, 30s elapsed)&lt;br /&gt;
aws_instance.webserver: Destruction complete&lt;br /&gt;
 &lt;br /&gt;
Destroy complete! Resources: 1 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the instance has been terminated the terraform.tfstate looks like below:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
 {&lt;br /&gt;
     &amp;quot;version&amp;quot;: 3,&lt;br /&gt;
     &amp;quot;terraform_version&amp;quot;: &amp;quot;0.9.1&amp;quot;,&lt;br /&gt;
     &amp;quot;serial&amp;quot;: 1,&lt;br /&gt;
     &amp;quot;lineage&amp;quot;: &amp;quot;c22ccad7-ff26-4b8a-bf19-819477b45202&amp;quot;,&lt;br /&gt;
     &amp;quot;modules&amp;quot;: [&lt;br /&gt;
         {&lt;br /&gt;
             &amp;quot;path&amp;quot;: [&lt;br /&gt;
                 &amp;quot;root&amp;quot;&lt;br /&gt;
             ],&lt;br /&gt;
             &amp;quot;outputs&amp;quot;: {},&lt;br /&gt;
             &amp;quot;resources&amp;quot;: {},&lt;br /&gt;
             &amp;quot;depends_on&amp;quot;: []&lt;br /&gt;
         }&lt;br /&gt;
     ]&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
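&lt;br /&gt;
The state file is plain JSON, so it can be inspected with jq (assuming jq is installed). A sketch that recreates the emptied state shown above and queries it:&lt;br /&gt;

```shell
# Recreate the emptied state file shown above and query it with jq.
cd "$(mktemp -d)"    # scratch directory for the demo
cat > terraform.tfstate <<'EOF'
{
  "version": 3,
  "terraform_version": "0.9.1",
  "serial": 1,
  "lineage": "c22ccad7-ff26-4b8a-bf19-819477b45202",
  "modules": [
    { "path": ["root"], "outputs": {}, "resources": {}, "depends_on": [] }
  ]
}
EOF
jq -r '.terraform_version' terraform.tfstate           # -> 0.9.1
jq '.modules[0].resources | length' terraform.tfstate  # -> 0 (nothing left)
```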
&lt;br /&gt;
= AWS credentials profiles and variable files =&lt;br /&gt;
Instead of referencing access/secret keys directly within the .tf file, we can use an AWS profiles file. It is looked up via the profile variable we specify in the variables.tf file. Note: the values take '''no double quotes'''.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi ~/.aws/credentials    #AWS credentials file with named profiles&lt;br /&gt;
[terraform-profile1]       #profile name&lt;br /&gt;
aws_access_key_id     = AAAAAAAAAAA&lt;br /&gt;
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We can then remove the access/secret keys from the main .tf file (example.tf) and amend it as follows:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi provider.tf&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot; # 'version' and 'region' are not valid terraform-block arguments; region belongs in the provider block&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {}  # in this case all s3 details are passed as ENV vars&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  version    =   &amp;quot;~&amp;gt; 1.57&amp;quot;&lt;br /&gt;
# Static credentials - provided directly&lt;br /&gt;
  access_key = &amp;quot;AAAAAAAAAAA&amp;quot;&lt;br /&gt;
  secret_key = &amp;quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Shared Credentials file - $HOME/.aws/credentials, static credentials are not needed then&lt;br /&gt;
# profile                 = &amp;quot;terraform-profile1&amp;quot;           #profile name in credentials file, acc 111111111111&lt;br /&gt;
# shared_credentials_file = &amp;quot;/home/user1/.aws/credentials&amp;quot; #if different than default&lt;br /&gt;
&lt;br /&gt;
# If specified, assume role in another account using the user credentials&lt;br /&gt;
# defined in the profile above&lt;br /&gt;
# assume_role {&lt;br /&gt;
#   role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot; #variable version&lt;br /&gt;
#   role_arn     = &amp;quot;arn:aws:iam::222222222222:role/CrossAccountSignin_Terraform&amp;quot;&lt;br /&gt;
# }&lt;br /&gt;
# allowed_account_ids = [ &amp;quot;111111111111&amp;quot;, &amp;quot;222222222222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
provider &amp;quot;template&amp;quot; {&lt;br /&gt;
  version = &amp;quot;~&amp;gt; 1.0.0&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
and create a variable file to reference it&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi variables.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; {&lt;br /&gt;
  default = &amp;quot;eu-west-1&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
variable &amp;quot;profile&amp;quot; {} # a variable without a default value prompts for input; here it should be 'terraform-profile1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run terraform&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform plan -var 'profile=terraform-profile1'  #this way value can be set&lt;br /&gt;
$ terraform plan -destroy -input=false&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
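&lt;br /&gt;
Besides the &amp;lt;code&amp;gt;-var&amp;lt;/code&amp;gt; flag, Terraform also reads variables from environment variables named &amp;lt;code&amp;gt;TF_VAR_name&amp;lt;/code&amp;gt;, which avoids both the prompt and the flag; a minimal sketch:&lt;br /&gt;

```shell
# Terraform reads TF_VAR_<name> environment variables as values for
# variable "<name>", so no prompt or -var flag is needed.
export TF_VAR_profile=terraform-profile1
echo "$TF_VAR_profile"      # -> terraform-profile1
# terraform plan            # would now use var.profile = "terraform-profile1"
```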
&lt;br /&gt;
= AWS example =&lt;br /&gt;
Prerequisites are:&lt;br /&gt;
*the ~/.aws/credentials file exists&lt;br /&gt;
*variables.tf exists, with the content below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you remove &amp;lt;tt&amp;gt;default&amp;lt;/tt&amp;gt; value you will be prompted for it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;inputs.tf&amp;lt;/code&amp;gt; also known as a variable file.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vi inputs.tf&lt;br /&gt;
variable &amp;quot;region&amp;quot; { default = &amp;quot;eu-west-1&amp;quot;  } &lt;br /&gt;
variable &amp;quot;profile&amp;quot; {&lt;br /&gt;
       description = &amp;quot;Provide AWS credentials profile you want to use, saved in ~/.aws/credentials file&amp;quot;&lt;br /&gt;
       default     = &amp;quot;terraform-profile&amp;quot; }&lt;br /&gt;
variable &amp;quot;key_name&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Provide the name of the SSH private key file; ~/.ssh will be searched.&lt;br /&gt;
This is the key associated with the IAM user in AWS. Example: id_rsa&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;id_rsa&amp;quot; }&lt;br /&gt;
variable &amp;quot;public_key_path&amp;quot; {&lt;br /&gt;
        description = &amp;lt;&amp;lt;DESCRIPTION&lt;br /&gt;
Path to the SSH public keys for authentication. This key will be injected&lt;br /&gt;
into all ec2 instances created by Terraform.&lt;br /&gt;
Example: ~/.ssh/terraform.pub&lt;br /&gt;
DESCRIPTION&lt;br /&gt;
        default     = &amp;quot;~/.ssh/id_rsa.pub&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
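Instead of editing the defaults, the variables above can also be overridden in a tfvars file. A hypothetical terraform.tfvars (all values here are example assumptions):

```terraform
# terraform.tfvars - loaded automatically; overrides the defaults in inputs.tf
region          = "eu-west-2"
profile         = "terraform-profile"
key_name        = "id_rsa"
public_key_path = "~/.ssh/id_rsa.pub"
```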
&lt;br /&gt;
&lt;br /&gt;
Terraform .tf file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vi example.tf&lt;br /&gt;
provider &amp;quot;aws&amp;quot; {&lt;br /&gt;
  region = &amp;quot;${var.region}&amp;quot;&lt;br /&gt;
  profile = &amp;quot;${var.profile}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  cidr_block = &amp;quot;10.0.0.0/16&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create an internet gateway to give our subnet access to the open internet&lt;br /&gt;
resource &amp;quot;aws_internet_gateway&amp;quot; &amp;quot;internet-gateway&amp;quot; {&lt;br /&gt;
  vpc_id = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Give the VPC internet access on its main route table&lt;br /&gt;
resource &amp;quot;aws_route&amp;quot; &amp;quot;internet_access&amp;quot; {&lt;br /&gt;
  route_table_id         = &amp;quot;${aws_vpc.vpc.main_route_table_id}&amp;quot;&lt;br /&gt;
  destination_cidr_block = &amp;quot;0.0.0.0/0&amp;quot;&lt;br /&gt;
  gateway_id             = &amp;quot;${aws_internet_gateway.internet-gateway.id}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
# Create a subnet to launch our instances into&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  vpc_id                  = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
  cidr_block              = &amp;quot;10.0.1.0/24&amp;quot;&lt;br /&gt;
  map_public_ip_on_launch = true&lt;br /&gt;
&lt;br /&gt;
  tags {&lt;br /&gt;
    Name = &amp;quot;Public&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
# Our default security group to access&lt;br /&gt;
# instances over SSH and HTTP&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;default&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;terraform_securitygroup&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # SSH access from anywhere&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 22&lt;br /&gt;
    to_port     = 22&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # HTTP access from the VPC&lt;br /&gt;
  ingress {&lt;br /&gt;
    from_port   = 80&lt;br /&gt;
    to_port     = 80&lt;br /&gt;
    protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;10.0.0.0/16&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
  # outbound internet access&lt;br /&gt;
  egress {&lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot; # all protocols&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_key_pair&amp;quot; &amp;quot;auth&amp;quot; {&lt;br /&gt;
  key_name   = &amp;quot;${var.key_name}&amp;quot;&lt;br /&gt;
  public_key = &amp;quot;${file(var.public_key_path)}&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;webserver&amp;quot; {&lt;br /&gt;
  ami = &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
  instance_type = &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  key_name = &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
  vpc_security_group_ids = [&amp;quot;${aws_security_group.default.id}&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
  # We're going to launch into the public subnet for this.&lt;br /&gt;
  # Normally, in production environments, webservers would be in&lt;br /&gt;
  # private subnets.&lt;br /&gt;
  subnet_id = &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # The connection block tells our provisioner how to&lt;br /&gt;
  # communicate with the instance&lt;br /&gt;
  connection {&lt;br /&gt;
    user = &amp;quot;ubuntu&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
  # We run a remote provisioner on the instance after creating it &lt;br /&gt;
  # to install Nginx. By default, this should be on port 80&lt;br /&gt;
  provisioner &amp;quot;remote-exec&amp;quot; {&lt;br /&gt;
    inline = [&lt;br /&gt;
      &amp;quot;sudo apt-get -y update&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo apt-get -y install nginx&amp;quot;,&lt;br /&gt;
      &amp;quot;sudo service nginx start&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
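A small outputs file (a hypothetical addition, not part of the original example) makes the instance address easy to retrieve after apply:

```terraform
# outputs.tf - expose attributes of resources defined in example.tf
output "webserver_public_ip" {
  value = "${aws_instance.webserver.public_ip}"
}
```

After an apply the value is printed at the end of the run and can be read back later with terraform output webserver_public_ip.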
&lt;br /&gt;
== Run a plan ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform plan&lt;br /&gt;
var.key_name&lt;br /&gt;
  Name of the AWS key pair&lt;br /&gt;
&lt;br /&gt;
  Enter a value: id_rsa        #name of the key_pair&lt;br /&gt;
&lt;br /&gt;
var.profile&lt;br /&gt;
  AWS credentials profile you want to use&lt;br /&gt;
&lt;br /&gt;
  Enter a value: terraform-profile   #aws profile in ~/.aws/credentials file&lt;br /&gt;
&lt;br /&gt;
var.public_key_path&lt;br /&gt;
  Path to the SSH public keys for authentication.&lt;br /&gt;
  Example: ~/.ssh/terraform.pub&lt;br /&gt;
&lt;br /&gt;
  Enter a value: ~/.ssh/id_rsa.pub  #path to the matching public key&lt;br /&gt;
&lt;br /&gt;
Refreshing Terraform state in-memory prior to plan...&lt;br /&gt;
The refreshed state will be used to calculate this plan, but will not be&lt;br /&gt;
persisted to local or remote state storage.&lt;br /&gt;
&lt;br /&gt;
The Terraform execution plan has been generated and is shown below.&lt;br /&gt;
Resources are shown in alphabetical order for quick scanning. Green resources&lt;br /&gt;
will be created (or destroyed and then created if an existing resource&lt;br /&gt;
exists), yellow resources are being changed in-place, and red resources&lt;br /&gt;
will be destroyed. Cyan entries are data sources to be read.&lt;br /&gt;
&lt;br /&gt;
+ aws_instance.webserver&lt;br /&gt;
    ami:                         &amp;quot;ami-405f7226&amp;quot;&lt;br /&gt;
    associate_public_ip_address: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    availability_zone:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ebs_block_device.#:          &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    ephemeral_block_device.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_state:              &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    instance_type:               &amp;quot;t2.nano&amp;quot;&lt;br /&gt;
    ipv6_addresses.#:            &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:                    &amp;quot;${aws_key_pair.auth.id}&amp;quot;&lt;br /&gt;
    network_interface_id:        &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    placement_group:             &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_dns:                 &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    private_ip:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_dns:                  &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    public_ip:                   &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    root_block_device.#:         &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    security_groups.#:           &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    source_dest_check:           &amp;quot;true&amp;quot;&lt;br /&gt;
    subnet_id:                   &amp;quot;${aws_subnet.default.id}&amp;quot;&lt;br /&gt;
    tenancy:                     &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    vpc_security_group_ids.#:    &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_internet_gateway.internet-gateway&lt;br /&gt;
    vpc_id: &amp;quot;${aws_vpc.vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
+ aws_key_pair.auth&lt;br /&gt;
    fingerprint: &amp;quot;&amp;lt;computed&amp;gt;&amp;quot;&lt;br /&gt;
    key_name:    &amp;quot;id_rsa&amp;quot;&lt;br /&gt;
    public_key:  &amp;quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfc piotr@ubuntu&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...omitted...&lt;br /&gt;
 &lt;br /&gt;
Plan: 7 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Plan a single target&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform plan -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform apply ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply&lt;br /&gt;
$&amp;gt; terraform show # show current resources in the state file&lt;br /&gt;
aws_instance.webserver:&lt;br /&gt;
 id = i-09c1c665cef284235&lt;br /&gt;
 ami = ami-405f7226&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_security_group.default:&lt;br /&gt;
 id = sg-b14bb1c8&lt;br /&gt;
 description = Used for public instances&lt;br /&gt;
 egress.# = 1&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_subnet.default:&lt;br /&gt;
 id = subnet-6f4f510b&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
aws_vpc.vpc:&lt;br /&gt;
 id = vpc-9ba0b7ff&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Apply a single resource using &amp;lt;code&amp;gt;-target &amp;lt;resource&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform apply -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Terraform destroy ==&lt;br /&gt;
Run destroy command to delete all resources that were created&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform destroy&lt;br /&gt;
&lt;br /&gt;
aws_key_pair.auth: Refreshing state... (ID: id_rsa)&lt;br /&gt;
aws_vpc.vpc: Refreshing state... (ID: vpc-9ba0b7ff)&lt;br /&gt;
&amp;lt;...&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Destroy complete! Resources: 7 destroyed.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Destroy a single resource - targeting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; terraform show&lt;br /&gt;
$&amp;gt; terraform destroy -target=aws_ami_from_instance.golden&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Terraform taint ==&lt;br /&gt;
Get a resource list&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform state list&lt;br /&gt;
# select an item from the list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.11: a resource index must be addressed as e.g. &amp;lt;code&amp;gt;aws_instance.main.0&amp;lt;/code&amp;gt;, not &amp;lt;code&amp;gt;aws_instance.main[0]&amp;lt;/code&amp;gt;. It's not possible to taint a whole module&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint -module=&amp;lt;MODULE_NAME&amp;gt; aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Version 0.12: resources and modules can be addressed in a more natural way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform taint module.MODULE_NAME.aws_instance.main.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Use ansible from Terraform - Provision using Ansible =&lt;br /&gt;
Unsure if this is the best approach, because it is unclear how to store the state of a local-exec Ansible run. It could be set to always run, as Ansible playbooks are idempotent. Example: https://github.com/dzeban/c10k/blob/master/infrastructure/main.tf&lt;br /&gt;
&lt;br /&gt;
= Debug =&lt;br /&gt;
== Output complex object ==&lt;br /&gt;
Often it is required to inspect a data structure that is an output of a &amp;lt;tt&amp;gt;resource&amp;lt;/tt&amp;gt;, a &amp;lt;tt&amp;gt;data&amp;lt;/tt&amp;gt; source, or simply a template whose computed value is not normally displayed on your screen. You can use the following techniques to inspect your code's output:&lt;br /&gt;
&lt;br /&gt;
;Output and [https://www.terraform.io/docs/providers/null/resource.html null_resource] - empty virtual container that can run any arbitrary commands&lt;br /&gt;
* '''Problem statement:''' Display a computed Terraform &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt;&lt;br /&gt;
* '''Solution:''' Use &amp;lt;code&amp;gt;null_resource&amp;lt;/code&amp;gt; to render the template; the rendered template will be shown in a &amp;lt;tt&amp;gt;plan&amp;lt;/tt&amp;gt;. If the template is a JSON policy, an invalid policy fails and you cannot see why. The plan shows the object being constructed; running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt; saves it into the state file as an output variable. The object can then be re-used for further transformations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;Terraform&amp;quot;&amp;gt;&lt;br /&gt;
data &amp;quot;aws_caller_identity&amp;quot; &amp;quot;current&amp;quot; {}&lt;br /&gt;
&lt;br /&gt;
# resource &amp;quot;aws_kms_key&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
#  policy = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, ... # debugging policy with &lt;br /&gt;
# }                                                                           # null_resource and ouput&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_kms_alias&amp;quot; &amp;quot;secretmanager&amp;quot; {&lt;br /&gt;
  name          = &amp;quot;alias/secretmanager&amp;quot;&lt;br /&gt;
  target_key_id = aws_kms_key.secretmanager.key_id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
    policytest = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    })&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;policy&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;./templates/kms_secretmanager.policy.json.tpl&amp;quot;, &lt;br /&gt;
    {&lt;br /&gt;
      arns_json                 = jsonencode(var.crossAccountIamUsers_arns)&lt;br /&gt;
      currentAccountId          = data.aws_caller_identity.current.account_id&lt;br /&gt;
      crossAccountAccessEnabled = length([var.crossAccountIamUsers_arns]) &amp;gt; 0 ? true : false&lt;br /&gt;
    }&lt;br /&gt;
  )&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Policy template file &amp;lt;code&amp;gt;./templates/kms_secretmanager.policy.json.tpl&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::${currentAccountId}:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
%{ if crossAccountAccessEnabled == true ~}&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: ${arns_json}&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
%{ endif ~}&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Run&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform apply -var-file=test.tfvars -target null_resource.policytest # -var-file contains 'var.crossAccountIamUsers_arns' list variable&lt;br /&gt;
&lt;br /&gt;
Terraform will perform the following actions:&lt;br /&gt;
&lt;br /&gt;
  # null_resource.policytest will be created&lt;br /&gt;
  + resource &amp;quot;null_resource&amp;quot; &amp;quot;policytest&amp;quot; {&lt;br /&gt;
      + id       = (known after apply)&lt;br /&gt;
      + triggers = {&lt;br /&gt;
          + &amp;quot;policytest&amp;quot; = jsonencode(&lt;br /&gt;
                {&lt;br /&gt;
                  + Id        = &amp;quot;key-consolepolicy-1&amp;quot;&lt;br /&gt;
                  + Statement = [&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = &amp;quot;kms:*&amp;quot;&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Enable IAM User Permissions&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                      + {&lt;br /&gt;
                          + Action    = [&lt;br /&gt;
                              + &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                              + &amp;quot;kms:DescribeKey&amp;quot;,&lt;br /&gt;
                            ]&lt;br /&gt;
                          + Effect    = &amp;quot;Allow&amp;quot;&lt;br /&gt;
                          + Principal = {&lt;br /&gt;
                              + AWS = [&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&lt;br /&gt;
                                  + &amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;,&lt;br /&gt;
                                ]&lt;br /&gt;
                            }&lt;br /&gt;
                          + Resource  = &amp;quot;*&amp;quot;&lt;br /&gt;
                          + Sid       = &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;&lt;br /&gt;
                        },&lt;br /&gt;
                    ]&lt;br /&gt;
                  + Version   = &amp;quot;2012-10-17&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            )&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
Plan: 1 to add, 0 to change, 0 to destroy.&lt;br /&gt;
&lt;br /&gt;
Do you want to perform these actions?&lt;br /&gt;
  Terraform will perform the actions described above.&lt;br /&gt;
  Only 'yes' will be accepted to approve.&lt;br /&gt;
&lt;br /&gt;
  Enter a value: yes # &amp;lt;- manual input&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
policy = {&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Id&amp;quot;: &amp;quot;key-consolepolicy-1&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Enable IAM User Permissions&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: &amp;quot;arn:aws:iam::111111111111:root&amp;quot;&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;kms:*&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Sid&amp;quot;: &amp;quot;Allow cross-accounts retrieve secrets&amp;quot;,&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Principal&amp;quot;: {&lt;br /&gt;
                &amp;quot;AWS&amp;quot;: [&amp;quot;arn:aws:iam::111111111111:user/dev&amp;quot;,&amp;quot;arn:aws:iam::111111111111:user/test&amp;quot;]&lt;br /&gt;
            },&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;kms:Decrypt&amp;quot;,&lt;br /&gt;
                &amp;quot;kms:DescribeKey&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug and analyze logs ==&lt;br /&gt;
We are going to enable logging to a file in Terraform, convert the log file to PDF, and use sheri.ai to analyse it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Pre req - Ubuntu 22.04&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install ghostscript # for ps2pdf converter&lt;br /&gt;
&lt;br /&gt;
# Set Terraform logging&lt;br /&gt;
export TF_LOG=TRACE # DEBUG&lt;br /&gt;
export TF_LOG_PATH=/tmp/tflogs.log&lt;br /&gt;
&lt;br /&gt;
terraform plan|apply&lt;br /&gt;
vim $TF_LOG_PATH -c &amp;quot;hardcopy &amp;gt; ${TF_LOG_PATH}.ps | q&amp;quot;; ps2pdf ${TF_LOG_PATH}.ps ${TF_LOG_PATH}-$(echo $(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)).pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Debug using &amp;lt;code&amp;gt;terraform console&amp;lt;/code&amp;gt;==&lt;br /&gt;
This command provides an interactive command-line console for evaluating and experimenting with expressions. This is useful for testing interpolations before using them in configurations, and for interacting with any values currently saved in state. Terraform console will read configured state even if it is remote.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
$&amp;gt; terraform console #-state=path # note I have 'tfstate' available; this could be remote state&lt;br /&gt;
&amp;gt; var.vpc_cidr       # &amp;lt;- new syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; &amp;quot;${var.vpc_cidr}&amp;quot;  # &amp;lt;- old syntax&lt;br /&gt;
10.123.0.0/16&lt;br /&gt;
&amp;gt; aws_security_group.tf_public_sg.id   # interpolate from state&lt;br /&gt;
sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;gt; help&lt;br /&gt;
The Terraform console allows you to experiment with Terraform interpolations.&lt;br /&gt;
You may access resources in the state (if you have one) just as you would&lt;br /&gt;
from a configuration. For example: &amp;quot;aws_instance.foo.id&amp;quot; would evaluate&lt;br /&gt;
to the ID of &amp;quot;aws_instance.foo&amp;quot; if it exists in your state.&lt;br /&gt;
&lt;br /&gt;
Type in the interpolation to test and hit &amp;lt;enter&amp;gt; to see the result.&lt;br /&gt;
&lt;br /&gt;
To exit the console, type &amp;quot;exit&amp;quot; and hit &amp;lt;enter&amp;gt;, or use Control-C or&lt;br /&gt;
Control-D.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ echo &amp;quot;aws_iam_user.notif.arn&amp;quot; | terraform console&lt;br /&gt;
arn:aws:iam::123456789:user/notif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Log user_data to console logs ==&lt;br /&gt;
In Linux add the line below after the shebang&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec &amp;gt; &amp;gt;(tee /var/log/user-data.log|logger -t user-data -s 2&amp;gt;/dev/console)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now you can go and open System Logs in AWS Console to view user-data script logs.&lt;br /&gt;
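A minimal sketch of wiring that script into a Terraform-managed instance (the script file name and AMI id are assumptions):

```terraform
# user-data.sh contains the shebang followed by the exec/tee line above
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI id
  instance_type = "t2.nano"
  user_data     = "${file("user-data.sh")}"
}
```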
&lt;br /&gt;
= terraform graph to visualise configuration =&lt;br /&gt;
== Graph dependencies ==&lt;br /&gt;
Create a visualisation file. You may need to install &amp;lt;code&amp;gt;graphviz&amp;lt;/code&amp;gt; (&amp;lt;code&amp;gt;sudo apt-get install graphviz&amp;lt;/code&amp;gt;) if it is not on your system.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz # installs 'dot'&lt;br /&gt;
terraform graph | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
[[File:Example2.png|none|left|700px|Terraform visual configuration]]&lt;br /&gt;
&lt;br /&gt;
== [https://serverfault.com/questions/1005761/what-does-error-cycle-means-in-terraform Cycle error] ==&lt;br /&gt;
Example cycle error:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Error: Cycle: module.gke.google_container_node_pool.pools[&amp;quot;low-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;medium-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;large-standard-n1&amp;quot;]&lt;br /&gt;
 module.gke.local.cluster_endpoint (expand)&lt;br /&gt;
 module.gke.output.endpoint (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/gavinbunney/kubectl&amp;quot;]&lt;br /&gt;
 kubectl_manifest.sync[&amp;quot;source.toolkit.fluxcd.io/v1beta1/gitrepository/flux-system/flux-system&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;preemptible&amp;quot;] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.additional_components[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_command[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.module_depends_on[0] (destroy)&lt;br /&gt;
 module.gke.module.gcloud_delete_default_kube_dns_configmap.module.gcloud_kubectl.null_resource.run_destroy_command[0] (destroy)&lt;br /&gt;
 module.gke.kubernetes_config_map.kube-dns[0] (destroy)&lt;br /&gt;
 module.gke.google_container_cluster.primary&lt;br /&gt;
 module.gke.local.cluster_output_master_auth (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer1 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_list_layer2 (expand)&lt;br /&gt;
 module.gke.local.cluster_master_auth_map (expand)&lt;br /&gt;
 module.gke.local.cluster_ca_certificate (expand)&lt;br /&gt;
 module.gke.output.ca_certificate (expand)&lt;br /&gt;
 provider[&amp;quot;registry.terraform.io/hashicorp/kubernetes&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-draw-cycles&amp;lt;/code&amp;gt; flag causes Terraform to mark the arrows that are related to the reported cycle in red. If you cannot visually distinguish red from black, you may wish to first edit the generated Graphviz code to replace red with some other colour you can distinguish.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
terraform graph -draw-cycles -type=plan &amp;gt; cycle-plan.graphviz&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpng &amp;gt; cycles.png&lt;br /&gt;
terraform graph -draw-cycles | dot -Tsvg &amp;gt; cycles.svg&lt;br /&gt;
terraform graph -draw-cycles | dot -Tpdf &amp;gt; cycles.pdf&lt;br /&gt;
# | -draw-cycles - highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors.&lt;br /&gt;
# | -type=plan   - type of graph to output. Can be: plan, plan-destroy, apply, validate, input, refresh.&lt;br /&gt;
&lt;br /&gt;
# For large graphs you may want to install inkscape&lt;br /&gt;
sudo apt install inkscape --no-install-suggests --no-install-recommends&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Avoid cycle errors in modules by structuring your config to avoid cross-module references. Instead of directly accessing an output of one module from inside another, set it up as an input parameter and wire everything together at the top level.&lt;br /&gt;
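A minimal sketch of that pattern, with hypothetical module names and paths; the dependency flows one way only, from the root into each module:

```terraform
# Root module wires the two child modules together;
# neither module references the other directly.
module "network" {
  source = "./modules/network" # hypothetical path
}

module "cluster" {
  source = "./modules/cluster" # hypothetical path
  vpc_id = module.network.vpc_id # passed in as an input, not read cross-module
}
```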
&lt;br /&gt;
&lt;br /&gt;
;How to get it solved&lt;br /&gt;
With a cyclic dependency issue, study the graph and then decide on removing from the state a resource that should be generated later. If the graph is not clear or too complex to read, you may need to guess and delete from the state a resource marked for deletion, e.g.:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform state rm kubectl_manifest.install[\&amp;quot;apps/v1/deployment/flux-system/kustomize-controller\&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remote state =&lt;br /&gt;
== Enable ==&lt;br /&gt;
Create s3 bucket with unique name, enable versioning and choose a region.&lt;br /&gt;
&lt;br /&gt;
Then configure terraform:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ terraform remote config \&lt;br /&gt;
     -backend=s3 \&lt;br /&gt;
     -backend-config=&amp;quot;bucket=YOUR_BUCKET_NAME&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;key=terraform.tfstate&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;region=YOUR_BUCKET_REGION&amp;quot; \&lt;br /&gt;
     -backend-config=&amp;quot;encrypt=true&amp;quot;&lt;br /&gt;
 Remote configuration updated&lt;br /&gt;
 Remote state configured and pulled.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After running this command, you should see your Terraform state show up in that S3 bucket.&lt;br /&gt;
&lt;br /&gt;
== Locking ==&lt;br /&gt;
Add &amp;lt;code&amp;gt;dynamodb_table&amp;lt;/code&amp;gt; name to backend configuration. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform {&lt;br /&gt;
  required_version = &amp;quot;= 0.11.11&amp;quot;&lt;br /&gt;
  backend &amp;quot;s3&amp;quot; {&lt;br /&gt;
    dynamodb_table = &amp;quot;tfstate-lock&amp;quot;&lt;br /&gt;
    profile        = &amp;quot;terraform-agent&amp;quot;&lt;br /&gt;
#   assume_role {&lt;br /&gt;
#     role_arn     = &amp;quot;${var.aws_xaccount_role}&amp;quot;&lt;br /&gt;
#     session_name = &amp;quot;${var.aws_xsession_name}&amp;quot;&lt;br /&gt;
#   }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In AWS create a DynamoDB table named &amp;lt;tt&amp;gt;tfstate-lock&amp;lt;/tt&amp;gt; with index &amp;lt;tt&amp;gt;LockID&amp;lt;/tt&amp;gt;, as in the picture below. In the event of taking a lock, an entry similar to the one below gets created.&lt;br /&gt;
[[File:Terraform-dynamo-db-state-locking.png|none|left|Terraform-dynamo-db-state-locking]]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&amp;quot;ID&amp;quot;:&amp;quot;62a453e8-7fbc-cfa2-e07f-be1381b82af3&amp;quot;,&amp;quot;Operation&amp;quot;:&amp;quot;OperationTypePlan&amp;quot;,&amp;quot;Info&amp;quot;:&amp;quot;&amp;quot;,&amp;quot;Who&amp;quot;:&amp;quot;piotr@laptop1&amp;quot;,&amp;quot;Version&amp;quot;:&amp;quot;0.11.11&amp;quot;,&amp;quot;Created&amp;quot;:&amp;quot;2019-03-07T08:49:33.3078722Z&amp;quot;,&amp;quot;Path&amp;quot;:&amp;quot;tfstate-acmedev01-acmedev-111111111111/aws/acmedev01/state&amp;quot;}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Workspaces =&lt;br /&gt;
== [https://discuss.hashicorp.com/t/how-to-change-the-name-of-a-workspace/24010 Rename a workspace / move the state file] ==&lt;br /&gt;
{{Note|The state manipulation commands run through Terraform’s automatic state upgrading process, so it is best to do this with the same Terraform CLI version that you have most recently been using against this workspace, so that the state is not implicitly upgraded as part of the operation.}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform workspace select old-name&lt;br /&gt;
terraform state pull &amp;gt;old-name.tfstate&lt;br /&gt;
terraform workspace new new-name&lt;br /&gt;
terraform state push old-name.tfstate&lt;br /&gt;
terraform show # confirm that the newly-imported state looks 'right', before deleting the old workspace&lt;br /&gt;
terraform workspace delete -force old-name&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Variables =&lt;br /&gt;
Variables can be provided via the CLI&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
terraform apply -var=&amp;quot;image_id=ami-abc123&amp;quot;&lt;br /&gt;
terraform apply -var='image_id_list=[&amp;quot;ami-abc123&amp;quot;,&amp;quot;ami-def456&amp;quot;]'&lt;br /&gt;
terraform apply -var='image_id_map={&amp;quot;us-east-1&amp;quot;:&amp;quot;ami-abc123&amp;quot;,&amp;quot;us-east-2&amp;quot;:&amp;quot;ami-def456&amp;quot;}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Terraform also automatically loads a number of variable definition files if they are present:&lt;br /&gt;
* Files named exactly &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;terraform.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Any files with names ending in &amp;lt;code&amp;gt;.auto.tfvars&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;.auto.tfvars.json&amp;lt;/code&amp;gt;.&lt;br /&gt;
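&lt;br /&gt;
For example, a &amp;lt;code&amp;gt;terraform.tfvars&amp;lt;/code&amp;gt; file providing the CLI variables shown above could look like this (the variable names mirror the examples; your own names will differ):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# terraform.tfvars - loaded automatically by plan/apply&lt;br /&gt;
image_id      = &amp;quot;ami-abc123&amp;quot;&lt;br /&gt;
image_id_list = [&amp;quot;ami-abc123&amp;quot;, &amp;quot;ami-def456&amp;quot;]&lt;br /&gt;
image_id_map  = { &amp;quot;us-east-1&amp;quot; = &amp;quot;ami-abc123&amp;quot;, &amp;quot;us-east-2&amp;quot; = &amp;quot;ami-def456&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;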
&lt;br /&gt;
=Syntax Terraform 0.12.6+=&lt;br /&gt;
{{Note|This [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html#for-expressions for-expressions] link is a little diamond for this subject}}&lt;br /&gt;
&lt;br /&gt;
== Map and nested block ==&lt;br /&gt;
Terraform 0.12 introduces stricter validation for the following, but allows map keys to be set dynamically from expressions. Note the &amp;quot;=&amp;quot; sign.&lt;br /&gt;
* a map attribute - usually has user-defined keys, as in the tags example, and requires &amp;quot;=&amp;quot;&lt;br /&gt;
* a nested block - always has a fixed set of supported arguments defined by the resource type schema, which Terraform will validate&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {             # &amp;lt;- a map attribute, requires '='&lt;br /&gt;
    Name = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  ebs_block_device {    # &amp;lt;- a nested block, no '='&lt;br /&gt;
    device_name = &amp;quot;sda2&amp;quot;&lt;br /&gt;
    volume_type = &amp;quot;gp2&amp;quot;&lt;br /&gt;
    volume_size = 24&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
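&lt;br /&gt;
Because map keys can now be set from expressions, a key itself may be computed by wrapping it in parentheses; a minimal sketch (the &amp;lt;code&amp;gt;local.env&amp;lt;/code&amp;gt; name is illustrative):&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  env = &amp;quot;dev&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;example2&amp;quot; {&lt;br /&gt;
  instance_type = &amp;quot;t2.micro&amp;quot;&lt;br /&gt;
  ami           = &amp;quot;ami-abcd1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  tags = {&lt;br /&gt;
    (local.env) = &amp;quot;true&amp;quot;   # parentheses make the key an expression&lt;br /&gt;
    Name        = &amp;quot;example instance&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;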
&lt;br /&gt;
== [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html For_each] ==&lt;br /&gt;
* [https://alexharv074.github.io/2019/06/02/adventures-in-the-terraform-dsl-part-iii-iteration-enhancements-in-terraform-0.12.html terraform iterations]&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ for_each, and the new formatting that no longer needs the &amp;quot;${var.vpc_cidr}&amp;quot; interpolation syntax; plain var.vpc_cidr is allowed&lt;br /&gt;
|- &lt;br /&gt;
! main.tf&lt;br /&gt;
! variables.tf and outputs.tf&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source&amp;gt;# vi main.tf&lt;br /&gt;
resource &amp;quot;aws_vpc&amp;quot; &amp;quot;tf_vpc&amp;quot; {&lt;br /&gt;
  cidr_block           = &amp;quot;${var.vpc_cidr}&amp;quot;&lt;br /&gt;
  enable_dns_hostnames = true&lt;br /&gt;
  enable_dns_support   = true&lt;br /&gt;
  tags =  {           #&amp;lt;-note of '=' as this is an argument&lt;br /&gt;
    Name = &amp;quot;tf_vpc&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_security_group&amp;quot; &amp;quot;tf_public_sg&amp;quot; {&lt;br /&gt;
  name        = &amp;quot;tf_public_sg&amp;quot;&lt;br /&gt;
  description = &amp;quot;Used for access to the public instances&amp;quot;&lt;br /&gt;
  vpc_id      = &amp;quot;${aws_vpc.tf_vpc.id}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  dynamic &amp;quot;ingress&amp;quot; {&lt;br /&gt;
    for_each = [ for s in var.service_ports: {&lt;br /&gt;
       from_port = s.from_port&lt;br /&gt;
       to_port   = s.to_port   }]&lt;br /&gt;
    content {&lt;br /&gt;
      from_port   = ingress.value.from_port&lt;br /&gt;
      to_port     = ingress.value.to_port&lt;br /&gt;
      protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
      cidr_blocks = [ var.accessip ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
# Commented block has been replaced by 'dynamic &amp;quot;ingress&amp;quot;'&lt;br /&gt;
# ingress {  #SSH&lt;br /&gt;
#   from_port   = 22&lt;br /&gt;
#   to_port     = 22&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
# ingress {  #HTTP&lt;br /&gt;
#   from_port   = 80&lt;br /&gt;
#   to_port     = 80&lt;br /&gt;
#   protocol    = &amp;quot;tcp&amp;quot;&lt;br /&gt;
#   cidr_blocks = [&amp;quot;${var.accessip}&amp;quot;]&lt;br /&gt;
# }&lt;br /&gt;
  egress { &lt;br /&gt;
    from_port   = 0&lt;br /&gt;
    to_port     = 0&lt;br /&gt;
    protocol    = &amp;quot;-1&amp;quot;&lt;br /&gt;
    cidr_blocks = [&amp;quot;0.0.0.0/0&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/source&amp;gt; &lt;br /&gt;
| &amp;lt;source&amp;gt;# vi variables.tf&lt;br /&gt;
variable &amp;quot;vpc_cidr&amp;quot; { default = &amp;quot;10.123.0.0/16&amp;quot; }&lt;br /&gt;
variable &amp;quot;accessip&amp;quot; { default = &amp;quot;0.0.0.0/0&amp;quot;     }&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;service_ports&amp;quot; {&lt;br /&gt;
  type = list(object({ from_port = number, to_port = number }))&lt;br /&gt;
  default = [&lt;br /&gt;
    { from_port = 22, to_port = 22 },&lt;br /&gt;
    { from_port = 80, to_port = 80 }&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# vi outputs.tf&lt;br /&gt;
output &amp;quot;public_sg&amp;quot; { &lt;br /&gt;
  value = aws_security_group.tf_public_sg.id&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;ingress_port_mapping&amp;quot; {&lt;br /&gt;
  value = {&lt;br /&gt;
    for ingress in aws_security_group.tf_public_sg.ingress:&lt;br /&gt;
    format(&amp;quot;From %d&amp;quot;, ingress.from_port) =&amp;gt; format(&amp;quot;To %d&amp;quot;, ingress.to_port)&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Computed 'Outputs:'&lt;br /&gt;
ingress_port_mapping = {&lt;br /&gt;
  &amp;quot;From 22&amp;quot; = &amp;quot;To 22&amp;quot;&lt;br /&gt;
  &amp;quot;From 80&amp;quot; = &amp;quot;To 80&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
public_sg = sg-04d51b5ae10e6f0b0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://www.sheldonhull.com/blog/how-to-iterate-through-a-list-of-objects-with-terraforms-for-each-function/ Iterate over list of objects] ===&lt;br /&gt;
[https://stackoverflow.com/questions/58594506/how-to-for-each-through-a-listobjects-in-terraform-0-12 how-to-for-each-through-a-listobjects]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# debug.tf&lt;br /&gt;
locals {&lt;br /&gt;
  users = [&lt;br /&gt;
    # list of objects&lt;br /&gt;
    { name = &amp;quot;foo&amp;quot;, is_enabled = true  },&lt;br /&gt;
    { name = &amp;quot;bar&amp;quot;, is_enabled = false },&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;this&amp;quot; {&lt;br /&gt;
    for_each = { for user in local.users: user.name =&amp;gt; user.is_enabled }&lt;br /&gt;
    # 'connection' blocks only apply to provisioners; 'triggers' records per-instance values&lt;br /&gt;
    triggers = {&lt;br /&gt;
      name       = each.key&lt;br /&gt;
      is_enabled = tostring(each.value)&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
output &amp;quot;users_map&amp;quot; {&lt;br /&gt;
  value = { for name in local.users: name.name =&amp;gt; name.is_enabled }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# terraform init&lt;br /&gt;
# terraform apply&lt;br /&gt;
&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creating...&lt;br /&gt;
null_resource.this[&amp;quot;bar&amp;quot;]: Creation complete after 0s [id=7228791922218879597]&lt;br /&gt;
null_resource.this[&amp;quot;foo&amp;quot;]: Creation complete after 0s [id=7997705376010456213]&lt;br /&gt;
&lt;br /&gt;
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br /&gt;
&lt;br /&gt;
Outputs:&lt;br /&gt;
&lt;br /&gt;
users_map = {&lt;br /&gt;
  &amp;quot;bar&amp;quot; = false&lt;br /&gt;
  &amp;quot;foo&amp;quot; = true&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Plan is more readable and explicit ==&lt;br /&gt;
[[Terraform/plan_tf_11_vs_12|See comparison]]&lt;br /&gt;
&lt;br /&gt;
== [https://www.hashicorp.com/blog/terraform-0-12-rich-value-types/ Rich Value Types] - for previewing whole resource object ==&lt;br /&gt;
'''Resources and Modules as Values''' Terraform 0.12 now permits using entire resources as object values within configuration, including returning them as outputs and passing them as input variables:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  value = aws_vpc.example&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The type of this output value is an object type derived from the schema of the &amp;lt;code&amp;gt;aws_vpc&amp;lt;/code&amp;gt; resource type. The calling module can then access attributes of this result in the same way as the returning module would use &amp;lt;code&amp;gt;aws_vpc.example&amp;lt;/code&amp;gt;, such as &amp;lt;code&amp;gt;module.example.vpc.cidr_block&amp;lt;/code&amp;gt;. This also works for modules: an expression like &amp;lt;code&amp;gt;module.vpc&amp;lt;/code&amp;gt; evaluates to an object value with attributes corresponding to the module's named outputs.&lt;br /&gt;
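&lt;br /&gt;
A sketch of passing a whole resource into a module as an input variable; the module layout and names are illustrative, and &amp;lt;code&amp;gt;type = any&amp;lt;/code&amp;gt; lets the variable accept the full object:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# modules/subnets/variables.tf&lt;br /&gt;
variable &amp;quot;vpc&amp;quot; {&lt;br /&gt;
  type = any   # receives the whole aws_vpc object&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# modules/subnets/main.tf&lt;br /&gt;
resource &amp;quot;aws_subnet&amp;quot; &amp;quot;a&amp;quot; {&lt;br /&gt;
  vpc_id     = var.vpc.id&lt;br /&gt;
  cidr_block = cidrsubnet(var.vpc.cidr_block, 4, 1)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# root module&lt;br /&gt;
module &amp;quot;subnets&amp;quot; {&lt;br /&gt;
  source = &amp;quot;./modules/subnets&amp;quot;&lt;br /&gt;
  vpc    = aws_vpc.example   # pass the entire resource as a value&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;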
&lt;br /&gt;
== &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; ==&lt;br /&gt;
* [https://discuss.hashicorp.com/t/produce-maps-from-list-of-strings-of-a-map/2197 Produce maps from list of strings of a map]&lt;br /&gt;
This is mostly used for parsing pre-existing lists and maps rather than generating new ones. For example, we can convert all elements in a list of strings to upper case using this expression.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_list = [for i in var.list : upper(i)] # creates a new list &lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;for&amp;lt;/code&amp;gt; expression iterates over each element of the list and returns the value of &amp;lt;code&amp;gt;upper(i)&amp;lt;/code&amp;gt; for each element, in the form of a list. We can also use this expression to generate maps.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  upper_map = {for i in var.list : i =&amp;gt; upper(i)} # creates a map with key = value&lt;br /&gt;
                                                  #                 { i[0] = upper(i[0])&lt;br /&gt;
                                                  #                   i[1] = upper(i[1]) }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, we can include an ''if'' statement as a filter in ''for'' expressions. Unfortunately, we are not able to use ''if'' in logical operations like the ternary operators used before. The expression below returns a list of all non-empty elements in their uppercase form.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[for i in var.list : upper(i) if i != &amp;quot;&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In this case, each original element of the list corresponds to its uppercase version.&lt;br /&gt;
&lt;br /&gt;
== Manipulate list and complex object ==&lt;br /&gt;
Build a new list by removing items whose string value does not match a regex expression.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Resource that generates an object&lt;br /&gt;
resource &amp;quot;aws_acm_certificate&amp;quot; &amp;quot;main&amp;quot; {...}&lt;br /&gt;
&lt;br /&gt;
# Preview of input object 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
output &amp;quot;domain_validation_options&amp;quot; {&lt;br /&gt;
  value       = aws_acm_certificate.main.domain_validation_options&lt;br /&gt;
  description = &amp;quot;array/list of maps taken from resource object(aws_acm_certificate.issued) describing all validation domain records&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$ terraform output domain_validation_options&lt;br /&gt;
[ # &amp;lt;- array starts here&lt;br /&gt;
  { # &amp;lt;- an item of array the map object&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;*.dev.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_11111111111111111111111111111111.dev.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_22222222222222222222222222222222.mzlfeqexyx.acm-validations.aws.&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;domain_name&amp;quot; = &amp;quot;api.example.com&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_name&amp;quot; = &amp;quot;_31111111111111111111111111111111.api.example.com.&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_type&amp;quot; = &amp;quot;CNAME&amp;quot;&lt;br /&gt;
    &amp;quot;resource_record_value&amp;quot; = &amp;quot;_42222222222222222222222222222222.vhzmpjdqfx.acm-validations.aws.&amp;quot;&lt;br /&gt;
                                 &lt;br /&gt;
  },&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for k, v' syntax builds a new object 'validation_domains' by iterating over the array of maps&lt;br /&gt;
# 'aws_acm_certificate.main.domain_validation_options' and conditionally keeps 'v' only if it&lt;br /&gt;
# contains the string &amp;quot;*.dev.example.com&amp;quot;. tomap(v) is required to persist the type across the for expression.&lt;br /&gt;
locals {&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k, v in aws_acm_certificate.main.domain_validation_options : tomap(v) if contains(&lt;br /&gt;
      &amp;quot;*.dev.example.com&amp;quot;, replace(v.domain_name, &amp;quot;*.&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
$ terraform output local_distinct_domains&lt;br /&gt;
local_distinct_domains = [&lt;br /&gt;
  &amp;quot;api.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat1.dev.example.com&amp;quot;,&lt;br /&gt;
  &amp;quot;api-aat2.dev.example.com&amp;quot;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
# The 'for domain' expression builds a new list only when a domain matches the regexall string.&lt;br /&gt;
# It checks that the regexall length of matched capture groups is &amp;gt; 0, so true or false is returned,&lt;br /&gt;
# and the 'for domain : if' statement conditionally adds the item to the new list&lt;br /&gt;
locals {&lt;br /&gt;
  distinct_domains_excluded = [ &lt;br /&gt;
    for domain in local.distinct_domains : domain if length(regexall(&amp;quot;dev.example.com&amp;quot;, domain)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
# Similar to the above but iterating over array of maps (k,v - key, value pairs)&lt;br /&gt;
  validation_domains = [&lt;br /&gt;
    for k,v in local.validation_domains : tomap(v) if length(regexall(&amp;quot;dev.example.com&amp;quot;, v.domain_name)) &amp;gt; 0&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Example of iterating over the array of maps 'aws_acm_certificate.main.domain_validation_options'&lt;br /&gt;
# to build a list of fqdns stored under the '.resource_record_name' key.&lt;br /&gt;
# 'for fqdn' syntax on each iteration 'fqdn=aws_acm_certificate.main.domain_validation_options[index]', then&lt;br /&gt;
# anything after ':' means 'set to value equals' fqdn.resource_record_name&lt;br /&gt;
resource &amp;quot;aws_acm_certificate_validation&amp;quot; &amp;quot;main&amp;quot; {&lt;br /&gt;
  certificate_arn         = aws_acm_certificate.main.arn&lt;br /&gt;
  validation_record_fqdns = [ &lt;br /&gt;
    for fqdn in aws_acm_certificate.main.domain_validation_options : fqdn.resource_record_name&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
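&lt;br /&gt;
The same iteration is commonly written with &amp;lt;code&amp;gt;for_each&amp;lt;/code&amp;gt; to create the DNS validation records themselves; a sketch assuming a &amp;lt;code&amp;gt;var.zone_id&amp;lt;/code&amp;gt; holding the Route53 hosted zone id:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_route53_record&amp;quot; &amp;quot;validation&amp;quot; {&lt;br /&gt;
  for_each = {&lt;br /&gt;
    for dvo in aws_acm_certificate.main.domain_validation_options :&lt;br /&gt;
    dvo.domain_name =&amp;gt; dvo&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  zone_id = var.zone_id   # assumed hosted zone id&lt;br /&gt;
  name    = each.value.resource_record_name&lt;br /&gt;
  type    = each.value.resource_record_type&lt;br /&gt;
  records = [each.value.resource_record_value]&lt;br /&gt;
  ttl     = 60&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;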
== Terraform Merge on Wildcard Tuple ==&lt;br /&gt;
Ideally the solution should be as simple as:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
merge(local.policy_definitions.*.parameters...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* [https://github.com/hashicorp/terraform/issues/24645 Terraform Merge on Wildcard Tuple] TF, GitHub issue&lt;br /&gt;
* [https://stackoverflow.com/questions/62683298/merge-list-of-objects-in-terraform merge-list-of-objects-in-terraform] Stackoverflow&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Workaround&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
policy_parameters = [&lt;br /&gt;
    for key,value in data.azurerm_policy_definition.d_policy_definitions:&lt;br /&gt;
      {&lt;br /&gt;
        parameters = jsondecode(value.parameters)&lt;br /&gt;
      }&lt;br /&gt;
  ]&lt;br /&gt;
  ph_parameters = local.policy_parameters[*].parameters&lt;br /&gt;
  input_parameter = [for item in local.ph_parameters: merge(item,local.ph_parameters...)][0]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Break down&lt;br /&gt;
Extracts the parameter values into a list of JSON values&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
policy_parameters = [&lt;br /&gt;
    for key,value in data.azurerm_policy_definition.d_policy_definitions:&lt;br /&gt;
      {&lt;br /&gt;
        parameters = jsondecode(value.parameters)&lt;br /&gt;
      }&lt;br /&gt;
  ]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reference the parameters as a variable&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
ph_parameters = local.policy_parameters[*].parameters&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Merge all item content into each item.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
input_parameter = [for item in local.ph_parameters: merge(item,local.ph_parameters...)]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The 3rd step gives all items in the list the same value, so we can use any index.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
parameters = &amp;quot;${jsonencode(local.input_parameter[n])}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
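&lt;br /&gt;
When the items are already plain maps (no &amp;lt;code&amp;gt;jsondecode&amp;lt;/code&amp;gt; step needed), expanding the list into &amp;lt;code&amp;gt;merge&amp;lt;/code&amp;gt; arguments with &amp;lt;code&amp;gt;...&amp;lt;/code&amp;gt; is enough on its own; a minimal sketch:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  maps   = [ { a = 1 }, { b = 2 }, { c = 3 } ]&lt;br /&gt;
  merged = merge(local.maps...)   # { a = 1, b = 2, c = 3 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;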
&lt;br /&gt;
== function: replace, regex ==&lt;br /&gt;
The snippet below removes comments and any empty lines from a &amp;lt;code&amp;gt;values.yaml.tpl&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
locals {&lt;br /&gt;
  match_comment = &amp;quot;/(?U)(?m)(?s)^[[:space:]]*#.*$/&amp;quot; # match anyline that starts with '#' or any 'whitespace(s) + #'&lt;br /&gt;
  match_empty_line = &amp;quot;/(?m)(?s)(^[\r\n])/&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;helm_release&amp;quot; &amp;quot;myapp&amp;quot; {&lt;br /&gt;
  name             = &amp;quot;myapp&amp;quot;&lt;br /&gt;
  chart            = &amp;quot;${path.module}/charts/myapp&amp;quot;&lt;br /&gt;
  values = [&lt;br /&gt;
    replace(&lt;br /&gt;
        replace(&lt;br /&gt;
          templatefile(&amp;quot;${path.module}/templates/values.yaml.tpl&amp;quot;, {&lt;br /&gt;
            }), local.match_comment, &amp;quot;&amp;quot;), local.match_empty_line, &amp;quot;&amp;quot;)&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation:&lt;br /&gt;
* Terraform regex is using [https://github.com/google/re2/wiki/Syntax re2 library]&lt;br /&gt;
* Regex flags are enabled by prefixing the search:&lt;br /&gt;
** &amp;lt;code&amp;gt;(?m)&amp;lt;/code&amp;gt; - multi-line mode (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?s)&amp;lt;/code&amp;gt; - let . match \n (default false)&lt;br /&gt;
** &amp;lt;code&amp;gt;(?U)&amp;lt;/code&amp;gt; - ungreedy (default false), so stop matching comments at EOL&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each HashiCorp Terraform 0.12 Preview: For and For-Each]&lt;br /&gt;
&lt;br /&gt;
= Syntax Terraform ~0.11 =&lt;br /&gt;
== &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt; statements ==&lt;br /&gt;
;Terraform ~&amp;lt; 0.9&lt;br /&gt;
Old versions of Terraform don't support if or if-else statements, but we can take advantage of the boolean ''count'' attribute that most resources have.&lt;br /&gt;
 boolean true  = 1&lt;br /&gt;
 boolean false = 0&lt;br /&gt;
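&lt;br /&gt;
A sketch of the boolean &amp;lt;code&amp;gt;count&amp;lt;/code&amp;gt; trick (the variable name is illustrative): interpolate a boolean into &amp;lt;code&amp;gt;count&amp;lt;/code&amp;gt; so the resource is created only when it is true.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
variable &amp;quot;create_eip&amp;quot; { default = true }   # true -&amp;gt; 1, false -&amp;gt; 0&lt;br /&gt;
&lt;br /&gt;
resource &amp;quot;aws_eip&amp;quot; &amp;quot;example&amp;quot; {&lt;br /&gt;
  count = &amp;quot;${var.create_eip}&amp;quot;   # resource is skipped entirely when count = 0&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;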
&lt;br /&gt;
;Terraform ~0.11+&lt;br /&gt;
Newer versions support if statements; the conditional syntax is the well-known ternary operation:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 CONDITION ? TRUEVAL  : FALSEVAL&lt;br /&gt;
 CONDITION ? caseTrue : caseFalse&lt;br /&gt;
 domain = &amp;quot;${var.frontend_domain != &amp;quot;&amp;quot; ? var.frontend_domain : var.domain}&amp;quot; # tf &amp;lt;0.12 syntax&lt;br /&gt;
 count = var.image_publisher == &amp;quot;MicrosoftWindowsServer&amp;quot; ? 0 : 3            # tf 0.12+ syntax&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The supported operators are:&lt;br /&gt;
*Equality: == and !=&lt;br /&gt;
*Numerical comparison: &amp;gt;, &amp;lt;, &amp;gt;=, &amp;lt;=&lt;br /&gt;
*Boolean logic: &amp;amp;&amp;amp;, ||, unary !  (|| is  logical OR; “short-circuit” OR)&lt;br /&gt;
&lt;br /&gt;
= Modules =&lt;br /&gt;
Modules are used in Terraform to modularize and encapsulate groups of resources in your infrastructure.&lt;br /&gt;
&lt;br /&gt;
When calling a module from a .tf file you pass values for the variables that are defined in the module, to create resources to your specification. Before you can use any module it needs to be downloaded. Use&lt;br /&gt;
 $ terraform get&lt;br /&gt;
to download modules. You will notice that a &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory is created, containing symlinks to the module.&lt;br /&gt;
&lt;br /&gt;
;TF file &amp;lt;tt&amp;gt;~/git/dev101/vpc.tf&amp;lt;/tt&amp;gt; calling 'vpc' module&lt;br /&gt;
&lt;br /&gt;
 variable &amp;quot;vpc_name&amp;quot;       { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_base&amp;quot;  { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 variable &amp;quot;vpc_cidr_range&amp;quot; { description = &amp;quot;value comes from terraform.tfvars&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 module &amp;quot;vpc-dev&amp;quot; {&lt;br /&gt;
   source     = &amp;quot;../modules/vpc&amp;quot;&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_name}&amp;quot;  #here we assign a value to 'name' variable&lt;br /&gt;
   &amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;       = &amp;quot;${var.vpc_cidr_base}.${var.vpc_cidr_range}&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 output &amp;quot;vpc-name&amp;quot;         { value = &amp;quot;${var.vpc_name                  }&amp;quot;}&lt;br /&gt;
 output &amp;quot;vpc_id&amp;quot;           { value = &amp;quot;${module.vpc-dev.&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt; }&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
;Module in &amp;lt;tt&amp;gt;~/git/modules/vpc/main.tf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 variable &amp;quot;name&amp;quot; { description = &amp;quot;variable local to the module, value comes when calling the module&amp;quot; }&lt;br /&gt;
 variable &amp;quot;cidr&amp;quot; { description = &amp;quot;local to the module, value passed on when calling the module&amp;quot; }&lt;br /&gt;
 &lt;br /&gt;
 resource &amp;quot;aws_vpc&amp;quot; &amp;quot;scope&amp;quot; {&lt;br /&gt;
    cidr_block  = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;cidr&amp;lt;/span&amp;gt;}&amp;quot;&lt;br /&gt;
    tags { Name = &amp;quot;${var.&amp;lt;span style=&amp;quot;color: blue&amp;quot;&amp;gt;name&amp;lt;/span&amp;gt;}&amp;quot; }}&lt;br /&gt;
 &lt;br /&gt;
  output &amp;quot;&amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;id-from_module&amp;lt;/span&amp;gt;&amp;quot;    { value = &amp;quot;${aws_vpc.scope.id}&amp;quot; }&lt;br /&gt;
&lt;br /&gt;
Output variables is a way to output important data back when running &amp;lt;code&amp;gt;terraform apply&amp;lt;/code&amp;gt;. These variables also can be recalled when .tfstate file has been populated using &amp;lt;code&amp;gt;terraform output VARIABLE-NAME&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 $ terraform apply     #this will use 'vpc' module&lt;br /&gt;
&lt;br /&gt;
[[File:Terraform-module-apply.png|400px|none|left|Terraform-module-apply]]&lt;br /&gt;
&lt;br /&gt;
Notice &amp;lt;span style=&amp;quot;color: green&amp;quot;&amp;gt;Outputs&amp;lt;/span&amp;gt;. These outputs can be recalled also by:&lt;br /&gt;
 $ terraform output vpc-name      $ terraform output vpc_id&lt;br /&gt;
 dev101                           vpc-00e00c67&lt;br /&gt;
&lt;br /&gt;
= Templates =&lt;br /&gt;
{{ Note | [https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/new-template-syntax Terraform 0.12+ New Template Syntax Example] }}&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# Terraform version 0.12+ template syntax&lt;br /&gt;
%{ for name in var.names ~}&lt;br /&gt;
%{ if name == &amp;quot;Mary&amp;quot; }${name}%{ endif ~}&lt;br /&gt;
%{ endfor ~}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
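&lt;br /&gt;
The template above can be rendered with the 0.12+ &amp;lt;code&amp;gt;templatefile&amp;lt;/code&amp;gt; function; a sketch assuming it is saved as &amp;lt;code&amp;gt;names.tpl&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
output &amp;quot;rendered&amp;quot; {&lt;br /&gt;
  value = templatefile(&amp;quot;${path.module}/names.tpl&amp;quot;, {&lt;br /&gt;
    names = [&amp;quot;Adam&amp;quot;, &amp;quot;Mary&amp;quot;, &amp;quot;John&amp;quot;]&lt;br /&gt;
  })&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;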
&lt;br /&gt;
&lt;br /&gt;
Dump a rendered &amp;lt;code&amp;gt;data.template_file&amp;lt;/code&amp;gt; into a file to preview correctness of interpolations&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
#Dumps rendered template&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;export_rendered_template&amp;quot; {&lt;br /&gt;
  triggers = {&lt;br /&gt;
   uid = &amp;quot;${uuid()}&amp;quot;  #this causes to always run this resource&lt;br /&gt;
  }&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    command = &amp;quot;cat &amp;gt; waf-policy.output.txt &amp;lt;&amp;lt;EOL\n${data.template_file.waf-whitelist-policy.rendered}\nEOL&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
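&lt;br /&gt;
Alternatively, the &amp;lt;code&amp;gt;local_file&amp;lt;/code&amp;gt; resource from the hashicorp/local provider writes the rendered template to disk without shelling out:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;local_file&amp;quot; &amp;quot;rendered&amp;quot; {&lt;br /&gt;
  content  = data.template_file.waf-whitelist-policy.rendered&lt;br /&gt;
  filename = &amp;quot;${path.module}/waf-policy.output.txt&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;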
&lt;br /&gt;
&lt;br /&gt;
Example of creating multiple instances, each rendered with its own user-data template&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_instance&amp;quot; &amp;quot;microservices&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  subnet_id  = &amp;quot;${element(&amp;quot;${data.aws_subnet.private.*.id          }&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  user_data  = &amp;quot;${element(&amp;quot;${data.template_file.userdata.*.rendered}&amp;quot;, count.index)}&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
data &amp;quot;template_file&amp;quot; &amp;quot;userdata&amp;quot; {&lt;br /&gt;
  count      = &amp;quot;${var.instance_count}&amp;quot;&lt;br /&gt;
  template   = &amp;quot;${file(&amp;quot;${path.root}/templates/user-data.tpl&amp;quot;)}&amp;quot;&lt;br /&gt;
  vars = {&lt;br /&gt;
    vmname   = &amp;quot;ms-${count.index + 1}-${var.vpc_name}&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
#For debugging you can display an array of rendered templates with the output below:&lt;br /&gt;
output &amp;quot;userdata&amp;quot; { value = &amp;quot;${data.template_file.userdata.*.rendered}&amp;quot; }&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
{{ Note |&lt;br /&gt;
* the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; resource is deprecated in favour of the &amp;lt;code&amp;gt;template_file&amp;lt;/code&amp;gt; data source&lt;br /&gt;
* Terraform 0.12+ offers new &amp;lt;code&amp;gt;template&amp;lt;/code&amp;gt; function without a need of using a &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; object }}&lt;br /&gt;
== template json files ==&lt;br /&gt;
For working with JSON structures it's [https://www.terraform.io/docs/configuration/functions/templatefile.html#generating-json-or-yaml-from-a-template recommended] to use &amp;lt;code&amp;gt;jsonencode&amp;lt;/code&amp;gt; function to simplify escaping, delimiters and get validated json in return.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;aws_iam_policy&amp;quot; &amp;quot;s3Bucket&amp;quot; {&lt;br /&gt;
   name   = &amp;quot;s3Bucket&amp;quot;&lt;br /&gt;
   policy = templatefile(&amp;quot;${path.module}/templates/s3Bucket.json.tpl&amp;quot;, {&lt;br /&gt;
     S3BUCKETS = var.s3_buckets&lt;br /&gt;
   })&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
variable &amp;quot;s3_buckets&amp;quot; {&lt;br /&gt;
  type        = list(string)&lt;br /&gt;
  default     = [ &amp;quot;aaa-bucket-111&amp;quot;, &amp;quot;bbb-bucket-222&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Template file&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;Version&amp;quot;: &amp;quot;2012-10-17&amp;quot;,&lt;br /&gt;
    &amp;quot;Statement&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: &amp;quot;s3:ListAllMyBuckets&amp;quot;,&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: &amp;quot;*&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        {&lt;br /&gt;
            &amp;quot;Effect&amp;quot;: &amp;quot;Allow&amp;quot;,&lt;br /&gt;
            &amp;quot;Action&amp;quot;: [&lt;br /&gt;
                &amp;quot;s3:ListBucket&amp;quot;,&lt;br /&gt;
                &amp;quot;s3:GetBucketLocation&amp;quot;&lt;br /&gt;
            ],&lt;br /&gt;
            &amp;quot;Resource&amp;quot;: ${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
# renders json array -&amp;gt; [ &amp;quot;arn:aws:s3:::aaa-bucket-111&amp;quot;, &amp;quot;arn:aws:s3:::bbb-bucket-222&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
    ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Explanation of the template expression&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
substitution syntax ${}    local loop variable&lt;br /&gt;
|  function jsonencode   /      templatefile function input variable, it's not ${} syntax&lt;br /&gt;
|  |                   /       /                                  &lt;br /&gt;
${jsonencode([for BUCKET in S3BUCKETS : &amp;quot;arn:aws:s3:::${BUCKET}&amp;quot;])}&lt;br /&gt;
             / |                                        /       |\&lt;br /&gt;
           /   for loop                     template variable   | function closing bracket&lt;br /&gt;
    indicates that the result to be an array[]               closing bracket of the json array&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
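The rendered result can be reproduced outside Terraform with a plain shell loop, which makes the jsonencode output above easy to sanity-check (the bucket names are the sample values from the variable default):&lt;br /&gt;

```shell
# Build the same JSON array of ARNs that jsonencode([for ...]) renders
ARNS=""
for BUCKET in aaa-bucket-111 bbb-bucket-222; do
  ARNS="${ARNS}\"arn:aws:s3:::${BUCKET}\","   # quote each ARN, comma-separate
done
echo "[${ARNS%,}]"   # strip the trailing comma and wrap in brackets
# → ["arn:aws:s3:::aaa-bucket-111","arn:aws:s3:::bbb-bucket-222"]
```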
&lt;br /&gt;
== Resource ==&lt;br /&gt;
*[https://github.com/hashicorp/terraform/issues/1893 example of unique templates per instance]&lt;br /&gt;
*[https://github.com/hashicorp/terraform/pull/2140 recommendation of how to create unique templates per instance]&lt;br /&gt;
&lt;br /&gt;
= Execute arbitrary code using null_resource and local-exec =&lt;br /&gt;
The null_resource allows you to create a Terraform-managed resource, saved in the state file like any other, but its work is done by provisioners such as local-exec and remote-exec, allowing arbitrary code execution. Use it only when Terraform core does not provide a solution for your use case.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
resource &amp;quot;null_resource&amp;quot; &amp;quot;attach_alb_am_wkr_ext&amp;quot; {&lt;br /&gt;
&lt;br /&gt;
  #depends_on sets up a dependency. So it depends on completion of another resource &lt;br /&gt;
  #and it won't run if the resource does not change&lt;br /&gt;
  #depends_on = [ &amp;quot;aws_cloudformation_stack.waf-alb&amp;quot; ]  &lt;br /&gt;
&lt;br /&gt;
  #triggers saves computed strings in the tfstate file; if a value changes on the next run, the resource is re-created&lt;br /&gt;
  triggers = {   &lt;br /&gt;
    waf_id = &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot;   #produces WAF_id&lt;br /&gt;
    alb_id = &amp;quot;${module.balancer_external_alb_instance.arn         }&amp;quot;   #produces full ALB_arn name&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;create&amp;quot;     #runs on: terraform apply&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional associate-web-acl --web-acl-id &amp;quot;${aws_cloudformation_stack.waf-alb.outputs.wafWebACL}&amp;quot; \&lt;br /&gt;
                                   --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  provisioner &amp;quot;local-exec&amp;quot; {&lt;br /&gt;
    when    = &amp;quot;destroy&amp;quot;  #runs only on: terraform destroy&lt;br /&gt;
    command = &amp;lt;&amp;lt;EOF&lt;br /&gt;
ALBARN=$(aws elbv2 describe-load-balancers --region ${var.region} \&lt;br /&gt;
      --name ${var.vpc}-${var.alb_class} \&lt;br /&gt;
      --output text --query 'LoadBalancers[0].LoadBalancerArn') &amp;amp;&amp;amp;&lt;br /&gt;
aws waf-regional disassociate-web-acl --resource-arn $ALBARN --region ${var.region}&lt;br /&gt;
EOF&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: By default the local-exec provisioner runs the script as &amp;lt;code&amp;gt;/bin/sh -c &amp;quot;your&amp;lt;&amp;lt;EOFscript&amp;quot;&amp;lt;/code&amp;gt;, so it does not strip meta-characters such as &amp;quot;double quotes&amp;quot;, which would cause the &amp;lt;tt&amp;gt;aws cli&amp;lt;/tt&amp;gt; to fail. Therefore the output has been forced to &amp;lt;tt&amp;gt;text&amp;lt;/tt&amp;gt;.&lt;br /&gt;
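A quick way to see the quoting problem described above: JSON-formatted output keeps the double quotes as part of the value, so the next command receives a quoted string (the ARN below is a made-up example):&lt;br /&gt;

```shell
# Simulate what the aws cli returns with --output json vs --output text
JSON_ARN='"arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/demo/abc123"'
TEXT_ARN='arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/demo/abc123'
echo "$JSON_ARN"   # the quotes are part of the value and would be passed on to the next command
echo "$TEXT_ARN"   # --output text returns the bare value, safe to pass as --resource-arn
```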
&lt;br /&gt;
= &amp;lt;code&amp;gt;terraform providers&amp;lt;/code&amp;gt; =&lt;br /&gt;
List all providers in your project to see versions and dependencies.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ terraform providers&lt;br /&gt;
.&lt;br /&gt;
├── provider.aws ~&amp;gt; 2.44&lt;br /&gt;
├── provider.external ~&amp;gt; 1.2&lt;br /&gt;
├── provider.null ~&amp;gt; 2.1&lt;br /&gt;
├── provider.random ~&amp;gt; 2.2&lt;br /&gt;
├── provider.template ~&amp;gt; 2.1&lt;br /&gt;
├── module.kubernetes&lt;br /&gt;
│   ├── module.config&lt;br /&gt;
│   │   ├── provider.aws&lt;br /&gt;
│   │   ├── provider.helm ~&amp;gt; 0.10.4&lt;br /&gt;
│   │   ├── provider.kubernetes ~&amp;gt; 1.10.0&lt;br /&gt;
│   │   ├── provider.null (inherited)&lt;br /&gt;
│   │   ├── module.alb_ingress_controller&lt;br /&gt;
(...)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= terraform plugins cache =&lt;br /&gt;
&lt;br /&gt;
Option 1.&lt;br /&gt;
Create a &amp;lt;code&amp;gt;.terraformrc&amp;lt;/code&amp;gt; file in your $HOME directory and specify the cache directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt; ~/.terraformrc &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
plugin_cache_dir   = &amp;quot;$HOME/.terraform.d/plugin-cache/&amp;quot;&lt;br /&gt;
disable_checkpoint = true&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Delete per-root-module providers in the .terraform directory&lt;br /&gt;
find /git/repositories -type d -name &amp;quot;.terraform&amp;quot; -exec rm -rf {}/providers \;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Option 2.&lt;br /&gt;
Set the &amp;lt;code&amp;gt;TF_PLUGIN_CACHE_DIR&amp;lt;/code&amp;gt; environment variable to an existing directory that will hold the cache, then rerun &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; so downloaded&lt;br /&gt;
providers are saved into the shared (cache) directory.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
export TF_PLUGIN_CACHE_DIR=$HOME/.terraform.d/plugins-cache&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;terraform init&amp;lt;/code&amp;gt; (the local &amp;lt;code&amp;gt;.terraform&amp;lt;/code&amp;gt; directory has already been deleted).&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
terraform init -backend-config=dev.backend.tfvars&lt;br /&gt;
Initializing the backend...&lt;br /&gt;
&lt;br /&gt;
Successfully configured the backend &amp;quot;s3&amp;quot;! Terraform will automatically&lt;br /&gt;
use this backend unless the backend configuration changes.&lt;br /&gt;
&lt;br /&gt;
Initializing provider plugins...&lt;br /&gt;
- Checking for available provider plugins...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;random&amp;quot; (hashicorp/random) 2.3.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;kubernetes&amp;quot; (hashicorp/kubernetes) 1.10.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;helm&amp;quot; (hashicorp/helm) 1.2.3...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;aws&amp;quot; (hashicorp/aws) 2.70.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;external&amp;quot; (hashicorp/external) 1.2.0...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;null&amp;quot; (hashicorp/null) 2.1.2...&lt;br /&gt;
- Downloading plugin for provider &amp;quot;template&amp;quot; (hashicorp/template) 2.1.2...&lt;br /&gt;
&lt;br /&gt;
Terraform has been successfully initialized!&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200714-085009.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although the cache directory is shared by all Terraform projects, provider versioning still works and the normal version constraints apply. To be sure which version is locked for your current project, inspect the SHA256 hashes recorded in one of the files in the &amp;lt;tt&amp;gt;.terraform&amp;lt;/tt&amp;gt; directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ cat .terraform/plugins/linux_amd64/lock.json &lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;aws&amp;quot;: &amp;quot;f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f&amp;quot;,&lt;br /&gt;
  &amp;quot;external&amp;quot;: &amp;quot;6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4&amp;quot;,&lt;br /&gt;
  &amp;quot;helm&amp;quot;: &amp;quot;09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04&amp;quot;,&lt;br /&gt;
  &amp;quot;kubernetes&amp;quot;: &amp;quot;7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff&amp;quot;,&lt;br /&gt;
  &amp;quot;null&amp;quot;: &amp;quot;c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc&amp;quot;,&lt;br /&gt;
  &amp;quot;random&amp;quot;: &amp;quot;791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed&amp;quot;,&lt;br /&gt;
  &amp;quot;template&amp;quot;: &amp;quot;cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
 &lt;br /&gt;
find ~/.terraform.d/plugins -type f | xargs sha256sum&lt;br /&gt;
f08daaf64b9fca69978a40f88091d1a77fc9725fb04b0fec5e731609c53a025f  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-aws_v2.70.0_x4&lt;br /&gt;
6dad56007a3cb0ae9c4655c67d13502e51e38ca2673cf0f22a5fadce6803f9e4  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-external_v1.2.0_x4&lt;br /&gt;
c56285e7bd25a806bf86fcd4893edbe46e621a46e20fe24ef209b6fd0b7cf5fc  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-null_v2.1.2_x4&lt;br /&gt;
791ef28ff31913d9b2ef0bedb97de98bebafe66d002bc2b9d01377e59a6cfaed  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-random_v2.3.0_x4&lt;br /&gt;
09b8ccb993f7d776555e811c856de006ac12b9fedfca15b07a85a6814914fd04  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-helm_v1.2.3_x4&lt;br /&gt;
7ebf3273e622d1adb736e98f6fa5cc7e664c61b9171105b13c3b5ea8f8ebc5ff  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-kubernetes_v1.10.0_x4&lt;br /&gt;
cd8665642bf0f5b5f57a53050d10fd83415428c2dc6713b85e174e007fcc93bf  /home/vagrant/.terraform.d/plugins/linux_amd64/terraform-provider-template_v2.1.2_x4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As you can see, the SHA256 hash for the AWS provider saved in the &amp;lt;tt&amp;gt;lock.json&amp;lt;/tt&amp;gt; file matches the hash of the provider binary saved in the cache directory.&lt;br /&gt;
&lt;br /&gt;
= AWS - [https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI RDS aurora] - versioning =&lt;br /&gt;
[https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.20180206.html#AuroraMySQL.Updates.20180206.CLI Engine name] 'aurora-mysql' refers to engine version 5.7.x; for version 5.6.10a the engine name is 'aurora'.&lt;br /&gt;
* The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora.&lt;br /&gt;
* The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a.&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=terraform&amp;gt;&lt;br /&gt;
module &amp;quot;db&amp;quot; {&lt;br /&gt;
  source  = &amp;quot;terraform-aws-modules/rds-aurora/aws&amp;quot;&lt;br /&gt;
  version = &amp;quot;2.29.0&amp;quot;&lt;br /&gt;
  name    = &amp;quot;db&amp;quot;&lt;br /&gt;
  engine          = &amp;quot;aurora&amp;quot;                  # v5.6&lt;br /&gt;
  engine_version  = &amp;quot;5.6.mysql_aurora.1.23.0&amp;quot; # v5.6&lt;br /&gt;
  #engine         = &amp;quot;aurora-mysql&amp;quot;            # v5.7&lt;br /&gt;
  #engine_version = &amp;quot;5.7.mysql_aurora.2.09.0&amp;quot; # v5.7&lt;br /&gt;
  ...&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/localstack/localstack localstack] - Mock AWS Services =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
pip install localstack&lt;br /&gt;
localstack start&lt;br /&gt;
SERVICES=kinesis,lambda,sqs,dynamodb DEBUG=1 localstack start&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
;Examples&lt;br /&gt;
* [https://github.com/MattSurabian/bad-terraform bad-terraform]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/tfsec/tfsec tfsec] - Security Scanning TF code =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent -L &amp;quot;https://api.github.com/repos/tfsec/tfsec/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/tfsec/tfsec/releases/download/${LATEST}/tfsec-linux-amd64 -o /usr/local/bin/tfsec &lt;br /&gt;
sudo chmod +x /usr/local/bin/tfsec&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm -it -v &amp;quot;$(pwd):/src&amp;quot; liamg/tfsec /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tfsec .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-linters/tflint tflint] - validate provider-specific issues =&lt;br /&gt;
Requires Terraform &amp;gt;= 0.12&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-linters/tflint/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/terraform-linters/tflint/releases/download/${LATEST}/tflint_linux_amd64.zip -o $TEMPDIR/tflint_linux_amd64.zip&lt;br /&gt;
sudo unzip $TEMPDIR/tflint_linux_amd64.zip -d /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Configure tflint&lt;br /&gt;
# | Current directory (./.tflint.hcl)&lt;br /&gt;
# | Home directory (~/.tflint.hcl)&lt;br /&gt;
tflint --config other_config.hcl&lt;br /&gt;
&lt;br /&gt;
## Add plugins&lt;br /&gt;
https://github.com/terraform-linters/tflint/tree/master/docs/rules&lt;br /&gt;
cat &amp;gt; ./.tflint.hcl &amp;lt;&amp;lt;EOF&lt;br /&gt;
plugin &amp;quot;aws&amp;quot; {&lt;br /&gt;
  enabled = true&lt;br /&gt;
  version = &amp;quot;0.5.0&amp;quot;&lt;br /&gt;
  source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-aws&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
plugin &amp;quot;google&amp;quot; {&lt;br /&gt;
    enabled = true&lt;br /&gt;
    version = &amp;quot;0.15.0&amp;quot;&lt;br /&gt;
    source  = &amp;quot;github.com/terraform-linters/tflint-ruleset-google&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
tflint --module&lt;br /&gt;
tflint --module --var-file=dev.tfvars&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker pull ghcr.io/terraform-linters/tflint:latest&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1&lt;br /&gt;
docker run --rm -v $(pwd):/src -t ghcr.io/terraform-linters/tflint:v0.34.1 -v&lt;br /&gt;
&lt;br /&gt;
# Init and check&lt;br /&gt;
docker run --rm -v $(pwd):/src -t --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 -c &amp;quot;tflint --init; tflint /src/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
## tflint must be executed in the Terraform root path, thus `cd /src`&lt;br /&gt;
docker run --rm -v $(pwd):/src -t -e TFLINT_LOG=debug --entrypoint /bin/sh  ghcr.io/terraform-linters/tflint:v0.34.1 \&lt;br /&gt;
-c &amp;quot;cd /src; tflint --init; tflint --var-file=environments/gcp-dev.tfvars --module&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/terraform-docs/terraform-docs terraform-docs] - generate Terraform documentation = &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the binary&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/terraform-docs/terraform-docs/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
wget https://github.com/terraform-docs/terraform-docs/releases/download/$VERSION/terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf terraform-docs-$VERSION-linux-amd64.tar.gz&lt;br /&gt;
sudo install terraform-docs /usr/local/bin/terraform-docs&lt;br /&gt;
&lt;br /&gt;
# Use with docker&lt;br /&gt;
docker run --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) quay.io/terraform-docs/terraform-docs:0.16.0 markdown /src&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform-docs . &amp;gt; README.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cycloidio/inframap InfraMap] - plot your Terraform state =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/cycloidio/inframap/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -L https://github.com/cycloidio/inframap/releases/download/${VERSION}/inframap-linux-amd64.tar.gz -o $TEMPDIR/inframap-linux-amd64.tar.gz&lt;br /&gt;
tar xzvf $TEMPDIR/inframap-linux-amd64.tar.gz -C $TEMPDIR inframap-linux-amd64&lt;br /&gt;
sudo install $TEMPDIR/inframap-linux-amd64 /usr/local/bin/inframap&lt;br /&gt;
&lt;br /&gt;
# Install graphviz, it contains the `dot` program&lt;br /&gt;
sudo apt install graphviz&lt;br /&gt;
&lt;br /&gt;
# Install GraphEasy&lt;br /&gt;
## Cpan manager&lt;br /&gt;
sudo apt install cpanminus # install the Perl package manager&lt;br /&gt;
sudo cpanm Graph::Easy # Graph-Easy-0.76 as of 2021-07&lt;br /&gt;
&lt;br /&gt;
## Apt-get (tested with Ubuntu 20.04 LTS)&lt;br /&gt;
sudo apt install libgraph-easy-perl # Graph::Easy v0.76&lt;br /&gt;
&lt;br /&gt;
# a sample usage&lt;br /&gt;
cat input.dot | graph-easy --from=dot --as_ascii&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Usage inframap&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
The most important subcommands are:&lt;br /&gt;
* generate: generates the graph from STDIN or file, STDIN can be .tf files/modules or .tfstate&lt;br /&gt;
* prune: removes all unnecessary information from the state or HCL (not supported yet) so it can be shared without any security concerns&lt;br /&gt;
&lt;br /&gt;
# Generate your infrastructure graph in a DOT representation from: Terraform files or state file&lt;br /&gt;
cat terraform.tf      | inframap generate --printer dot --hcl     | tee graph.dot &lt;br /&gt;
cat terraform.tfstate | inframap generate --printer dot --tfstate | tee graph.dot&lt;br /&gt;
&lt;br /&gt;
# `prune` command will sanitize and anonymize content of the files&lt;br /&gt;
cat terraform.tfstate | inframap prune --canonicals --tfstate &amp;gt; cleaned.tfstate &lt;br /&gt;
&lt;br /&gt;
# Pipe all the previous commands. ASCII graph is generated using graph-easy&lt;br /&gt;
cat terraform.tfstate | inframap prune --tfstate | inframap generate --tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from State file - visualizing with `dot` or `graph-easy`&lt;br /&gt;
inframap generate state.tfstate | dot -Tpng &amp;gt; graph.png&lt;br /&gt;
inframap generate state.tfstate | graph-easy&lt;br /&gt;
&lt;br /&gt;
# from HCL&lt;br /&gt;
inframap generate terraform.tf | graph-easy&lt;br /&gt;
inframap generate ./my-module/ | graph-easy # or HCL module&lt;br /&gt;
&lt;br /&gt;
# using docker image (assuming that your Terraform files are in the working directory)&lt;br /&gt;
docker run --rm -v ${PWD}:/opt cycloid/inframap generate /opt/terraform.tfstate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of EKS module&lt;br /&gt;
:[[File:ClipCapIt-210716-090202.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/Pluralith/pluralith-cli/releases Pluralith] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli/releases/download/${VERSION}/pluralith_cli_linux_amd64_${VERSION} -o pluralith_cli_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_linux_amd64_${VERSION} /usr/local/bin/pluralith&lt;br /&gt;
&lt;br /&gt;
# Install pluralith-cli-graphing&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/Pluralith/pluralith-cli-graphing-release/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $VERSION&lt;br /&gt;
curl -L https://github.com/Pluralith/pluralith-cli-graphing-release/releases/download/v${VERSION}/pluralith_cli_graphing_linux_amd64_${VERSION} -o pluralith_cli_graphing_linux_amd64_${VERSION}&lt;br /&gt;
sudo install pluralith_cli_graphing_linux_amd64_${VERSION} ~/Pluralith/bin/pluralith-cli-graphing&lt;br /&gt;
&lt;br /&gt;
# Check versions&lt;br /&gt;
pluralith version&lt;br /&gt;
parsing response failed -&amp;gt; GetGitHubRelease: %!w(&amp;lt;nil&amp;gt;)&lt;br /&gt;
 _&lt;br /&gt;
|_)|    _ _ |._|_|_ &lt;br /&gt;
|  ||_|| (_||| | | |&lt;br /&gt;
&lt;br /&gt;
→ CLI Version: 0.2.2&lt;br /&gt;
→ Graph Module Version: 0.2.1&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
pluralith login --api-key $PLURALITH_API_KEY&lt;br /&gt;
&lt;br /&gt;
# Generate PDF graph locally&lt;br /&gt;
pluralith &amp;lt;terraform-root-folder&amp;gt; --var-file environments/dev.tfvars graph --local-only&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/flosell/iam-policy-json-to-terraform iam-policy-json-to-terraform] =&lt;br /&gt;
Convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/flosell/iam-policy-json-to-terraform/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
sudo curl -L https://github.com/flosell/iam-policy-json-to-terraform/releases/download/${LATEST}/iam-policy-json-to-terraform_amd64 -o /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
sudo chmod +x /usr/local/bin/iam-policy-json-to-terraform&lt;br /&gt;
&lt;br /&gt;
# Usage:&lt;br /&gt;
iam-policy-json-to-terraform &amp;lt; some-policy.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/hieven/terraform-visual terraform-visual] =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt install nodejs npm&lt;br /&gt;
sudo npm install -g @terraform-visual/cli&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
terraform plan -out=plan.out                # Run plan and output as a file&lt;br /&gt;
terraform show -json plan.out &amp;gt; plan.json   # Read plan file and output it in JSON format&lt;br /&gt;
terraform-visual --plan plan.json&lt;br /&gt;
firefox terraform-visual-report/index.html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/cloudskiff/driftctl driftctl] =&lt;br /&gt;
Measures infrastructure-as-code coverage and tracks infrastructure drift.&lt;br /&gt;
IaC: Terraform; cloud providers: AWS, GitHub (Azure and GCP on the roadmap for 2021). Spot discrepancies as they happen: driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.&lt;br /&gt;
&lt;br /&gt;
;Features [https://docs.driftctl.com/ docs]&lt;br /&gt;
* Scan cloud provider and map resources with IaC code&lt;br /&gt;
* Analyze diffs, and warn about drift and unwanted unmanaged resources&lt;br /&gt;
* Allow users to ignore resources&lt;br /&gt;
* Multiple output formats&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -L https://github.com/snyk/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl&lt;br /&gt;
sudo install ./driftctl /usr/local/bin/driftctl&lt;br /&gt;
driftctl version&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://docs.driftctl.com/0.39.0/usage/cmd/scan-usage Detect drift on GCP]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(driftctl completion bash)&lt;br /&gt;
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.config/gcloud/application_default_credentials.json&lt;br /&gt;
export CLOUDSDK_CORE_PROJECT=&amp;lt;myproject_id&amp;gt;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot;&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --deep --output html://output.html&lt;br /&gt;
driftctl scan --to=&amp;quot;gcp+tf&amp;quot; --from tfstate+gs://my-bucket/path/to/state.tfstate # Use this when working with workspaces&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/infracost/infracost infracost] =&lt;br /&gt;
Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin&lt;br /&gt;
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh&lt;br /&gt;
&lt;br /&gt;
# Register for a free API key&lt;br /&gt;
infracost register # The key is saved in ~/.config/infracost/credentials.yml.&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown on live infra&lt;br /&gt;
infracost breakdown --path terraform_nlb_static_eips&lt;br /&gt;
&lt;br /&gt;
# Show cost breakdown based on Terraform plan&lt;br /&gt;
cd path/to/src_code&lt;br /&gt;
terraform init&lt;br /&gt;
terraform plan -out  tfplan.binary&lt;br /&gt;
terraform show -json tfplan.binary &amp;gt; plan.json&lt;br /&gt;
&lt;br /&gt;
## run via binary&lt;br /&gt;
infracost breakdown --path plan.json&lt;br /&gt;
infracost breakdown --path plan.json --show-skipped --format html &amp;gt; /vagrant/infracost.html&lt;br /&gt;
infracost diff      --path plan.json&lt;br /&gt;
&lt;br /&gt;
## run via Docker&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff      --path /src/plan.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
## Cost breakdown&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 breakdown --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
 Name                                                              Monthly Qty  Unit   Monthly Cost &lt;br /&gt;
 module.gke.google_container_cluster.primary                                                        &lt;br /&gt;
 ├─ Cluster management fee                                                 730  hours        $73.00 &lt;br /&gt;
 └─ default_pool                                                                                    &lt;br /&gt;
    ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                 6,570  hours       $242.16 &lt;br /&gt;
    └─ Standard provisioned storage (pd-standard)                          900  GiB          $36.00 &lt;br /&gt;
 module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]                                   &lt;br /&gt;
 ├─ Instance usage (Linux/UNIX, on-demand, e2-medium)                    6,570  hours       $242.16 &lt;br /&gt;
 └─ Standard provisioned storage (pd-standard)                             900  GiB          $36.00 &lt;br /&gt;
 OVERALL TOTAL                                                                              $629.31 &lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&lt;br /&gt;
## Cost difference&lt;br /&gt;
docker run -it --rm --volume &amp;quot;$(pwd):/src&amp;quot; -u $(id -u) -e INFRACOST_API_KEY=$INFRACOST_API_KEY infracost/infracost:0.9.15 diff --path /src/plan.json&lt;br /&gt;
Detected Terraform plan JSON file at /src/plan.json&lt;br /&gt;
✔ Calculating monthly cost estimate &lt;br /&gt;
&lt;br /&gt;
Project: /src/plan.json&lt;br /&gt;
&lt;br /&gt;
+ module.gke.google_container_cluster.primary&lt;br /&gt;
  +$351&lt;br /&gt;
    + Cluster management fee&lt;br /&gt;
      +$73.00&lt;br /&gt;
    + default_pool&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          +$242&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          +$36.00&lt;br /&gt;
    + node_pool[0]&lt;br /&gt;
        + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
          $0.00&lt;br /&gt;
        + Standard provisioned storage (pd-standard)&lt;br /&gt;
          $0.00&lt;br /&gt;
+ module.gke.google_container_node_pool.pools[&amp;quot;default-node-pool&amp;quot;]&lt;br /&gt;
  +$278&lt;br /&gt;
    + Instance usage (Linux/UNIX, on-demand, e2-medium)&lt;br /&gt;
      +$242&lt;br /&gt;
    + Standard provisioned storage (pd-standard)&lt;br /&gt;
      +$36.00&lt;br /&gt;
Monthly cost change for /src/plan.json&lt;br /&gt;
Amount:  +$629 ($0.00 → $629)&lt;br /&gt;
&lt;br /&gt;
──────────────────────────────────&lt;br /&gt;
Key: ~ changed, + added, - removed&lt;br /&gt;
&lt;br /&gt;
11 cloud resources were detected, rerun with --show-skipped to see details:&lt;br /&gt;
∙ 2 were estimated, 2 include usage-based costs, see https://infracost.io/usage-file&lt;br /&gt;
∙ 9 were free&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resources:&lt;br /&gt;
* DockerHub: https://hub.docker.com/r/infracost/infracost/tags&lt;br /&gt;
&lt;br /&gt;
= [https://tfautomv.dev/ tfautomv - Terraform refactor] =&lt;br /&gt;
Tfautomv writes moved blocks for you so your refactoring is quicker and less error-prone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tfautomv -dry-run&lt;br /&gt;
tfautomv -show-analysis&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= [https://www.davidc.net/sites/default/subnets/subnets.html?network=192.168.0.0&amp;amp;mask=22&amp;amp;division=19.3d431 Subnetting] =&lt;br /&gt;
Very useful page for subnetting: https://www.davidc.net/sites/default/subnets/subnets.html&lt;br /&gt;
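For an offline sanity check of what the page computes, the child subnets of a block can be enumerated with shell arithmetic; a minimal sketch splitting 192.168.0.0/22 into its /24s:&lt;br /&gt;

```shell
# Enumerate the /24 children of 192.168.0.0/22
prefix=22; child=24
count=$(( 2 ** (child - prefix) ))    # 2^(24-22) = 4 subnets
i=0
while [ "$i" -lt "$count" ]; do
  echo "192.168.${i}.0/${child}"      # third octet steps by one: each /24 spans 256 addresses
  i=$(( i + 1 ))
done
# → 192.168.0.0/24 ... 192.168.3.0/24
```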
&lt;br /&gt;
= Resources =&lt;br /&gt;
*[https://discuss.hashicorp.com/u/apparentlymart apparentlymart] The Hero! discuss.hashicorp.com&lt;br /&gt;
*[https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca Comprehensive-guide-to-terraform] gruntwork.io&lt;br /&gt;
*[https://github.com/antonbabenko/terraform-best-practices Terraform good practices] naming conventions, etc.&lt;br /&gt;
*[https://www.runatlantis.io/ Atlantis] Terraform Pull Request Automation, Listens for webhooks from GitHub/GitLab/Bitbucket/Azure DevOps, Runs terraform commands remotely and comments back with their output.&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7037</id>
		<title>Kubernetes/Kustomize</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7037"/>
		<updated>2024-08-06T22:33:25Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Embedded versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://kustomize.io/ Kustomize] =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/ kubectl+kustomize] SIG CLI&lt;br /&gt;
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.&lt;br /&gt;
&lt;br /&gt;
= Embedded versions =&lt;br /&gt;
;FluxCD (v2): runs kustomize-controller and helm-controller; use the following methods to find out the versions of the embedded components:&lt;br /&gt;
&lt;br /&gt;
* kustomize-controller versions [https://github.com/fluxcd/kustomize-controller/blob/main/CHANGELOG.md#100 CHANGELOG]:&lt;br /&gt;
** 0.27.0 - Kustomize v4.5.7&lt;br /&gt;
** 1.0.0 - Kustomize v5.0.3 (introduced in 1.0.0-rc.4 release)&lt;br /&gt;
** 1.2.0 - Kustomize v5.3.0, SOPS v3.8.1&lt;br /&gt;
&lt;br /&gt;
* helm-controller versions [https://github.com/fluxcd/helm-controller/blob/main/CHANGELOG.md#0370 CHANGELOG]:&lt;br /&gt;
** v0.37.0 - helm v3.13.2, post-renderer kustomize v5.3.0&lt;br /&gt;
&lt;br /&gt;
;ArgoCD&lt;br /&gt;
* v2.10.6+d504d2b&lt;br /&gt;
** kustomize: v5.2.1 2023-10-19T20:13:51Z&lt;br /&gt;
** Helm: v3.14.3+gf03cc04&lt;br /&gt;
** kubectl: v0.26.11&lt;br /&gt;
&lt;br /&gt;
= Install =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Detects your OS and downloads kustomize binary to cwd&lt;br /&gt;
curl -s &amp;quot;https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh&amp;quot;  | bash&lt;br /&gt;
&lt;br /&gt;
# Install on Linux - option2&lt;br /&gt;
VERSION=v4.1.2&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/kubernetes-sigs/kustomize/releases&amp;quot; | jq -r '.[].tag_name | select(. | contains(&amp;quot;kustomize&amp;quot;))' | sort | tail -1 | cut -d&amp;quot;/&amp;quot; -f2); echo $VERSION&lt;br /&gt;
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${VERSION}/kustomize_${VERSION}_linux_amd64.tar.gz -o kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
tar xzvf kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize_${VERSION}&lt;br /&gt;
&lt;br /&gt;
kustomize version --short&lt;br /&gt;
{kustomize/v4.1.2  2021-04-15T20:38:06Z  }&lt;br /&gt;
&lt;br /&gt;
kustomize version&lt;br /&gt;
v5.3.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Kustomize build workflow =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2052 kustomize vars] - use &amp;lt;code&amp;gt;envsubst&amp;lt;/code&amp;gt; instead&lt;br /&gt;
&amp;lt;source&amp;gt;$ kustomize build ~/target&amp;lt;/source&amp;gt;&lt;br /&gt;
# load universal k8s object descriptions&lt;br /&gt;
# read &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; from '''target'''&lt;br /&gt;
# kustomize '''bases''' (recurse 2-5)&lt;br /&gt;
# load and/or generate resources&lt;br /&gt;
# apply '''target's''' kustomization operations&lt;br /&gt;
# fix name references&lt;br /&gt;
# emit yaml&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; =&lt;br /&gt;
A build stage first &lt;br /&gt;
* processes resources, &lt;br /&gt;
* then it processes generators, adding to the resource list under consideration, &lt;br /&gt;
* then it processes transformers to modify the list, &lt;br /&gt;
* and finally runs validators to check the list for errors.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
resources:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what resources you want to customize &lt;br /&gt;
&lt;br /&gt;
# cross-cutting fields&lt;br /&gt;
namespace: custom&lt;br /&gt;
namePrefix: dev-&lt;br /&gt;
nameSuffix: &amp;quot;-svc&amp;quot;&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: web&lt;br /&gt;
commonAnnotations:&lt;br /&gt;
  value: app&lt;br /&gt;
&lt;br /&gt;
generators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what new resources should be created.&lt;br /&gt;
generatorOptions:&lt;br /&gt;
  disableNameSuffixHash: true&lt;br /&gt;
  labels:&lt;br /&gt;
    env: prod&lt;br /&gt;
  annotations:&lt;br /&gt;
    app: custom&lt;br /&gt;
&lt;br /&gt;
transformers:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what to transform in above mentioned resources&lt;br /&gt;
&lt;br /&gt;
validators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Patches ==&lt;br /&gt;
;patchStrategicMerge: Kubernetes supports a customized version of JSON merge patch called strategic merge patch. This patch format is used by &amp;lt;code&amp;gt;kubectl apply&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;kubectl edit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl patch&amp;lt;/code&amp;gt;, and contains specialized directives to control how specific fields are merged.&lt;br /&gt;
&lt;br /&gt;
= Example 101 =&lt;br /&gt;
{{Note|Bases have been deprecated in v2.1.0 [https://kubernetes-sigs.github.io/kustomize/blog/2019/06/18/v2.1.0/#resources-expanded-bases-deprecated resources-expanded-bases-deprecated]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [https://kustomize.io/tutorial Kustomize builder] note that it operates on the 1st yaml document&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Example 101 - environment type overrides&lt;br /&gt;
|- &lt;br /&gt;
! base/kustomization.yaml&lt;br /&gt;
! overlays/dev/kustomization.yaml&lt;br /&gt;
! overlays/prod/kustomization.yaml&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: sonarqube&lt;br /&gt;
resources:&lt;br /&gt;
- gateway.yaml&lt;br /&gt;
- virtual-service.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
.&lt;br /&gt;
├── base&lt;br /&gt;
│   ├── gateway.yaml&lt;br /&gt;
│   ├── kustomization.yaml&lt;br /&gt;
│   └── virtual-service.yaml&lt;br /&gt;
└── overlays # a more contextual name for this directory would be 'environments'&lt;br /&gt;
    ├── dev&lt;br /&gt;
    │   ├── gateway_patch.yaml&lt;br /&gt;
    │   ├── kustomization.yaml&lt;br /&gt;
    │   └── virtual-service_patch.yaml&lt;br /&gt;
    └── prod&lt;br /&gt;
        ├── gateway_patch.yaml&lt;br /&gt;
        ├── kustomization.yaml&lt;br /&gt;
        └── virtual-service_patch.yaml&lt;br /&gt;
&lt;br /&gt;
# Build kustomized output&lt;br /&gt;
kustomize version --short # -&amp;gt; {kustomize/v3.8.2  2020-08-29T17:44:01Z  }&lt;br /&gt;
kustomize build overlays/dev # apply patches&lt;br /&gt;
kustomize build base         # run common functions (as described in base/kustomization.yaml) against the whole code base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
What happens?&lt;br /&gt;
# &amp;lt;code&amp;gt;kustomize build overlays/dev&amp;lt;/code&amp;gt; finds &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt;, that describes:&lt;br /&gt;
## &amp;lt;code&amp;gt;patches: [gateway_patch.yaml, virtual-service_patch.yaml]&amp;lt;/code&amp;gt; to be applied over the base &amp;lt;code&amp;gt;resources: [../../base]&amp;lt;/code&amp;gt;. There are 3 types of patches to choose from: patches, patchesStrategicMerge and [https://skryvets.com/blog/2019/05/15/kubernetes-kustomize-json-patches-6902 patchesJson6902]&lt;br /&gt;
# &amp;lt;code&amp;gt;overlays/dev/kustomization.yaml&amp;lt;/code&amp;gt; cascades to the base (source of manifests to be changed) via directive &amp;lt;code&amp;gt;resources: [&amp;quot;../../base&amp;quot;]&amp;lt;/code&amp;gt;&lt;br /&gt;
# The base directory contains and runs its own &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
# The &amp;lt;code&amp;gt;base/kustomization.yaml&amp;lt;/code&amp;gt; contains common operations, eg. &amp;lt;code&amp;gt;commonLabels, namePrefix&amp;lt;/code&amp;gt;, to be applied to the whole code base.&lt;br /&gt;
# Then the patch file(s) are applied, eg. &amp;lt;code&amp;gt;gateway_patch.yaml&amp;lt;/code&amp;gt; contains enough information to identify a resource/object and apply the changes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So, what happens&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# Applying the patch: overlays/dev/gateway_patch.yaml &lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  name: sonarqube &lt;br /&gt;
spec:&lt;br /&gt;
  servers:&lt;br /&gt;
  - port:&lt;br /&gt;
      number: 443&lt;br /&gt;
      name: http&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
    hosts:&lt;br /&gt;
     - sonarqube-dev.acme.com # &amp;lt;- override&lt;br /&gt;
#   | &lt;br /&gt;
#   | over the base&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
# base/gateway.yaml&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube.acme.com&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
#   | &lt;br /&gt;
#   | results with&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
    owner: piotr # &amp;lt;- label added by base kustomization.yaml&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube-dev.acme.com # &amp;lt;- patch override&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check it yourself&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#         __unchanged manifest_    _base kustomization_    ___patch overlay____________&lt;br /&gt;
vimdiff &amp;lt;(cat base/gateway.yaml) &amp;lt;(kustomize build base) &amp;lt;(kustomize build overlays/dev)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200910-010734.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Cheatsheet =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md Helm charts - last mile]&lt;br /&gt;
&lt;br /&gt;
= Patch [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md multiple objects] =&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
  - path: patch.json&lt;br /&gt;
    target:&lt;br /&gt;
      kind: PersistentVolume&lt;br /&gt;
      version: v1&lt;br /&gt;
      group: &amp;quot;&amp;quot;&lt;br /&gt;
      name: volume-(data|master)-\d # regex match&lt;br /&gt;
      labelSelector: |&lt;br /&gt;
        app.kubernetes.io/component=storage,&lt;br /&gt;
        app.kubernetes.io/name=elasticsearch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
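&lt;br /&gt;
How such a &amp;lt;code&amp;gt;target&amp;lt;/code&amp;gt; selects objects can be sketched in Python (assumed, simplified matching logic, not kustomize's actual code; only kind, regex name and labelSelector are handled):&lt;br /&gt;

```python
# A target matches an object when its kind equals, its name matches the
# regex, and every labelSelector key=value pair is present on the object.
import re

def matches(target, obj):
    meta = obj.get("metadata", {})
    if target.get("kind") and obj.get("kind") != target["kind"]:
        return False
    if target.get("name") and not re.fullmatch(target["name"], meta.get("name", "")):
        return False
    labels = meta.get("labels", {})
    for pair in target.get("labelSelector", "").replace("\n", "").split(","):
        if pair:
            k, _, v = pair.partition("=")
            if labels.get(k.strip()) != v.strip():
                return False
    return True

target = {
    "kind": "PersistentVolume",
    "name": r"volume-(data|master)-\d",
    "labelSelector": "app.kubernetes.io/component=storage",
}
pv = {"kind": "PersistentVolume",
      "metadata": {"name": "volume-data-1",
                   "labels": {"app.kubernetes.io/component": "storage"}}}
print(matches(target, pv))   # True
print(matches(target, {"kind": "PersistentVolume",
                       "metadata": {"name": "volume-logs-1"}}))  # False
```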
&lt;br /&gt;
&lt;br /&gt;
Component example&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1alpha1&lt;br /&gt;
kind: Component&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
- target: &lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    version: v2beta1&lt;br /&gt;
    group: helm.toolkit.fluxcd.io&lt;br /&gt;
    name: external-dns-.+$ # [1]&lt;br /&gt;
  patch: |-&lt;br /&gt;
    apiVersion: helm.toolkit.fluxcd.io/v2beta1&lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    metadata:&lt;br /&gt;
      name: ALL            # [2]&lt;br /&gt;
      namespace: flux-system&lt;br /&gt;
    spec:&lt;br /&gt;
      values:&lt;br /&gt;
        tolerations:&lt;br /&gt;
          - key: &amp;quot;components.gke.io/gke-managed-components&amp;quot;&lt;br /&gt;
            operator: Exists&lt;br /&gt;
        affinity:&lt;br /&gt;
          nodeAffinity:&lt;br /&gt;
            preferredDuringSchedulingIgnoredDuringExecution:&lt;br /&gt;
              - weight: 100&lt;br /&gt;
                preference:&lt;br /&gt;
                  matchExpressions:&lt;br /&gt;
                  - key: &amp;quot;predictx/workload&amp;quot;&lt;br /&gt;
                    operator: In&lt;br /&gt;
                    values:&lt;br /&gt;
                    - &amp;quot;infra&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# [1] Regex match&lt;br /&gt;
# [2] A placeholder name is required; any value, ie 'ALL', is replaced by the name of each matched object.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Delete an object from the base =&lt;br /&gt;
''Strategic Merge Patch'' supports patch directives such as replace, merge, and delete, whereas a plain patch can only modify (manipulate) fields.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patchesStrategicMerge:&lt;br /&gt;
- |-&lt;br /&gt;
  apiVersion: v1&lt;br /&gt;
  kind: Namespace&lt;br /&gt;
  metadata:&lt;br /&gt;
    name: unwanted-namespace&lt;br /&gt;
  $patch: delete&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
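&lt;br /&gt;
The effect of &amp;lt;code&amp;gt;$patch: delete&amp;lt;/code&amp;gt; can be sketched in Python (assumed, simplified semantics: drop any resource whose apiVersion/kind/name match the patch body):&lt;br /&gt;

```python
# Filter a resource list, removing the object identified by the delete patch.

def apply_delete_patch(resources, patch):
    ident = (patch["apiVersion"], patch["kind"], patch["metadata"]["name"])
    return [r for r in resources
            if (r["apiVersion"], r["kind"], r["metadata"]["name"]) != ident]

resources = [
    {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "unwanted-namespace"}},
    {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "keep-me"}},
]
patch = {"apiVersion": "v1", "kind": "Namespace",
         "metadata": {"name": "unwanted-namespace"}, "$patch": "delete"}
print([r["metadata"]["name"] for r in apply_delete_patch(resources, patch)])
# prints ['keep-me']
```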
&lt;br /&gt;
= secretGenerator =&lt;br /&gt;
Secrets can be generated from environment variables. Within a template file there is a list of variables, where each variable name becomes a key and its value becomes the value.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Environment variable secret template&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
GIT_USERNAME&lt;br /&gt;
GIT_PASSWORD&lt;br /&gt;
GIT_CREDENTIALS&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Kustomization&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
secretGenerator:&lt;br /&gt;
  - name: argocd-git-secret&lt;br /&gt;
    envs:&lt;br /&gt;
      - git.env&lt;br /&gt;
    options:&lt;br /&gt;
      disableNameSuffixHash: true&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
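&lt;br /&gt;
What the generator produces can be sketched in Python (an illustration only; the env file content and values below are made up, and real kustomize handles more cases):&lt;br /&gt;

```python
# Each KEY=value line of an `envs:` file becomes a data key in a Secret,
# with its value base64-encoded, as in Kubernetes Secret manifests.
import base64

def secret_from_env(name, env_text):
    data = {}
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            k, _, v = line.partition("=")
            data[k] = base64.b64encode(v.encode()).decode()
    return {"apiVersion": "v1", "kind": "Secret",
            "metadata": {"name": name}, "data": data}

env = "GIT_USERNAME=bot\nGIT_PASSWORD=s3cret"
secret = secret_from_env("argocd-git-secret", env)
print(secret["data"]["GIT_USERNAME"])  # base64 of "bot"
```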
&lt;br /&gt;
= Patch - add an item to a list =&lt;br /&gt;
* https://stackoverflow.com/questions/71622419/adding-items-to-a-list-with-kubectl-kustomize&lt;br /&gt;
* [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch Strategic merge patch docs]&lt;br /&gt;
&lt;br /&gt;
In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. To solve this problem, Strategic Merge Patch uses the go struct tag of the API objects to determine what lists should be merged and which ones should not. Read more at [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch strategic merge patch docs].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
patchesJson6902:&lt;br /&gt;
  - patch: |-&lt;br /&gt;
      - op: add&lt;br /&gt;
        path: /spec/valuesFrom/-&lt;br /&gt;
        value: # below map will be added as an item to the list, pay attention to `-` sign at the end of path&lt;br /&gt;
          kind: ConfigMap&lt;br /&gt;
          name: values-1-yaml&lt;br /&gt;
    target:&lt;br /&gt;
      group: helm.toolkit.fluxcd.io&lt;br /&gt;
      kind: HelmRelease&lt;br /&gt;
      name: kube-prometheus-stack&lt;br /&gt;
      version: v2beta1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
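&lt;br /&gt;
The &amp;lt;code&amp;gt;add&amp;lt;/code&amp;gt; op with a trailing &amp;lt;code&amp;gt;/-&amp;lt;/code&amp;gt; appends to a list; a minimal Python sketch of this RFC 6902 behaviour (simplified, not a full JSON Patch implementation):&lt;br /&gt;

```python
# Walk the path segments to the parent node; a final "-" appends to a list,
# any other final segment sets a key (or list index).

def json6902_add(doc, path, value):
    parts = [p for p in path.split("/") if p]
    node = doc
    for p in parts[:-1]:
        node = node[int(p)] if isinstance(node, list) else node[p]
    last = parts[-1]
    if isinstance(node, list) and last == "-":
        node.append(value)       # "-" means "append to the list"
    else:
        node[last] = value
    return doc

hr = {"spec": {"valuesFrom": [{"kind": "ConfigMap", "name": "values-0-yaml"}]}}
json6902_add(hr, "/spec/valuesFrom/-", {"kind": "ConfigMap", "name": "values-1-yaml"})
print([v["name"] for v in hr["spec"]["valuesFrom"]])
# prints ['values-0-yaml', 'values-1-yaml']
```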
&lt;br /&gt;
= [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacements] = &lt;br /&gt;
* https://stackoverflow.com/questions/71358674/kustomize-how-to-reference-a-value-from-a-configmap-in-another-resource-overlay&lt;br /&gt;
Use of vars is deprecated; please use replacements instead.&lt;br /&gt;
&lt;br /&gt;
= Known issues =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2034 commonLabels altering podSelector.matchLabels] and [https://github.com/kubernetes-sigs/kustomize/issues/157 Allow excluding some label selectors from commonLabels]&lt;br /&gt;
In some settings it makes sense for &amp;lt;code&amp;gt;commonLabels&amp;lt;/code&amp;gt; to be included in selectors, and in some settings it does not. Kustomize includes them by default, and there is no way to opt out. As a workaround, you can convert &amp;lt;code&amp;gt;matchLabels&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;matchExpressions&amp;lt;/code&amp;gt; and Kustomize won't touch them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchLabels:&lt;br /&gt;
        app: mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchExpressions:&lt;br /&gt;
        - key: app&lt;br /&gt;
          operator: In&lt;br /&gt;
          values:&lt;br /&gt;
            - mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and Kustomize will keep its hands off.&lt;br /&gt;
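&lt;br /&gt;
A hypothetical Python helper that rewrites &amp;lt;code&amp;gt;matchLabels&amp;lt;/code&amp;gt; into the equivalent &amp;lt;code&amp;gt;matchExpressions&amp;lt;/code&amp;gt; form (illustration of the workaround above, not part of kustomize):&lt;br /&gt;

```python
# Each matchLabels key: value pair becomes an "In" expression with a
# single-element values list, which is semantically equivalent.

def to_match_expressions(match_labels):
    return [{"key": k, "operator": "In", "values": [v]}
            for k, v in match_labels.items()]

print(to_match_expressions({"app": "mongodb-backup"}))
# prints [{'key': 'app', 'operator': 'In', 'values': ['mongodb-backup']}]
```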
&lt;br /&gt;
= Resources =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacement transform is deprecating Vars]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize Kustomize sig]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/guides/config_management/components/ v3.7.0+ Components]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization Glossary]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/ Kustomization File Fields]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/examples/kustomize.html Kustomize - examples] kubectl.docs.kubernetes.io&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/app_composition_and_deployment/structure_directories.html Kustomize structure_directories]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html reference] Good!&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md inlinePatch]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md kustomization of a helm chart]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configureBuiltinPlugin.md#using-the-commonlabels-and-commonannotations-fields Customize Kustomize] annotation and label buildin transformer&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7036</id>
		<title>Kubernetes/Kustomize</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7036"/>
		<updated>2024-08-06T22:30:14Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Embedded versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://kustomize.io/ Kustomize] =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/ kubectl+kustomize] SIG CLI&lt;br /&gt;
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.&lt;br /&gt;
&lt;br /&gt;
= Embedded versions =&lt;br /&gt;
;Flux deployment: runs kustomize-controller and helm-controller; use the following to find the versions of the embedded components:&lt;br /&gt;
&lt;br /&gt;
* kustomize-controller versions [https://github.com/fluxcd/kustomize-controller/blob/main/CHANGELOG.md#100 CHANGELOG]:&lt;br /&gt;
** 0.27.0 - Kustomize v4.5.7&lt;br /&gt;
** 1.0.0 - Kustomize v5.0.3 (introduced in 1.0.0-rc.4 release)&lt;br /&gt;
** 1.2.0 - Kustomize v5.3.0, SOPS v3.8.1&lt;br /&gt;
&lt;br /&gt;
* helm-controller versions [https://github.com/fluxcd/helm-controller/blob/main/CHANGELOG.md#0370 CHANGELOG]:&lt;br /&gt;
** v0.37.0 - helm v3.13.2, post-renderer kustomize v5.3.0&lt;br /&gt;
&lt;br /&gt;
= Install =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Detects your OS and downloads kustomize binary to cwd&lt;br /&gt;
curl -s &amp;quot;https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh&amp;quot;  | bash&lt;br /&gt;
&lt;br /&gt;
# Install on Linux - option2&lt;br /&gt;
VERSION=v4.1.2&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/kubernetes-sigs/kustomize/releases&amp;quot; | jq -r '.[].tag_name | select(. | contains(&amp;quot;kustomize&amp;quot;))' | sort | tail -1 | cut -d&amp;quot;/&amp;quot; -f2); echo $VERSION&lt;br /&gt;
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${VERSION}/kustomize_${VERSION}_linux_amd64.tar.gz -o kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
tar xzvf kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize_${VERSION}&lt;br /&gt;
&lt;br /&gt;
kustomize version --short&lt;br /&gt;
{kustomize/v4.1.2  2021-04-15T20:38:06Z  }&lt;br /&gt;
&lt;br /&gt;
kustomize version&lt;br /&gt;
v5.3.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Kustomize build workflow =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2052 kustomize vars] - use &amp;lt;code&amp;gt;envsubst&amp;lt;/code&amp;gt; instead&lt;br /&gt;
&amp;lt;source&amp;gt;$ kustomize build ~/target&amp;lt;/source&amp;gt;&lt;br /&gt;
# load universal k8s object descriptions&lt;br /&gt;
# read &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; from '''target'''&lt;br /&gt;
# kustomize '''bases''' (recurse 2-5)&lt;br /&gt;
# load and/or generate resources&lt;br /&gt;
# apply '''target's''' kustomization operations&lt;br /&gt;
# fix name references&lt;br /&gt;
# emit yaml&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; =&lt;br /&gt;
A build stage first &lt;br /&gt;
* processes resources, &lt;br /&gt;
* then it processes generators, adding to the resource list under consideration, &lt;br /&gt;
* then it processes transformers to modify the list, &lt;br /&gt;
* and finally runs validators to check the list for errors.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
resources:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what resources you want to customize &lt;br /&gt;
&lt;br /&gt;
# cross-cutting fields&lt;br /&gt;
namespace: custom&lt;br /&gt;
namePrefix: dev-&lt;br /&gt;
nameSuffix: &amp;quot;-svc&amp;quot;&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: web&lt;br /&gt;
commonAnnotations:&lt;br /&gt;
  value: app&lt;br /&gt;
&lt;br /&gt;
generators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what new resources should be created.&lt;br /&gt;
generatorOptions:&lt;br /&gt;
  disableNameSuffixHash: true&lt;br /&gt;
  labels:&lt;br /&gt;
    env: prod&lt;br /&gt;
  annotations:&lt;br /&gt;
    app: custom&lt;br /&gt;
&lt;br /&gt;
transformers:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what to transform in above mentioned resources&lt;br /&gt;
&lt;br /&gt;
validators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Patches ==&lt;br /&gt;
;patchStrategicMerge: Kubernetes supports a customized version of JSON merge patch called strategic merge patch. This patch format is used by &amp;lt;code&amp;gt;kubectl apply&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;kubectl edit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl patch&amp;lt;/code&amp;gt;, and contains specialized directives to control how specific fields are merged.&lt;br /&gt;
&lt;br /&gt;
= Example 101 =&lt;br /&gt;
{{Note|Bases have been deprecated in v2.1.0 [https://kubernetes-sigs.github.io/kustomize/blog/2019/06/18/v2.1.0/#resources-expanded-bases-deprecated resources-expanded-bases-deprecated]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [https://kustomize.io/tutorial Kustomize builder] note that it operates on the 1st yaml document&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Example 101 - environment type overrides&lt;br /&gt;
|- &lt;br /&gt;
! base/kustomization.yaml&lt;br /&gt;
! overlays/dev/kustomization.yaml&lt;br /&gt;
! overlays/prod/kustomization.yaml&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: sonarqube&lt;br /&gt;
resources:&lt;br /&gt;
- gateway.yaml&lt;br /&gt;
- virtual-service.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
.&lt;br /&gt;
├── base&lt;br /&gt;
│   ├── gateway.yaml&lt;br /&gt;
│   ├── kustomization.yaml&lt;br /&gt;
│   └── virtual-service.yaml&lt;br /&gt;
└── overlays # a more contextual name for this directory would be 'environments'&lt;br /&gt;
    ├── dev&lt;br /&gt;
    │   ├── gateway_patch.yaml&lt;br /&gt;
    │   ├── kustomization.yaml&lt;br /&gt;
    │   └── virtual-service_patch.yaml&lt;br /&gt;
    └── prod&lt;br /&gt;
        ├── gateway_patch.yaml&lt;br /&gt;
        ├── kustomization.yaml&lt;br /&gt;
        └── virtual-service_patch.yaml&lt;br /&gt;
&lt;br /&gt;
# Build kustomized output&lt;br /&gt;
kustomize version --short # -&amp;gt; {kustomize/v3.8.2  2020-08-29T17:44:01Z  }&lt;br /&gt;
kustomize build overlays/dev # apply patches&lt;br /&gt;
kustomize build base         # run common functions (as described in base/kustomization.yaml) against the whole code base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
What happens?&lt;br /&gt;
# &amp;lt;code&amp;gt;kustomize build overlays/dev&amp;lt;/code&amp;gt; finds &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt;, that describes:&lt;br /&gt;
## &amp;lt;code&amp;gt;patches: [gateway_patch.yaml, virtual-service_patch.yaml]&amp;lt;/code&amp;gt; to be applied over the base &amp;lt;code&amp;gt;resources: [../../base]&amp;lt;/code&amp;gt;. There are 3 types of patches to choose from: patches, patchesStrategicMerge and [https://skryvets.com/blog/2019/05/15/kubernetes-kustomize-json-patches-6902 patchesJson6902]&lt;br /&gt;
# &amp;lt;code&amp;gt;overlays/dev/kustomization.yaml&amp;lt;/code&amp;gt; cascades to the base (source of manifests to be changed) via directive &amp;lt;code&amp;gt;resources: [&amp;quot;../../base&amp;quot;]&amp;lt;/code&amp;gt;&lt;br /&gt;
# The base directory contains and runs its own &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
# The &amp;lt;code&amp;gt;base/kustomization.yaml&amp;lt;/code&amp;gt; contains common operations, eg. &amp;lt;code&amp;gt;commonLabels, namePrefix&amp;lt;/code&amp;gt;, to be applied to the whole code base.&lt;br /&gt;
# Then the patch file(s) are applied, eg. &amp;lt;code&amp;gt;gateway_patch.yaml&amp;lt;/code&amp;gt; contains enough information to identify a resource/object and apply the changes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So, what happens&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# Applying the patch: overlays/dev/gateway_patch.yaml &lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  name: sonarqube &lt;br /&gt;
spec:&lt;br /&gt;
  servers:&lt;br /&gt;
  - port:&lt;br /&gt;
      number: 443&lt;br /&gt;
      name: http&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
    hosts:&lt;br /&gt;
     - sonarqube-dev.acme.com # &amp;lt;- override&lt;br /&gt;
#   | &lt;br /&gt;
#   | over the base&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
# base/gateway.yaml&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube.acme.com&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
#   | &lt;br /&gt;
#   | results with&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
    owner: piotr # &amp;lt;- label added by base kustomization.yaml&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube-dev.acme.com # &amp;lt;- patch override&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check it yourself&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#         __unchanged manifest_    _base kustomization_    ___patch overlay____________&lt;br /&gt;
vimdiff &amp;lt;(cat base/gateway.yaml) &amp;lt;(kustomize build base) &amp;lt;(kustomize build overlays/dev)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200910-010734.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Cheatsheet =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md Helm charts - last mile]&lt;br /&gt;
&lt;br /&gt;
= Patch [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md multiple objects] =&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
  - path: patch.json&lt;br /&gt;
    target:&lt;br /&gt;
      kind: PersistentVolume&lt;br /&gt;
      version: v1&lt;br /&gt;
      group: &amp;quot;&amp;quot;&lt;br /&gt;
      name: volume-(data|master)-\d # regex match&lt;br /&gt;
      labelSelector: |&lt;br /&gt;
        app.kubernetes.io/component=storage,&lt;br /&gt;
        app.kubernetes.io/name=elasticsearch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Component example&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1alpha1&lt;br /&gt;
kind: Component&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
- target: &lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    version: v2beta1&lt;br /&gt;
    group: helm.toolkit.fluxcd.io&lt;br /&gt;
    name: external-dns-.+$ # [1]&lt;br /&gt;
  patch: |-&lt;br /&gt;
    apiVersion: helm.toolkit.fluxcd.io/v2beta1&lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    metadata:&lt;br /&gt;
      name: ALL            # [2]&lt;br /&gt;
      namespace: flux-system&lt;br /&gt;
    spec:&lt;br /&gt;
      values:&lt;br /&gt;
        tolerations:&lt;br /&gt;
          - key: &amp;quot;components.gke.io/gke-managed-components&amp;quot;&lt;br /&gt;
            operator: Exists&lt;br /&gt;
        affinity:&lt;br /&gt;
          nodeAffinity:&lt;br /&gt;
            preferredDuringSchedulingIgnoredDuringExecution:&lt;br /&gt;
              - weight: 100&lt;br /&gt;
                preference:&lt;br /&gt;
                  matchExpressions:&lt;br /&gt;
                  - key: &amp;quot;predictx/workload&amp;quot;&lt;br /&gt;
                    operator: In&lt;br /&gt;
                    values:&lt;br /&gt;
                    - &amp;quot;infra&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# [1] Regex match&lt;br /&gt;
# [2] The name is replaced by the name of each matched object; a placeholder value, e.g. 'ALL', is still required.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Delete an object from the base =&lt;br /&gt;
''Strategic Merge Patch'' supports directives such as replace, merge, and delete, whereas a plain patch can only modify existing fields.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patchesStrategicMerge:&lt;br /&gt;
- |-&lt;br /&gt;
  apiVersion: v1&lt;br /&gt;
  kind: Namespace&lt;br /&gt;
  metadata:&lt;br /&gt;
    name: unwanted-namespace&lt;br /&gt;
  $patch: delete&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= secretGenerator =&lt;br /&gt;
Secrets can be generated from environment variables. A template file lists the variables; each variable name becomes a key and its value becomes the key's value.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Environment variable secret template&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
GIT_USERNAME&lt;br /&gt;
GIT_PASSWORD&lt;br /&gt;
GIT_CREDENTIALS&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Kustomization&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
secretGenerator:&lt;br /&gt;
  - name: argocd-git-secret&lt;br /&gt;
    envs:&lt;br /&gt;
      - git.env&lt;br /&gt;
    options:&lt;br /&gt;
      disableNameSuffixHash: true&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Patch - add an item to a list =&lt;br /&gt;
* https://stackoverflow.com/questions/71622419/adding-items-to-a-list-with-kubectl-kustomize&lt;br /&gt;
* [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch Strategic merge patch docs]&lt;br /&gt;
&lt;br /&gt;
In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. To solve this problem, Strategic Merge Patch uses the go struct tag of the API objects to determine what lists should be merged and which ones should not. Read more at [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch strategic merge patch docs].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
patchesJson6902:&lt;br /&gt;
  - patch: |-&lt;br /&gt;
      - op: add&lt;br /&gt;
        path: /spec/valuesFrom/-&lt;br /&gt;
        value: # below map will be added as an item to the list, pay attention to `-` sign at the end of path&lt;br /&gt;
          kind: ConfigMap&lt;br /&gt;
          name: values-1-yaml&lt;br /&gt;
    target:&lt;br /&gt;
      group: helm.toolkit.fluxcd.io&lt;br /&gt;
      kind: HelmRelease&lt;br /&gt;
      name: kube-prometheus-stack&lt;br /&gt;
      version: v2beta1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacements] = &lt;br /&gt;
* https://stackoverflow.com/questions/71358674/kustomize-how-to-reference-a-value-from-a-configmap-in-another-resource-overlay&lt;br /&gt;
Use of vars is deprecated; please use replacements instead.&lt;br /&gt;
&lt;br /&gt;
= Known issues =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2034 commonLabels altering podSelector.matchLabels] and [https://github.com/kubernetes-sigs/kustomize/issues/157 Allow excluding some label selectors from commonLabels]&lt;br /&gt;
In some settings it makes sense for &amp;lt;code&amp;gt;commonLabels&amp;lt;/code&amp;gt; to be included in selectors, and in some settings it does not. Kustomize includes them by default, and there is no way to opt out. As a workaround, you can convert &amp;lt;code&amp;gt;matchLabels&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;matchExpressions&amp;lt;/code&amp;gt; and Kustomize won't touch them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchLabels:&lt;br /&gt;
        app: mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchExpressions:&lt;br /&gt;
        - key: app&lt;br /&gt;
          operator: In&lt;br /&gt;
          values:&lt;br /&gt;
            - mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and Kustomize will keep its hands off.&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacement transform is deprecating Vars]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize Kustomize sig]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/guides/config_management/components/ v3.7.0+ Components]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization Glossary]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/ Kustomization File Fields]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/examples/kustomize.html Kustomize - examples] kubectl.docs.kubernetes.io&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/app_composition_and_deployment/structure_directories.html Kustomize structure_directories]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html reference] Good!&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md inlinePatch]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md kustomization of a helm chart]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configureBuiltinPlugin.md#using-the-commonlabels-and-commonannotations-fields Customize Kustomize] annotation and label built-in transformers&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7035</id>
		<title>Kubernetes/Kustomize</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Kustomize&amp;diff=7035"/>
		<updated>2024-08-06T22:27:07Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Embedded versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= [https://kustomize.io/ Kustomize] =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/ kubectl+kustomize] SIG CLI&lt;br /&gt;
kustomize lets you customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is.&lt;br /&gt;
&lt;br /&gt;
= Embedded versions =&lt;br /&gt;
;Flux deployment: runs kustomize-controller and helm-controller; use the following methods to find out the versions of the embedded components:&lt;br /&gt;
&lt;br /&gt;
;kustomize-controller - [https://github.com/fluxcd/kustomize-controller/blob/main/CHANGELOG.md#100 CHANGELOG]&lt;br /&gt;
* 0.27.0 - Kustomize v4.5.7&lt;br /&gt;
* 1.0.0 - Kustomize v5.0.3 (introduced in the 1.0.0-rc.4 release)&lt;br /&gt;
* 1.2.0 - Kustomize v5.3.0, SOPS v3.8.1&lt;br /&gt;
&lt;br /&gt;
;helm-controller: [https://github.com/fluxcd/helm-controller/blob/main/CHANGELOG.md#0370 CHANGELOG]&lt;br /&gt;
* v0.37.0 - helm v3.13.2, post-renderer kustomize v5.3.0&lt;br /&gt;
&lt;br /&gt;
= Install =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Detects your OS and downloads kustomize binary to cwd&lt;br /&gt;
curl -s &amp;quot;https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh&amp;quot;  | bash&lt;br /&gt;
&lt;br /&gt;
# Install on Linux - option2&lt;br /&gt;
VERSION=v4.1.2&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/kubernetes-sigs/kustomize/releases&amp;quot; | jq -r '.[].tag_name | select(. | contains(&amp;quot;kustomize&amp;quot;))' | sort | tail -1 | cut -d&amp;quot;/&amp;quot; -f2); echo $VERSION&lt;br /&gt;
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${VERSION}/kustomize_${VERSION}_linux_amd64.tar.gz -o kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
tar xzvf kustomize_${VERSION}_linux_amd64.tar.gz&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize&lt;br /&gt;
sudo install ./kustomize /usr/local/bin/kustomize_${VERSION}&lt;br /&gt;
&lt;br /&gt;
kustomize version --short&lt;br /&gt;
{kustomize/v4.1.2  2021-04-15T20:38:06Z  }&lt;br /&gt;
&lt;br /&gt;
kustomize version&lt;br /&gt;
v5.3.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Kustomize build workflow =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2052 kustomize vars] - use &amp;lt;code&amp;gt;envsubst&amp;lt;/code&amp;gt; instead&lt;br /&gt;
&amp;lt;source&amp;gt;$ kustomize build ~/target&amp;lt;/source&amp;gt;&lt;br /&gt;
# load universal k8s object descriptions&lt;br /&gt;
# read &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; from '''target'''&lt;br /&gt;
# kustomize '''bases''' (recurse 2-5)&lt;br /&gt;
# load and/or generate resources&lt;br /&gt;
# apply '''target's''' kustomization operations&lt;br /&gt;
# fix name references&lt;br /&gt;
# emit yaml&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; =&lt;br /&gt;
A build stage first &lt;br /&gt;
* processes resources, &lt;br /&gt;
* then it processes generators, adding to the resource list under consideration, &lt;br /&gt;
* then it processes transformers to modify the list, &lt;br /&gt;
* and finally runs validators to check the list for errors.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
resources:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what resources you want to customize &lt;br /&gt;
&lt;br /&gt;
# cross-cutting fields&lt;br /&gt;
namespace: custom&lt;br /&gt;
namePrefix: dev-&lt;br /&gt;
nameSuffix: &amp;quot;-svc&amp;quot;&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: web&lt;br /&gt;
commonAnnotations:&lt;br /&gt;
  value: app&lt;br /&gt;
&lt;br /&gt;
generators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what new resources should be created.&lt;br /&gt;
generatorOptions:&lt;br /&gt;
  disableNameSuffixHash: true&lt;br /&gt;
  labels:&lt;br /&gt;
    env: prod&lt;br /&gt;
  annotations:&lt;br /&gt;
    app: custom&lt;br /&gt;
&lt;br /&gt;
transformers:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- what to transform in above mentioned resources&lt;br /&gt;
&lt;br /&gt;
validators:&lt;br /&gt;
- {pathOrUrl}&lt;br /&gt;
- ...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Patches ==&lt;br /&gt;
;patchStrategicMerge: Kubernetes supports a customized version of JSON merge patch called strategic merge patch. This patch format is used by &amp;lt;code&amp;gt;kubectl apply&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;kubectl edit&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl patch&amp;lt;/code&amp;gt;, and contains specialized directives to control how specific fields are merged.&lt;br /&gt;
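&lt;br /&gt;
As a minimal sketch (the Deployment and container names are illustrative, not from the examples above), a strategic merge patch that bumps a container image could look like:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# overlays/dev/deployment_patch.yaml (hypothetical)&lt;br /&gt;
apiVersion: apps/v1&lt;br /&gt;
kind: Deployment&lt;br /&gt;
metadata:&lt;br /&gt;
  name: web            # must match the base object&lt;br /&gt;
spec:&lt;br /&gt;
  template:&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
        - name: web    # containers merge by name, so only this entry is changed&lt;br /&gt;
          image: nginx:1.25&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Only the fields present in the patch are merged over the base; all other fields of the Deployment stay untouched.&lt;br /&gt;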
&lt;br /&gt;
= Example 101 =&lt;br /&gt;
{{Note|Bases have been deprecated in v2.1.0 [https://kubernetes-sigs.github.io/kustomize/blog/2019/06/18/v2.1.0/#resources-expanded-bases-deprecated resources-expanded-bases-deprecated]}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [https://kustomize.io/tutorial Kustomize builder] note that it operates only on the first YAML document&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Example 101 - environment type overrides&lt;br /&gt;
|- &lt;br /&gt;
! base/kustomization.yaml&lt;br /&gt;
! overlays/dev/kustomization.yaml&lt;br /&gt;
! overlays/prod/kustomization.yaml&lt;br /&gt;
|- style=&amp;quot;vertical-align:top;&amp;quot;&lt;br /&gt;
| &amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
commonLabels:&lt;br /&gt;
  app: sonarqube&lt;br /&gt;
resources:&lt;br /&gt;
- gateway.yaml&lt;br /&gt;
- virtual-service.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
| &amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
apiVersion: ...&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
patches:&lt;br /&gt;
- gateway_patch.yaml&lt;br /&gt;
- virtual-service_patch.yaml&lt;br /&gt;
resources:&lt;br /&gt;
- ../../base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
.&lt;br /&gt;
├── base&lt;br /&gt;
│   ├── gateway.yaml&lt;br /&gt;
│   ├── kustomization.yaml&lt;br /&gt;
│   └── virtual-service.yaml&lt;br /&gt;
└── overlays # this directory could more contextually be called 'environments'&lt;br /&gt;
    ├── dev&lt;br /&gt;
    │   ├── gateway_patch.yaml&lt;br /&gt;
    │   ├── kustomization.yaml&lt;br /&gt;
    │   └── virtual-service_patch.yaml&lt;br /&gt;
    └── prod&lt;br /&gt;
        ├── gateway_patch.yaml&lt;br /&gt;
        ├── kustomization.yaml&lt;br /&gt;
        └── virtual-service_patch.yaml&lt;br /&gt;
&lt;br /&gt;
# Build kustomized output&lt;br /&gt;
kustomize version --short # -&amp;gt; {kustomize/v3.8.2  2020-08-29T17:44:01Z  }&lt;br /&gt;
kustomize build overlays/dev # apply patches&lt;br /&gt;
kustomize build base         # run common functions (as described in base/kustomization.yaml) against the whole code base&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
What happens?&lt;br /&gt;
# &amp;lt;code&amp;gt;kustomize build overlays/dev&amp;lt;/code&amp;gt; finds &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt;, that describes:&lt;br /&gt;
## &amp;lt;code&amp;gt;patches: [gateway_patch.yaml, virtual-service_patch.yaml]&amp;lt;/code&amp;gt; to be used over the base &amp;lt;code&amp;gt;resources: [../../base]&amp;lt;/code&amp;gt;. There are three patch directives to choose from: patches, patchesStrategicMerge and [https://skryvets.com/blog/2019/05/15/kubernetes-kustomize-json-patches-6902 patchesJson6902]&lt;br /&gt;
# &amp;lt;code&amp;gt;overlays/dev/kustomization.yaml&amp;lt;/code&amp;gt; cascades to the base (source of manifests to be changed) via directive &amp;lt;code&amp;gt;resources: [&amp;quot;../../base&amp;quot;]&amp;lt;/code&amp;gt;&lt;br /&gt;
# The base directory contains and runs its own &amp;lt;code&amp;gt;kustomization.yaml&amp;lt;/code&amp;gt; file.&lt;br /&gt;
# The &amp;lt;code&amp;gt;base/kustomization.yaml&amp;lt;/code&amp;gt; contains common operations, eg. &amp;lt;code&amp;gt;commonLabels, namePrefix&amp;lt;/code&amp;gt; functions to be applied to whole code base.&lt;br /&gt;
# Then patch file(s) are applied eg. &amp;lt;code&amp;gt;gateway_patch.yaml&amp;lt;/code&amp;gt; contains enough information to identify a resource/object and apply changes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
So, what happens in practice:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Applying the patch: overlays/dev/gateway_patch.yaml &lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  name: sonarqube &lt;br /&gt;
spec:&lt;br /&gt;
  servers:&lt;br /&gt;
  - port:&lt;br /&gt;
      number: 443&lt;br /&gt;
      name: http&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
    hosts:&lt;br /&gt;
     - sonarqube-dev.acme.com # &amp;lt;- override&lt;br /&gt;
#   | &lt;br /&gt;
#   | over the base&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
# base/gateway.yaml&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube.acme.com&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
#   | &lt;br /&gt;
#   | results with&lt;br /&gt;
#   v &lt;br /&gt;
&lt;br /&gt;
apiVersion: networking.istio.io/v1beta1&lt;br /&gt;
kind: Gateway&lt;br /&gt;
metadata:&lt;br /&gt;
  labels:&lt;br /&gt;
    app: sonarqube&lt;br /&gt;
    owner: piotr # &amp;lt;- label added by base kustomization.yaml&lt;br /&gt;
  name: sonarqube&lt;br /&gt;
spec:&lt;br /&gt;
  selector:&lt;br /&gt;
    istio: ingressgateway&lt;br /&gt;
  servers:&lt;br /&gt;
  - hosts:&lt;br /&gt;
    - sonarqube-dev.acme.com # &amp;lt;- patch override&lt;br /&gt;
    port:&lt;br /&gt;
      name: http&lt;br /&gt;
      number: 443&lt;br /&gt;
      protocol: HTTP&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Check for yourself&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#         __unchanged manifest_    _base kustomization_    ___patch overlay____________&lt;br /&gt;
vimdiff &amp;lt;(cat base/gateway.yaml) &amp;lt;(kustomize build base) &amp;lt;(kustomize build overlays/dev)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200910-010734.PNG]]&lt;br /&gt;
&lt;br /&gt;
= Cheatsheet =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md Helm charts - last mile]&lt;br /&gt;
&lt;br /&gt;
= Patch [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md multiple objects] =&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
  - path: patch.json&lt;br /&gt;
    target:&lt;br /&gt;
      kind: PersistentVolume&lt;br /&gt;
      version: v1&lt;br /&gt;
      group: &amp;quot;&amp;quot;&lt;br /&gt;
      name: volume-(data|master)-\d # regex match&lt;br /&gt;
      labelSelector: |&lt;br /&gt;
        app.kubernetes.io/component=storage,&lt;br /&gt;
        app.kubernetes.io/name=elasticsearch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Component example&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1alpha1&lt;br /&gt;
kind: Component&lt;br /&gt;
&lt;br /&gt;
patches:&lt;br /&gt;
- target: &lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    version: v2beta1&lt;br /&gt;
    group: helm.toolkit.fluxcd.io&lt;br /&gt;
    name: external-dns-.+$ # [1]&lt;br /&gt;
  patch: |-&lt;br /&gt;
    apiVersion: helm.toolkit.fluxcd.io/v2beta1&lt;br /&gt;
    kind: HelmRelease&lt;br /&gt;
    metadata:&lt;br /&gt;
      name: ALL            # [2]&lt;br /&gt;
      namespace: flux-system&lt;br /&gt;
    spec:&lt;br /&gt;
      values:&lt;br /&gt;
        tolerations:&lt;br /&gt;
          - key: &amp;quot;components.gke.io/gke-managed-components&amp;quot;&lt;br /&gt;
            operator: Exists&lt;br /&gt;
        affinity:&lt;br /&gt;
          nodeAffinity:&lt;br /&gt;
            preferredDuringSchedulingIgnoredDuringExecution:&lt;br /&gt;
              - weight: 100&lt;br /&gt;
                preference:&lt;br /&gt;
                  matchExpressions:&lt;br /&gt;
                  - key: &amp;quot;predictx/workload&amp;quot;&lt;br /&gt;
                    operator: In&lt;br /&gt;
                    values:&lt;br /&gt;
                    - &amp;quot;infra&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# [1] Regex match&lt;br /&gt;
# [2] The name is replaced by the name of each matched object; a placeholder value, e.g. 'ALL', is still required.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Delete an object from the base =&lt;br /&gt;
''Strategic Merge Patch'' supports directives such as replace, merge, and delete, whereas a plain patch can only modify existing fields.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;yaml&amp;quot;&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
resources:&lt;br /&gt;
  - ../base&lt;br /&gt;
&lt;br /&gt;
patchesStrategicMerge:&lt;br /&gt;
- |-&lt;br /&gt;
  apiVersion: v1&lt;br /&gt;
  kind: Namespace&lt;br /&gt;
  metadata:&lt;br /&gt;
    name: unwanted-namespace&lt;br /&gt;
  $patch: delete&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= secretGenerator =&lt;br /&gt;
Secrets can be generated from environment variables. A template file lists the variables; each variable name becomes a key and its value becomes the key's value.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Environment variable secret template&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
GIT_USERNAME&lt;br /&gt;
GIT_PASSWORD&lt;br /&gt;
GIT_CREDENTIALS&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Kustomization&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: kustomize.config.k8s.io/v1beta1&lt;br /&gt;
kind: Kustomization&lt;br /&gt;
&lt;br /&gt;
secretGenerator:&lt;br /&gt;
  - name: argocd-git-secret&lt;br /&gt;
    envs:&lt;br /&gt;
      - git.env&lt;br /&gt;
    options:&lt;br /&gt;
      disableNameSuffixHash: true&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
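&lt;br /&gt;
Assuming &amp;lt;code&amp;gt;git.env&amp;lt;/code&amp;gt; contains e.g. &amp;lt;code&amp;gt;GIT_USERNAME=bot&amp;lt;/code&amp;gt; (an illustrative value), the generated Secret would look roughly like this, with values base64-encoded:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Secret&lt;br /&gt;
metadata:&lt;br /&gt;
  name: argocd-git-secret  # no hash suffix, because disableNameSuffixHash: true&lt;br /&gt;
type: Opaque&lt;br /&gt;
data:&lt;br /&gt;
  GIT_USERNAME: Ym90       # base64 of 'bot'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;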
&lt;br /&gt;
= Patch - add an item to a list =&lt;br /&gt;
* https://stackoverflow.com/questions/71622419/adding-items-to-a-list-with-kubectl-kustomize&lt;br /&gt;
* [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch Strategic merge patch docs]&lt;br /&gt;
&lt;br /&gt;
In the standard JSON merge patch, JSON objects are always merged but lists are always replaced. Often that isn't what we want. To solve this problem, Strategic Merge Patch uses the go struct tag of the API objects to determine what lists should be merged and which ones should not. Read more at [https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#strategic-merge-patch strategic merge patch docs].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
patchesJson6902:&lt;br /&gt;
  - patch: |-&lt;br /&gt;
      - op: add&lt;br /&gt;
        path: /spec/valuesFrom/-&lt;br /&gt;
        value: # below map will be added as an item to the list, pay attention to `-` sign at the end of path&lt;br /&gt;
          kind: ConfigMap&lt;br /&gt;
          name: values-1-yaml&lt;br /&gt;
    target:&lt;br /&gt;
      group: helm.toolkit.fluxcd.io&lt;br /&gt;
      kind: HelmRelease&lt;br /&gt;
      name: kube-prometheus-stack&lt;br /&gt;
      version: v2beta1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacements] = &lt;br /&gt;
* https://stackoverflow.com/questions/71358674/kustomize-how-to-reference-a-value-from-a-configmap-in-another-resource-overlay&lt;br /&gt;
Use of vars is deprecated; please use replacements instead.&lt;br /&gt;
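&lt;br /&gt;
A minimal sketch (the ConfigMap and Ingress names are illustrative) copying a value from a ConfigMap into another resource:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
replacements:&lt;br /&gt;
  - source:&lt;br /&gt;
      kind: ConfigMap&lt;br /&gt;
      name: cluster-config&lt;br /&gt;
      fieldPath: data.domain&lt;br /&gt;
    targets:&lt;br /&gt;
      - select:&lt;br /&gt;
          kind: Ingress&lt;br /&gt;
          name: web&lt;br /&gt;
        fieldPaths:&lt;br /&gt;
          - spec.rules.0.host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;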
&lt;br /&gt;
= Known issues =&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/issues/2034 commonLabels altering podSelector.matchLabels] and [https://github.com/kubernetes-sigs/kustomize/issues/157 Allow excluding some label selectors from commonLabels]&lt;br /&gt;
In some settings it makes sense for &amp;lt;code&amp;gt;commonLabels&amp;lt;/code&amp;gt; to be included in selectors, and in some settings it does not. Kustomize includes them by default, and there is no way to opt out. As a workaround, you can convert &amp;lt;code&amp;gt;matchLabels&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;matchExpressions&amp;lt;/code&amp;gt; and Kustomize won't touch them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchLabels:&lt;br /&gt;
        app: mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
  - podSelector:&lt;br /&gt;
      matchExpressions:&lt;br /&gt;
        - key: app&lt;br /&gt;
          operator: In&lt;br /&gt;
          values:&lt;br /&gt;
            - mongodb-backup&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and Kustomize will keep its hands off.&lt;br /&gt;
&lt;br /&gt;
= Resources =&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/ Replacement transform is deprecating Vars]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize Kustomize sig]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/guides/config_management/components/ v3.7.0+ Components]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md#kustomization Glossary]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/references/kustomize/ Kustomization File Fields]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/examples/kustomize.html Kustomize - examples] kubectl.docs.kubernetes.io&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/app_composition_and_deployment/structure_directories.html Kustomize structure_directories]&lt;br /&gt;
* [https://kubectl.docs.kubernetes.io/pages/reference/kustomize.html reference] Good!&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md inlinePatch]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md kustomization of a helm chart]&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configureBuiltinPlugin.md#using-the-commonlabels-and-commonannotations-fields Customize Kustomize] annotation and label built-in transformers&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7034</id>
		<title>Ubuntu Setup</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7034"/>
		<updated>2024-07-15T10:40:22Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Useful setups */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you are using Ubuntu for various Linux projects you will find that it comes preinstalled with many packages. On the other hand, installing just the minimal version seems too extreme. Therefore I started maintaining a list of unnecessary packages and a one-liner that removes them all. Please feel free to modify it for your needs.&lt;br /&gt;
&lt;br /&gt;
= Default partitioning =&lt;br /&gt;
On virtual systems, as well as e.g. on laptops, the schema below will be applied:&lt;br /&gt;
:[[File:ClipCapIt-200620-131502.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Eg. for 4G memory and 50G storage system&lt;br /&gt;
&lt;br /&gt;
/dev/mapper/ubuntu--vg-root        mount_point: /&lt;br /&gt;
/dev/mapper/ubuntu--vg-swap_1&lt;br /&gt;
/dev/sda&lt;br /&gt;
 /dev/sda1 (50G)&lt;br /&gt;
&lt;br /&gt;
LVM VG ubuntu-vg, LV root    as ext4&lt;br /&gt;
LVM VG ubuntu-vg, LV swap_1 as swap&lt;br /&gt;
&lt;br /&gt;
#Boot device:&lt;br /&gt;
/dev/mapper/ubuntu--vg-root&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As a handy practice you may create a thin-provisioned 100G virtual disk, then create two LVs for the root and swap partitions. Don't utilize all the space at once; extend the partitions when needed. This method eliminates adding new disks to VMs, saving time and effort.&lt;br /&gt;
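&lt;br /&gt;
A sketch of how the root LV could be extended later (assumes the same VG/LV names as in this setup, and that free extents exist):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Grow the PV first if the underlying disk grew, then extend the LV&lt;br /&gt;
sudo pvresize /dev/sda1&lt;br /&gt;
sudo lvextend -r -L +10G /dev/ubuntu-vg/root  # -r also resizes the ext4 filesystem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;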
&lt;br /&gt;
&lt;br /&gt;
Example LVM setup, here using a 30G Physical Volume (99.9% used), one Volume Group and two Logical Volumes (root and swap).&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo pvs&lt;br /&gt;
  PV         VG        Fmt  Attr PSize   PFree &lt;br /&gt;
  /dev/sda1  ubuntu-vg lvm2 a--  &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo vgs&lt;br /&gt;
  VG        #PV #LV #SN Attr   VSize   VFree &lt;br /&gt;
  ubuntu-vg   1   2   0 wz--n- &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo lvs&lt;br /&gt;
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert&lt;br /&gt;
  root   ubuntu-vg -wi-ao----  28.94g                                                    &lt;br /&gt;
  swap_1 ubuntu-vg -wi-ao---- 976.00m                                                    &lt;br /&gt;
piotr@u18:~$&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ lsblk /dev/sda --fs&lt;br /&gt;
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT&lt;br /&gt;
sda                                                                            &lt;br /&gt;
└─sda1                LVM2_member       rP18Kb-Q12j-wjVf-C1iV-uy42-BUJD-aWFuO7 &lt;br /&gt;
  ├─ubuntu--vg-root   ext4              fad04a3b-5fa3-4a03-bbd6-24a93cda1eb3   /&lt;br /&gt;
  └─ubuntu--vg-swap_1 swap              47cd084b-89b0-4cd5-bdb8-367238842ba1   [SWAP]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= List of unnecessary packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove libreoffice-* #Remove LibreOffice&lt;br /&gt;
sudo apt-get remove unity-lens-* #This package contains photos scopes which allow Unity to search for local and online photos.&lt;br /&gt;
sudo apt-get remove shotwell* #Photo organizer&lt;br /&gt;
sudo apt-get remove simple-scan #Scanner software&lt;br /&gt;
sudo apt-get remove empathy* #Internet messaging ~13M&lt;br /&gt;
sudo apt-get remove thunderbird* #Email client ~61M&lt;br /&gt;
sudo apt-get remove unity-scope-gdrive #Google Drive scope for Unity ~116KB&lt;br /&gt;
sudo apt-get remove cheese* #Cheese Webcam Booth - webcam software&lt;br /&gt;
sudo apt-get remove brasero* #Brasero Disc Burner ~6.5MB&lt;br /&gt;
sudo apt-get remove gnome-bluetooth #Package to manage Bluetooth devices using the GNOME desktop ~2MB&lt;br /&gt;
sudo apt-get remove gnome-orca #Orca Screen Reader - provides access to graphical desktop environments via synthesised speech and/or refreshable braille&lt;br /&gt;
sudo apt-get remove unity-webapps-common #Amazon Unity WebApp integration scripts ~133KB&lt;br /&gt;
sudo apt-get remove ibus-pinyin #IBus Bopomofo Preferences - ibus-pinyin is a IBus based IM engine for Chinese ~1.4MB&lt;br /&gt;
sudo apt-get remove printer-driver-foo2zjs* #Reactivate HP LaserJet 1018/1020 after reloading paper ~3.2MB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remove unnecessary packages - one liner =&lt;br /&gt;
;Ubuntu 12, 14, 16&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get remove libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* unity-scope-gdrive cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca unity-webapps-common ibus-pinyin printer-driver-foo2zjs*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 18. It's recommended to choose ''Minimal Install'', so most of the packages below won't get installed.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get purge libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca ibus-pinyin printer-driver-foo2zjs* xul-ext-ubufox speech-dispatcher* \&lt;br /&gt;
rhythmbox* printer-driver-* mythes-en-us mobile-broadband-provider-inf* \&lt;br /&gt;
evolution-data-server* espeak-ng-data:amd64 bluez* ubuntu-web-launchers \&lt;br /&gt;
transmission-*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get purge xul-ext-ubufox                           # Canonical FF customizations for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-mahjongg gnome-mines gnome-sudoku # games, works for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-video-effects gstreamer1.0-* &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; XTREME&lt;br /&gt;
Uninstall the Ubuntu software update notifier&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove update-notifier&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Uninstall locales - unused languages etc =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install localepurge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Set apt-get to not install recommended and suggested packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo bash -c 'cat &amp;gt; /etc/apt/apt.conf.d/01no-recommend &amp;lt;&amp;lt; EOF&lt;br /&gt;
APT::Install-Recommends &amp;quot;0&amp;quot;;&lt;br /&gt;
APT::Install-Suggests &amp;quot;0&amp;quot;;&lt;br /&gt;
EOF'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To verify that apt reads this configuration, run the following (as root or a regular user):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt-config dump | grep -e Recommends -e Suggests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Install necessary packages =&lt;br /&gt;
&lt;br /&gt;
Adobe Flash Player&lt;br /&gt;
 sudo apt-get install flashplugin-installer&lt;br /&gt;
&lt;br /&gt;
Java JRE&lt;br /&gt;
This will install the default Java version for your distro plus the IcedTea plugin for using Java in Firefox&lt;br /&gt;
 sudo apt-get install default-jre icedtea-plugin&lt;br /&gt;
&lt;br /&gt;
Unity Settings&lt;br /&gt;
 sudo apt-get install unity-control-center&lt;br /&gt;
&lt;br /&gt;
Opera&lt;br /&gt;
&lt;br /&gt;
Add Opera repository &amp;lt;code&amp;gt;'''deb &amp;lt;nowiki&amp;gt;http://deb.opera.com/opera/&amp;lt;/nowiki&amp;gt; stable non-free'''&amp;lt;/code&amp;gt; to the apt-get source list in &amp;lt;code&amp;gt;/etc/apt/sources.list&amp;lt;/code&amp;gt;. Then import a public PGP repository key.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;deb http://deb.opera.com/opera/ stable non-free&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -qO - http://deb.opera.com/archive.key | sudo apt-key add -&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install opera&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Silverlight&lt;br /&gt;
&lt;br /&gt;
Pipelight can be used to run Silverlight content, as the best alternative to Moonlight.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-add-repository ppa:ehoover/compholio&lt;br /&gt;
sudo apt-add-repository ppa:mqchael/pipelight&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install pipelight&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= GUI tools =&lt;br /&gt;
* [https://github.com/hluk/CopyQ/releases copyQ] clipboard manager&lt;br /&gt;
* VisualVM&lt;br /&gt;
&lt;br /&gt;
= Customise Ubuntu =&lt;br /&gt;
==Fix Ubuntu Unity Dash Search for Applications and Files==&lt;br /&gt;
 sudo apt-get install unity-lens-files unity-lens-applications #log out and log back in required&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;lt;17.10 missing Control Center==&lt;br /&gt;
 sudo apt-get install unity-control-center --no-install-recommends&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;gt;18.04 missing System Settings==&lt;br /&gt;
 sudo apt install gnome-control-center&lt;br /&gt;
&lt;br /&gt;
==Remove background wallpaper ==&lt;br /&gt;
Tested on Ubuntu 14,16,18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.background active true&lt;br /&gt;
gsettings set org.gnome.desktop.background draw-background false        #disable &lt;br /&gt;
gsettings set org.gnome.desktop.background primary-color &amp;quot;#000000&amp;quot;      #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background secondary-color &amp;quot;#000000&amp;quot;    #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background color-shading-type &amp;quot;solid&amp;quot;   #set solid colour&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///dev/null #remove wallpaper; not perfect but nothing else worked in U15.10&lt;br /&gt;
gsettings set com.canonical.unity-greeter draw-user-backgrounds false   #disable; did not work&lt;br /&gt;
&lt;br /&gt;
# Reset background picture to the original, U15.10&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///usr/share/backgrounds/warty-final-ubuntu.png &lt;br /&gt;
&lt;br /&gt;
# Sets Unity greeter background, &amp;lt;17.04&lt;br /&gt;
gsettings set com.canonical.unity-greeter background /usr/share/backgrounds/warty-final-ubuntu.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Disable screen lock out==&lt;br /&gt;
&amp;lt;code&amp;gt;dconf&amp;lt;/code&amp;gt; is a legacy tool for configuring &amp;lt;tt&amp;gt;gnome&amp;lt;/tt&amp;gt;; nowadays the more modern way is to use &amp;lt;code&amp;gt;gsettings&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/idle-activation-enabled false  #gnome&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/lock-enabled            false&lt;br /&gt;
&lt;br /&gt;
# Unity - Ubuntu 14.04, 16.04&lt;br /&gt;
gsettings set org.gnome.desktop.session     idle-delay   0      #disable the screen blackout (0 to disable)&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false  #disable the screen lock&lt;br /&gt;
&lt;br /&gt;
# VirtualBox &amp;gt; Ubuntu 18.04 Disabling Xserver screen timeouts&lt;br /&gt;
xset s off     # Xserver s parameter sets screensaver to off&lt;br /&gt;
xset s noblank # prevent the display from blanking &lt;br /&gt;
xset -dpms     # prevent the monitor's DPMS energy saver from kicking in&lt;br /&gt;
&lt;br /&gt;
# Gnome - Ubuntu 18.04 LTS, Settings &amp;gt; Power &amp;gt; Blank screen &amp;gt; set to: Never&lt;br /&gt;
gsettings get org.gnome.desktop.lockdown    disable-lock-screen      # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.lockdown    disable-lock-screen true # set disabled&lt;br /&gt;
gsettings get org.gnome.desktop.screensaver lock-enabled             # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false       # set disabled&lt;br /&gt;
dconf write  /org/gnome/desktop/screensaver/lock-enabled false       # set disabled using dconf&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false # some say it's last resort :)&lt;br /&gt;
&lt;br /&gt;
# Power management&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active true  #set gnome to be the default power management run&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false #turn off power management&lt;br /&gt;
&lt;br /&gt;
# last resort, as there was a bug in Ubuntu 11.10 with DPMS&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false&lt;br /&gt;
gsettings set org.gnome.desktop.session idle-delay 2400&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Verify by navigating in &amp;lt;tt&amp;gt;dconf-editor&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/org/gnome/desktop/screensaver/&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Change number of workspaces==&lt;br /&gt;
To get the current values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/hsize&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/vsize&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To set new values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/compiz/profiles/unity/plugins/core/hsize 2&lt;br /&gt;
# or&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ hsize 4&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ vsize 4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Clean up motd messages ==&lt;br /&gt;
At login Ubuntu displays a number of standard messages which take up terminal space, potentially losing the context of previous operations. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-134-generic x86_64)&lt;br /&gt;
&lt;br /&gt;
 * Documentation:  https://help.ubuntu.com&lt;br /&gt;
 * Management:     https://landscape.canonical.com&lt;br /&gt;
 * Support:        https://ubuntu.com/advantage&lt;br /&gt;
&lt;br /&gt;
  Get cloud support with Ubuntu Advantage Cloud Guest:&lt;br /&gt;
    http://www.ubuntu.com/business/services/cloud&lt;br /&gt;
&lt;br /&gt;
1 package can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
New release '18.04.1 LTS' available.&lt;br /&gt;
Run 'do-release-upgrade' to upgrade to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Fri Aug 31 12:11:28 2018 from 10.0.2.2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is managed by files in &amp;lt;code&amp;gt;/etc/update-motd.d/&amp;lt;/code&amp;gt;, so deleting them removes the clutter from the screen&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /etc/update-motd.d/&lt;br /&gt;
00-header             51-cloudguest         91-release-upgrade    98-fsck-at-reboot     &lt;br /&gt;
10-help-text          90-updates-available  97-overlayroot        98-reboot-required &lt;br /&gt;
&lt;br /&gt;
# Ubuntu Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1022-azure x86_64)&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
sudo rm /etc/update-motd.d/{10-help-text,50-landscape-sysinfo,50-motd-news,51-cloudguest,80-livepatch,95-hwe-eol}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This cuts it down to the message below, Ubuntu 18.04 in AWS&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
&lt;br /&gt;
0 packages can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Thu Jan 31 17:09:38 2019 from 10.10.11.11&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Useful setups =&lt;br /&gt;
== Image converter ==&lt;br /&gt;
nautilus-image-converter is a nautilus extension to mass resize or rotate images. It adds two context menu items in nautilus so you can right-click and choose &amp;quot;Resize Image&amp;quot; or &amp;quot;Rotate Image&amp;quot;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 24.04 with Gnome&lt;br /&gt;
sudo apt-get install nautilus-image-converter&lt;br /&gt;
&lt;br /&gt;
# Restart to see the new context menu&lt;br /&gt;
nautilus -q&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Call screen saver from a terminal to blank all screens ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 18.04 with Gnome&lt;br /&gt;
sudo apt-get install gnome-screensaver&lt;br /&gt;
gnome-screensaver-command -a #controls GNOME screensaver, -a activate (blank the screen)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create application launcher ==&lt;br /&gt;
;Ubuntu 18.04&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the GNOME-panel toolset&lt;br /&gt;
sudo apt-get install --no-install-recommends gnome-panel&lt;br /&gt;
&lt;br /&gt;
# Every user launcher&lt;br /&gt;
sudo gnome-desktop-item-edit /usr/share/applications/VisualVM.desktop --create-new&lt;br /&gt;
&lt;br /&gt;
# Local user only, the filename by default is a Name-of-appication.desktop&lt;br /&gt;
gnome-desktop-item-edit ~/.local/share/applications --create-new &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190807-080016.PNG]]&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 19.10, 20.04&lt;br /&gt;
In the above releases &amp;lt;code&amp;gt;gnome-desktop-item-edit&amp;lt;/code&amp;gt; has been removed from the &amp;lt;code&amp;gt;gnome-panel&amp;lt;/code&amp;gt; package; as an alternative, &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files can be created manually.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /usr/share/applications/APPNAME.desktop&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=&amp;lt;NAME OF THE APPLICATION&amp;gt;&lt;br /&gt;
Comment=&amp;lt;A SHORT DESCRIPTION&amp;gt;&lt;br /&gt;
Exec=&amp;lt;COMMAND-OR-FULL-PATH-TO-LAUNCH-THE-APPLICATION&amp;gt;&lt;br /&gt;
Type=Application&lt;br /&gt;
Terminal=false&lt;br /&gt;
Icon=&amp;lt;ICON NAME OR PATH TO ICON&amp;gt;&lt;br /&gt;
NoDisplay=false&lt;br /&gt;
Keywords=&amp;lt;eg. sql&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, you may need to right-click the launcher and select 'Allow Launching', in addition to setting executable permissions. Usual locations of &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/share/applications/&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/var/lib/snapd/desktop/applications/&amp;lt;/code&amp;gt; for snap applications&lt;br /&gt;
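Putting the steps above together, a minimal sketch that writes a per-user launcher and marks it executable; the application name, Exec path and icon below are placeholders, not a real package:&lt;br /&gt;

```shell
# Create a per-user launcher for a hypothetical application 'MyApp'
APPDIR="$HOME/.local/share/applications"
mkdir -p "$APPDIR"
printf '%s\n' \
  '[Desktop Entry]' \
  'Name=MyApp' \
  'Comment=Example launcher (placeholder values)' \
  'Exec=/opt/myapp/run' \
  'Type=Application' \
  'Terminal=false' \
  'Icon=utilities-terminal' \
  'NoDisplay=false' > "$APPDIR/myapp.desktop"
# Some releases additionally want the file executable ('Allow Launching')
chmod +x "$APPDIR/myapp.desktop"
```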
&lt;br /&gt;
== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet gnome-shell-system-monitor-applet] - cpu, memory indicators ==&lt;br /&gt;
System information such as memory usage, cpu usage, network rates and more can be displayed in the notification area in GNOME Shell.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System-monitor extensions:&lt;br /&gt;
[https://extensions.gnome.org/extension/120/system-monitor/ system-monitor] by paradoxxxzero on [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet github] supports Gnome-shell up to v40. It appears to be an abandoned project.&lt;br /&gt;
[https://extensions.gnome.org/extension/3010/system-monitor-next/ system-monitor-next] by mgalgs on [https://github.com/mgalgs/gnome-shell-system-monitor-applet github] supports Gnome-shell v40+, it's a fork of the above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All extensions:&lt;br /&gt;
* https://extensions.gnome.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|The current version of the browser Firefox is packaged as a snap version. One of the issues with this is that it cannot work with the Gnome Extensions website.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu 24.04 (June 2024)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ubuntu 20/22/24&lt;br /&gt;
gnome-shell --version                                 # GNOME Shell 46.0 as of Ubuntu 24.04&lt;br /&gt;
sudo apt install gnome-shell-extensions               # Ubuntu 20.04 already has this package, 24.04 needs installing it&lt;br /&gt;
sudo apt install gnome-shell-extension-manager        # Ubuntu 22|24.04 (as Firefox is installed as snap) on 24.04 it's v0.5.0&lt;br /&gt;
&lt;br /&gt;
# Open the `Extensions` app, turn on &amp;quot;Use Extensions&amp;quot;.   # Already turned on in Ubuntu 24.04&lt;br /&gt;
# Open Browse tab &amp;gt; search for 'system-monitor-next'  # cpu/mem/net indicators will appear in the system tray&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Additional steps for Ubuntu &amp;lt; 24.04&lt;br /&gt;
sudo apt install gnome-tweaks                         # GUI to manage gnome-extensions&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
sudo apt install gnome-shell-extension-system-monitor # requires logging out afterwards&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Download the extension from&lt;br /&gt;
## https://extensions.gnome.org/extension/120/system-monitor/&lt;br /&gt;
&lt;br /&gt;
# Never worked out how to use this direct download and install via 'gnome-extensions install &amp;lt;extension_name&amp;gt;'&lt;br /&gt;
## wget https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/archive/v38.zip&lt;br /&gt;
## gnome-extensions install &amp;lt;system-monitor@paradoxxx.zero.gmail.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Enable extension using cli&lt;br /&gt;
gnome-extensions enable system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
gnome-extensions list --user&lt;br /&gt;
clipboard-indicator@tudmotu.com&lt;br /&gt;
system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-210105-084527.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/issues/737#issuecomment-1230654455 Ubuntu 22.04 workaround for the OUTDATED extension] ===&lt;br /&gt;
{{Note|Workaround still needed in August 2022}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
git clone https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet.git&lt;br /&gt;
cd gnome-shell-system-monitor-applet # commit b359d88 verified&lt;br /&gt;
vi system-monitor@paradoxxx.zero.gmail.com/metadata.json &lt;br /&gt;
# | change &amp;quot;version&amp;quot;: -1 to &amp;quot;version&amp;quot;: 42&lt;br /&gt;
make install&lt;br /&gt;
# log out and back in (required)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Snapd - Chromium =&lt;br /&gt;
Since Ubuntu 19.x Chromium is installed as a snap package. This is a confined installation that has access to only certain directories. When working with AWS we may need access to the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder to retrieve an ec2 machine password. This folder is denied, but we can bind mount the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder into the snap container directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ snap list chromium &lt;br /&gt;
Name      Version        Rev   Tracking       Publisher   Notes&lt;br /&gt;
chromium  86.0.4240.111  1373  latest/stable  canonical✓  -&lt;br /&gt;
&lt;br /&gt;
# cd to chromium $HOME dir&lt;br /&gt;
mkdir ~/snap/chromium/current/.ssh&lt;br /&gt;
sudo mount --bind ~/.ssh/ ~/snap/chromium/current/.ssh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
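A bind mount created this way does not survive a reboot. One way to make it persistent - an untested sketch; replace &amp;lt;code&amp;gt;/home/USER&amp;lt;/code&amp;gt; with your actual home directory - is an &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt; entry:&lt;br /&gt;

```shell
# Persist the bind mount across reboots (assumption: adjust /home/USER)
echo '/home/USER/.ssh /home/USER/snap/chromium/current/.ssh none bind 0 0' | sudo tee -a /etc/fstab
sudo mount -a   # apply without rebooting
```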
&lt;br /&gt;
= Screen shooting =&lt;br /&gt;
In Ubuntu 20.04 Shutter is not part of the default repositories. It can be added via a PPA:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo add-apt-repository -y ppa:linuxuprising/shutter&lt;br /&gt;
sudo apt-get install shutter&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Audio - [https://rastating.github.io/setting-default-audio-device-in-ubuntu-18-04/ set defaults] =&lt;br /&gt;
To preserve settings using a GUI you can install [https://freedesktop.org/software/pulseaudio/pavucontrol/ PulseAudio Volume Control] &amp;lt;code&amp;gt;pavucontrol&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# install&lt;br /&gt;
sudo apt install pavucontrol # Ubuntu 20.04&lt;br /&gt;
# run&lt;br /&gt;
pavucontrol&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default output/input device. In Ubuntu PulseAudio is used to control audio devices. It uses the following configuration files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/etc/pulse/default.pa # system wide&lt;br /&gt;
~/.config/pulse       # user configuration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set defaults&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List devices: modules, sinks, sources, sink-inputs, source-outputs, clients, samples, cards&lt;br /&gt;
# sinks - outputs, sink-inputs, sources - all input/output including RUNNING and SUSPENDED devices&lt;br /&gt;
$ pactl list short sources | column -t&lt;br /&gt;
5   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_5__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
6   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_4__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
7   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_3__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
8   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__sink.monitor    module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
9   alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__source           module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
10  alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_6__source         module-alsa-card.c  s16le  4ch  48000Hz  SUSPENDED&lt;br /&gt;
15  alsa_output.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.analog-stereo.monitor     module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  SUSPENDED&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  SUSPENDED&lt;br /&gt;
20  alsa_input.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.iec958-stereo              module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
&lt;br /&gt;
# Set default output device. Tab autocompletion should work (U20.04)&lt;br /&gt;
pactl set-default-sink alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
# Set default input device&lt;br /&gt;
pactl set-default-source alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Test, play some audio then run. IDLE - means in use&lt;br /&gt;
pactl list short sources | column -t | grep -e RUNNING -e IDLE&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  IDLE&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  RUNNING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make it permanent by setting the default devices in the PulseAudio system configuration file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Output device&lt;br /&gt;
OUTPUT_DEVICE=alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-sink\) output/\1 ${OUTPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa # remove '-i' to test before apply&lt;br /&gt;
# Input device&lt;br /&gt;
INPUT_DEVICE=alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-source\) input/\1 ${INPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa&lt;br /&gt;
&lt;br /&gt;
vi /etc/pulse/default.pa # make sure lines below are in place&lt;br /&gt;
### Make some devices default&lt;br /&gt;
set-default-sink   alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
set-default-source  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Delete local user profile and restart system, after boot new defaults should be set&lt;br /&gt;
rm -r ~/.config/pulse&lt;br /&gt;
&lt;br /&gt;
# After reboot, defaults should be set&lt;br /&gt;
cat ~/.config/pulse/*default*&lt;br /&gt;
alsa_output.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
PulseAudio cli&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pacmd&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; help # lists all available commands&lt;br /&gt;
&lt;br /&gt;
pulseaudio --check # Check if any pulseaudio instance is running. It normally prints no output, just exit code. 0 means running&lt;br /&gt;
pulseaudio --kill  # kill, then --start&lt;br /&gt;
pulseaudio -D      # start pulseaudio as a daemon&lt;br /&gt;
# | using /etc/pulse/daemon.conf&lt;br /&gt;
&lt;br /&gt;
# Pulseaudio is a user service&lt;br /&gt;
systemctl --user restart pulseaudio.service&lt;br /&gt;
systemctl --user restart pulseaudio.socket&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have a Dell D-6000 port replicator that randomly disconnects, causing audio to switch to the newly connected device - that is, itself. As a workaround, commenting out the lines below stops this behaviour.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /etc/pulse/default.pa&lt;br /&gt;
### Use hot-plugged devices like Bluetooth or USB automatically (LP: #1702794)&lt;br /&gt;
# .ifexists module-switch-on-connect.so&lt;br /&gt;
# load-module module-switch-on-connect&lt;br /&gt;
# .endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
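The same edit can be scripted with sed; a sketch that comments out the whole stanza, from its &amp;lt;code&amp;gt;.ifexists&amp;lt;/code&amp;gt; line to the next &amp;lt;code&amp;gt;.endif&amp;lt;/code&amp;gt;, shown against a scratch copy (point it at &amp;lt;code&amp;gt;/etc/pulse/default.pa&amp;lt;/code&amp;gt; with sudo to apply for real):&lt;br /&gt;

```shell
# Comment out the module-switch-on-connect stanza in a pulse config file.
# The sed address range runs from the .ifexists line to the next .endif.
f=$(mktemp)
printf '%s\n' \
  '.ifexists module-switch-on-connect.so' \
  'load-module module-switch-on-connect' \
  '.endif' > "$f"
sed -i '/^\.ifexists module-switch-on-connect\.so/,/^\.endif/ s|^|# |' "$f"
cat "$f"   # every line of the stanza is now prefixed with '# '
```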
&lt;br /&gt;
= Input devices =&lt;br /&gt;
The motivation here is to enable horizontal scrolling in Ubuntu 20.04 using a Perixx Gaming Mouse Mx2000&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
xinput list&lt;br /&gt;
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]&lt;br /&gt;
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ Holtek USB Gaming Mouse                 	id=11	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Mouse             	id=14	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Touchpad          	id=15	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ TPPS/2 Elan TrackPoint                  	id=19	[slave  pointer  (2)]&lt;br /&gt;
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]&lt;br /&gt;
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Power Button                            	id=6	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]&lt;br /&gt;
    ↳ CHICONY HP Basic USB Keyboard           	id=9	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=10	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated C         	id=12	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated I         	id=13	[slave  keyboard (3)]&lt;br /&gt;
    ↳ sof-hda-dsp Headset Jack                	id=16	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Intel HID events                        	id=17	[slave  keyboard (3)]&lt;br /&gt;
    ↳ AT Translated Set 2 keyboard            	id=18	[slave  keyboard (3)]&lt;br /&gt;
    ↳ ThinkPad Extra Buttons                  	id=20	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=21	[slave  keyboard (3)]&lt;br /&gt;
&lt;br /&gt;
# test mouse aka Virtual core pointer&lt;br /&gt;
xinput test 11&lt;br /&gt;
motion a[0]=2023  # &amp;lt;- cursor moving&lt;br /&gt;
motion a[0]=2024 a[1]=1411 &lt;br /&gt;
motion a[3]=19545 # &amp;lt;- scroll down &lt;br /&gt;
button press   5 &lt;br /&gt;
button release 5 &lt;br /&gt;
&lt;br /&gt;
# test 'virtual core keyboard' aka additional programmable buttons&lt;br /&gt;
## '10' - this virtual keyboard for all buttons except the scrolling wheel&lt;br /&gt;
xinput test 10&lt;br /&gt;
key press   37&lt;br /&gt;
key press   38&lt;br /&gt;
&lt;br /&gt;
## '21' - this is scrolling wheel buttons left/right, not scrolling itself&lt;br /&gt;
xinput test 21&lt;br /&gt;
key press   248 &lt;br /&gt;
key release 248 &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List the properties of a device. We want to see 'horizontal scrolling wheel buttons'&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ xinput list-props  21&lt;br /&gt;
Device 'Holtek USB Gaming Mouse':&lt;br /&gt;
	Device Enabled (169):	1&lt;br /&gt;
	Coordinate Transformation Matrix (171):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000&lt;br /&gt;
	libinput Send Events Modes Available (291):	1, 0&lt;br /&gt;
	libinput Send Events Mode Enabled (292):	0, 0&lt;br /&gt;
	libinput Send Events Mode Enabled Default (293):	0, 0&lt;br /&gt;
	Device Node (294):	&amp;quot;/dev/input/event10&amp;quot;&lt;br /&gt;
	Device Product ID (295):	1241, 41063&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
[[Category:linux]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7033</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7033"/>
		<updated>2024-07-02T22:17:12Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install kubectl */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install kubectl ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.30.2&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.26.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should not be more than +/- 1 minor version away from the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
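The skew rule above can be checked mechanically. A small pure-shell sketch - the function name is ad hoc - that compares the minor components of two version strings as printed by &amp;lt;code&amp;gt;kubectl version --short&amp;lt;/code&amp;gt;:&lt;br /&gt;

```shell
# Sketch: succeed if two versions (e.g. client and server) are within
# one minor release of each other, per the kubectl version-skew policy.
version_skew_ok() {
  c=$(printf '%s' "$1" | cut -d. -f2)   # client minor version
  s=$(printf '%s' "$2" | cut -d. -f2)   # server minor version
  d=$((c - s))
  if [ "$d" -lt 0 ]; then d=$((0 - d)); fi
  [ "$d" -le 1 ]
}
version_skew_ok v1.26.14 v1.24.10 || echo 'kubectl is more than one minor version away from the api-server'
```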
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
kubectl plugin called [https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke gke-gcloud-auth-plugin]&lt;br /&gt;
* [https://cloud.google.com/sdk/docs/install#deb Install Google Cloud SDK]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install apt-transport-https ca-certificates gnupg curl&lt;br /&gt;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list&lt;br /&gt;
sudo apt-get update &lt;br /&gt;
sudo apt-get install google-cloud-cli # required to authenticate with GCP&lt;br /&gt;
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin&lt;br /&gt;
gcloud init&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
Minimum yq version required is v2.x, tested with yq 2.13.0. The example below updates the file in place (`-i`).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
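For reference, a minimal sketch of what the first cluster entry looks like after the update; the cluster name, server address, and certificate field below are hypothetical, only the &lt;code&gt;proxy-url&lt;/code&gt; key comes from the command above:

```yaml
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://k8s.acme.com:6443
    proxy-url: http://proxy.acme.com:8080
  name: acme-cluster
```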
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resource order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
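The multi-context eval one-liners above rely on bash brace expansion: the shell expands the braces into the cross product of contexts and namespaces, producing one command per pair, and eval then runs the resulting `;`-separated string. A sketch using echo so it runs without a cluster:

```shell
# Brace expansion builds the cross product of contexts and namespaces;
# echo shows the commands eval would execute.
cmds=$(echo 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;')
echo "$cmds"
# -> kubectl --context=context1 --namespace=ns1 get pod; kubectl --context=context1 --namespace=ns2 get pod; kubectl --context=context2 --namespace=ns1 get pod; kubectl --context=context2 --namespace=ns2 get pod;
```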
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, almost a clean manifest. The '--export' flag was deprecated in v1.14 and removed in v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c sleep&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod name may be prefixed with a namespace, and the destination file path (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
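The `/dev/tcp` test above can be wrapped in a small helper for repeated checks; the `check_port` function name and argument layout are assumptions, not part of the original snippet:

```shell
# check_port HOST PORT [TIMEOUT_SECONDS] - probe a TCP port using bash's
# /dev/tcp pseudo-device, no ping/nc/telnet required.
check_port() {
  local host=$1 port=$2 t=${3:-1}
  timeout "$t" bash -c "</dev/tcp/$host/$port" 2>/dev/null \
    && echo "PORT OPEN" || echo "PORT CLOSED"
}

check_port 127.0.0.1 22
```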
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML manifests&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If you ever miss a command, you can use a Docker container image that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl -- http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check the kubelet-metrics insecure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
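To make the PATH update above survive new shells, the line can be appended to a shell rc file idempotently; the `add_path_once` helper name is an assumption for illustration:

```shell
# Append the krew PATH line to an rc file only if it is not already there,
# so repeated runs do not duplicate it.
add_path_once() {
  local rc=$1
  local line='[ -d ${HOME}/.krew/bin ] && export PATH="${PATH}:${HOME}/.krew/bin"'
  touch "$rc"
  grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
}

add_path_once ~/.bashrc
```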
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# SSH to all nodes; example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading. It uses the swagger.json from the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A pre-upgrade report; the script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
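A sketch of the report filename the script builds, with hypothetical values substituted for the kubectl lookups so the naming pattern is visible without a cluster:

```shell
# Hypothetical stand-ins for the values the script reads from kubectl
PREFIX=test
K8Sid=11111111112222222222333333333344    # cluster id from 'kubectl cluster-info'
CURRENT_K8S_VER=v1.15.11-eks-af3caf       # from 'kubectl version --short'
TARGET_K8S_VER=v1.16.8

FILE="$PREFIX-$K8Sid-$(date +%Y%m%d-%H%M)-from-$CURRENT_K8S_VER-to-$TARGET_K8S_VER.yaml"
echo "$FILE"
```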
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above will open Wireshark. Interesting articles to follow:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140], the new community maintain repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt;  }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using it on clusters with hundreds of containers can put significant load on the kube-apiserver.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It reuses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # the pod-query is a regex, so 'busybox' also matches busybox1, busybox2, etc.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A standalone Kubernetes client; this is not a dashboard that needs to be installed on a cluster. Similar to KUI but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z &amp;quot;$OPENLENS_VERSION&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenient install script into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
sudo ln -sf /opt/Kui-linux-x64/Kui /usr/local/bin/kui&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script does not work&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
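&lt;br /&gt;
As an illustrative sketch (the name, image, and step values below are placeholders, not from this wiki), a minimal canary Rollout replaces a Deployment like this:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: argoproj.io/v1alpha1&lt;br /&gt;
kind: Rollout&lt;br /&gt;
metadata:&lt;br /&gt;
  name: demo&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 4&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels: {app: demo}&lt;br /&gt;
  template:            # same pod template as a Deployment&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels: {app: demo}&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: demo&lt;br /&gt;
        image: nginx:1.25&lt;br /&gt;
  strategy:&lt;br /&gt;
    canary:            # shift traffic in steps instead of all at once&lt;br /&gt;
      steps:&lt;br /&gt;
      - setWeight: 20&lt;br /&gt;
      - pause: {duration: 10m}&lt;br /&gt;
      - setWeight: 50&lt;br /&gt;
      - pause: {}      # wait for manual promotion&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;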
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and requires nothing to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # with 1.19 (the latest) the install completes successfully but the binary is not created&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts allow you to troubleshoot and check the health status of the cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
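&lt;br /&gt;
Several of these checks reduce to sorting and counting &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt; output. As a sketch, the &amp;quot;Pods per Nodes&amp;quot; check only assumes lines of &amp;lt;code&amp;gt;pod node&amp;lt;/code&amp;gt; pairs on stdin:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Count pods per node; reads &amp;quot;pod node&amp;quot; pairs on stdin&lt;br /&gt;
count_pods_per_node() {&lt;br /&gt;
    awk '{print $2}' | sort | uniq -c | sort -rn&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Feed it live data (requires a cluster):&lt;br /&gt;
# kubectl get pods -A --no-headers -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName | count_pods_per_node&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;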
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
= [https://github.com/weaveworks/kubediff kubediff] - diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS: Secrets OPerationS, sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
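&lt;br /&gt;
sops decides how to encrypt based on a &amp;lt;code&amp;gt;.sops.yaml&amp;lt;/code&amp;gt; file in the repository root; a minimal sketch (the fingerprint and path pattern are placeholders):&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# .sops.yaml - placeholder values, adjust to your keys&lt;br /&gt;
creation_rules:&lt;br /&gt;
  - path_regex: .*secrets.*\.yaml$&lt;br /&gt;
    pgp: 'FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
With that in place, &amp;lt;code&amp;gt;sops -e -i secrets.yaml&amp;lt;/code&amp;gt; encrypts the file in place and &amp;lt;code&amp;gt;sops -d secrets.yaml&amp;lt;/code&amp;gt; decrypts it.&lt;br /&gt;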
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a conversion tool that takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Allows you to view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and reports matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
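&lt;br /&gt;
As an illustrative sketch (placeholder names, not from this wiki), a Canary resource tells Flagger which workload to watch and how fast to shift traffic:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: flagger.app/v1beta1&lt;br /&gt;
kind: Canary&lt;br /&gt;
metadata:&lt;br /&gt;
  name: demo&lt;br /&gt;
spec:&lt;br /&gt;
  targetRef:               # the Deployment to control&lt;br /&gt;
    apiVersion: apps/v1&lt;br /&gt;
    kind: Deployment&lt;br /&gt;
    name: demo&lt;br /&gt;
  service:&lt;br /&gt;
    port: 80&lt;br /&gt;
  analysis:&lt;br /&gt;
    interval: 1m           # how often metrics are checked&lt;br /&gt;
    threshold: 5           # failed checks before rollback&lt;br /&gt;
    maxWeight: 50&lt;br /&gt;
    stepWeight: 10         # traffic shifted per step&lt;br /&gt;
    metrics:&lt;br /&gt;
    - name: request-success-rate&lt;br /&gt;
      thresholdRange:&lt;br /&gt;
        min: 99&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;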
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring ns alongside the already existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application was crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run one instance of the command per CPU core&lt;br /&gt;
nproc # count processors (equivalent: grep -c processor /proc/cpuinfo)&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
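&lt;br /&gt;
The two commands above can be combined into a sketch that starts one busy loop per core and stops them all afterwards:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Start one busy loop per CPU core&lt;br /&gt;
for _ in $(seq &amp;quot;$(nproc)&amp;quot;); do yes &amp;gt; /dev/null &amp;amp; done&lt;br /&gt;
jobs -p          # list the background PIDs&lt;br /&gt;
kill $(jobs -p)  # stop the load&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;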
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b manages different ver kubectl] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7032</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7032"/>
		<updated>2024-07-02T22:16:44Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install kubectl */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install kubectl ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.30.2&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.26.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/- 1 minor version of the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
GKE authentication requires a kubectl plugin called [https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke gke-gcloud-auth-plugin]&lt;br /&gt;
* [https://cloud.google.com/sdk/docs/install#deb Install Google Cloud SDK]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install apt-transport-https ca-certificates gnupg curl&lt;br /&gt;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list&lt;br /&gt;
sudo apt-get update &lt;br /&gt;
sudo apt-get install google-cloud-cli # required to authenticate with GCP&lt;br /&gt;
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
The minimum yq version required is v2.x; tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
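&lt;br /&gt;
The resulting cluster entry then looks like this (the server address and proxy are placeholders):&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
clusters:&lt;br /&gt;
- cluster:&lt;br /&gt;
    certificate-authority-data: ...&lt;br /&gt;
    server: https://10.0.0.1:6443&lt;br /&gt;
    proxy-url: http://proxy.acme.com:8080&lt;br /&gt;
  name: kubernetes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;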
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
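&lt;br /&gt;
The &amp;lt;code&amp;gt;eval&amp;lt;/code&amp;gt; multi-context trick above relies on bash brace expansion generating the cross product of contexts and namespaces; a sketch of what actually gets executed:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Brace expansion produces one command per context/namespace pair&lt;br /&gt;
echo 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
# kubectl --context=context1 --namespace=ns1 get pod; kubectl --context=context1 --namespace=ns2 get pod; ...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;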
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get the yaml without status information, an almost clean manifest. The '--export' flag was deprecated and later removed (kubectl v1.18).&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c sleep&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod name may be prefixed with a namespace; the destination file (&amp;lt;filename&amp;gt;) must be specified explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
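`kubectl cp` is implemented as a tar stream over `kubectl exec`, which is why the target container must have `tar` available. A local sketch of the same pipe, no cluster needed (paths are throwaway temp dirs):

```shell
# kubectl cp effectively runs:
#   kubectl exec <pod> -- tar cf - -C <dir> <file> | tar xf - -C <dest>
# demonstrate the same tar pipe locally
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/plot.html"
tar cf - -C "$src" plot.html | tar xf - -C "$dst"
cat "$dst/plot.html"   # hello
```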
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed; use &amp;lt;code&amp;gt;--generator=run-pod/v1&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecated generator syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
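The `/dev/tcp` test above can be wrapped in a small reusable function; note that `/dev/tcp` is a bash feature, not a real device file. A sketch (the host and port arguments are examples):

```shell
# returns 0 if a TCP connect to HOST PORT succeeds within 1s
port_open() {
  (timeout 1 bash -c "</dev/tcp/$1/$2") 2>/dev/null
}
port_open 127.0.0.1 1 && echo "PORT OPEN" || echo "PORT CLOSED"
```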
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If you are ever missing a command, you can use a Docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl -- http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check kubelet-metrics non secure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
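The krew installer above normalizes `uname -m` output with a sed chain. The same mapping in isolation, with sample machine strings fed in by hand:

```shell
# same sed chain the krew installer uses to map uname -m to Go-style arch names
normalize_arch() {
  echo "$1" | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/'
}
normalize_arch x86_64    # amd64
normalize_arch aarch64   # arm64
normalize_arch armv7l    # arm
```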
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all the deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading the cluster. It uses the swagger.json available in the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report; the script is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-${K8Sid}-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
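The report filename above packs the cluster id, a timestamp, and the from/to versions into one name. A sketch with the `kubectl` calls stubbed out so it runs anywhere (all values are examples):

```shell
# filename construction from the script above, with kubectl outputs stubbed
PREFIX=test
K8S_ID=11111111112222222222333333333344   # normally from 'kubectl cluster-info'
CURRENT_VER=v1.15.11-eks-af3caf           # normally from 'kubectl version --short'
TARGET_K8S_VER=v1.16.8
echo "${PREFIX}-${K8S_ID}-$(date +%Y%m%d-%H%M)-from-${CURRENT_VER}-to-${TARGET_K8S_VER}.yaml"
```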
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above will open Wireshark. Interesting articles to follow:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| The https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the new community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt; }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods; using this external tool on clusters with hundreds of containers can therefore put significant load on the API server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It re-uses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
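The download recipe above pulls the latest tag from the GitHub releases API. The extraction step in isolation, fed a canned API response instead of a live `curl` (the JSON is a sample; a sed filter is used here so the sketch has no jq dependency):

```shell
# canned GitHub API response; live version would come from
#   curl --silent "https://api.github.com/repos/$REPO/releases/latest"
json='{"tag_name": "v1.21.0", "name": "v1.21.0"}'
LATEST=$(echo "$json" | sed -En 's/.*"tag_name": "v?([^"]+)".*/\1/p')
echo "$LATEST"   # 1.21.0
```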
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
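The pod-query is a plain regular expression. The same pattern checked offline with `grep -E` (the pod names are made up):

```shell
# stern matches pod names against the query as a regex; grep -E shows the effect
printf '%s\n' proxy-abc12 gateway-def34 webfront-x9 | grep -E '(proxy|gateway)'
```

Only the proxy and gateway pods match; `webfront-x9` is filtered out.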
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # 'busybox' is a regex that matches busybox1|busybox2|etc&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
Kubernetes client; this is not a dashboard that needs to be installed on a cluster. Similar to KUI but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
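The sed-based tag extraction in the script above depends on GitHub's exact JSON indentation; a jq-based lookup, as used elsewhere on this page, is more robust. A minimal sketch, with an inline stand-in for the API payload so it runs offline:

```shell
# Parse the release tag with jq instead of sed; the printf below is a
# minimal stand-in for GitHub's /releases/latest JSON response
printf '%s' '{"tag_name": "v6.0.0"}' | jq -r .tag_name   # prints: v6.0.0

# Real lookup (network access required):
# NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | jq -r .tag_name)
```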
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenient install script below into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
sudo ln -sf /opt/Kui-linux-x64/Kui /usr/local/bin/kui&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ Kubernetes plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
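A minimal install sketch for the controller and its kubectl plugin, with URLs assumed from the argo-rollouts release documentation; verify against the current release page:

```shell
# Install the Argo Rollouts controller into its own namespace
# (manifest path assumed from the argo-rollouts docs)
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Install the kubectl plugin (Linux amd64 asset name assumed)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
sudo install kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
kubectl argo rollouts version
```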
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and requires nothing to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts let you troubleshoot and check the health status of the cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
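A sketch of fetching and running the health-check script; the raw-file path is assumed from the repository layout linked above:

```shell
# Download cluster-health.sh and run it against the current kubectl context
curl -sO https://raw.githubusercontent.com/amelbakry/kubernetes-scripts/master/cluster-health.sh
chmod +x cluster-health.sh
./cluster-health.sh
```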
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI). Used for debugging Kubernetes nodes with &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Containers created with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
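Typical debugging commands, run as root on a node; the runtime socket path is an assumption and differs per runtime:

```shell
# Point crictl at the runtime socket (containerd shown; CRI-O uses /var/run/crio/crio.sock)
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a  # all containers

sudo crictl pods                  # pod sandboxes on this node
sudo crictl images                # pulled images
sudo crictl logs CONTAINER_ID     # container logs
sudo crictl inspect CONTAINER_ID  # low-level container state
```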
= [https://github.com/weaveworks/kubediff kubediff] - show diff of code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS: Secrets OPerationS, sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
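A usage sketch: encrypt a manifest with a PGP key, then decrypt and apply it. The fingerprint and file names are placeholders:

```shell
# Encrypt a Secret manifest with a PGP key (FINGERPRINT is a placeholder)
sops --encrypt --pgp FINGERPRINT secret.yaml > secret.enc.yaml

# Decrypt and apply in one pipeline
sops --decrypt secret.enc.yaml | kubectl apply -f -
```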
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a conversion tool that takes a Docker Compose file and translates it into Kubernetes resources.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Lets you view packets dropped by iptables, which is useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s-node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and publishes matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
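An install-and-capture sketch via krew; the pod and namespace names are examples, and the flags are per the ksniff README:

```shell
# Install the plugin and capture traffic from a pod
kubectl krew install sniff
kubectl sniff mypod -n default -o capture.pcap  # write a pcap file instead of piping to Wireshark
kubectl sniff mypod -n default -p               # privileged mode, for scratch/distroless images
```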
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
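An install sketch with Helm for an Istio-backed setup, following the Flagger docs; the Prometheus URL is an example and depends on the mesh installation:

```shell
# Install Flagger into istio-system, pointing it at the mesh's Prometheus
helm repo add flagger https://flagger.app
helm upgrade -i flagger flagger/flagger \
  --namespace istio-system \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus:9090
```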
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
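Validation examples; the flag names are per the kubeconform README, and the manifest paths are placeholders:

```shell
# Validate a single manifest and a whole directory; skip schemas kubeconform
# doesn't know about (e.g. CRDs) and pin the target Kubernetes version
kubeconform -summary deployment.yaml
kubeconform -summary -ignore-missing-schemas -kubernetes-version 1.24.0 ./manifests/
```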
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I deployed v1.0.0 to the monitoring namespace alongside the existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run one 'yes' loop per CPU core to generate load&lt;br /&gt;
nproc # count processors (same as: grep -c processor /proc/cpuinfo)&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
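A variant of the load generator above that spawns one loop per core and cleans up after itself, so it is safe to run interactively:

```shell
# Spawn one busy loop per CPU core, hold the load briefly, then kill them all
CPUS=$(nproc)
for i in $(seq "$CPUS"); do
    yes > /dev/null &
done
sleep 5          # keep the load for 5 seconds
kill $(jobs -p)  # stop every background loop
```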
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b manages different ver kubectl] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7031</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7031"/>
		<updated>2024-07-02T21:53:07Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install kubectl ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.30.2&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.26.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/- 1 minor version of the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
GKE authentication uses a kubectl plugin called [https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke gke-gcloud-auth-plugin]&lt;br /&gt;
* [https://cloud.google.com/sdk/docs/install#deb Install Google Cloud SDK]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install apt-transport-https ca-certificates gnupg curl&lt;br /&gt;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main&amp;quot; | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list&lt;br /&gt;
sudo apt-get update &lt;br /&gt;
sudo apt-get install google-cloud-cli # optional&lt;br /&gt;
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
Minimum yq version required is v2.x, tested with yq 2.13.0. The example below updates the file in place with &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resource order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
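The multi-namespace eval trick above relies on bash brace expansion generating one command per context/namespace combination before eval runs them. Substituting echo for kubectl previews the commands safely without a cluster:

```shell
# Preview the commands the multi-context/multi-namespace eval would execute
eval 'echo kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'
# prints:
# kubectl --context=context1 --namespace=ns1 get pod
# kubectl --context=context1 --namespace=ns2 get pod
# kubectl --context=context2 --namespace=ns1 get pod
# kubectl --context=context2 --namespace=ns2 get pod
```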
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, an almost clean manifest. The '--export' flag was deprecated and removed in v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c sleep&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod may be prefixed with a namespace, and the destination file name (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies might be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
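The &amp;lt;code&amp;gt;/dev/tcp&amp;lt;/code&amp;gt; trick above can be wrapped in a small reusable function (a sketch; the &amp;lt;code&amp;gt;port_open&amp;lt;/code&amp;gt; name is mine):&lt;br /&gt;

```shell
# Sketch: TCP port check using bash's built-in /dev/tcp (no ping, nc or telnet needed)
port_open() {
    local host=$1 port=$2
    if (timeout 1 bash -c "</dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "PORT OPEN"
    else
        echo "PORT CLOSED"
    fi
}

# Usage
port_open 127.0.0.1 22
```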
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If a command is missing on a node, you can run a Docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check the kubelet's non-secure metrics endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
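&amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt; exits with 0 when there are no differences, 1 when differences are found, and &amp;gt;1 on error (hence the &amp;lt;code&amp;gt;exit status 1&amp;lt;/code&amp;gt; above), which makes it usable as a drift gate in CI. A sketch, with a hypothetical &amp;lt;code&amp;gt;diff_gate&amp;lt;/code&amp;gt; helper:&lt;br /&gt;

```shell
# Sketch: map an exit code to a drift verdict; wraps any command, e.g. kubectl diff
diff_gate() {
    "$@" >/dev/null 2>&1   # run the wrapped command, discard its output
    case $? in
        0) echo "in sync" ;;
        1) echo "drift detected" ;;
        *) echo "diff failed" ;;
    esac
}

# Usage
# diff_gate kubectl diff -f webfront-deploy.yaml
```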
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: lists the deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading the cluster. It uses the swagger.json from the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report. The script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
CURRENT_VER=$(kubectl version --short | grep Server | cut -f3 -d' ')&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$CURRENT_VER-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above opens Wireshark. Related articles:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| The https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt; }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A log tailing and cluster-landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using this external tool against clusters with hundreds of containers can put significant load on the API server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It reuses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
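The query-latest-tag-then-download pattern above repeats for several tools on this page (OpenLens, kui, popeye, k9s); the URL construction can be factored into a helper (a sketch; the &amp;lt;code&amp;gt;gh_release_url&amp;lt;/code&amp;gt; name and the example tag are mine):&lt;br /&gt;

```shell
# Sketch: compose a GitHub release-asset URL from repo, tag and asset file name
gh_release_url() {
    local repo=$1 tag=$2 file=$3
    echo "https://github.com/${repo}/releases/download/${tag}/${file}"
}

# Example (illustrative tag)
gh_release_url stern/stern v1.25.0 stern_1.25.0_linux_amd64.tar.gz
# -> https://github.com/stern/stern/releases/download/v1.25.0/stern_1.25.0_linux_amd64.tar.gz
```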
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # 'busybox' is a regex, so it also matches busybox1, busybox2, etc.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
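Where kubetail is not available, the core behaviour can be approximated in a few lines (a sketch; the &amp;lt;code&amp;gt;tail_pods&amp;lt;/code&amp;gt; name is mine, and &amp;lt;code&amp;gt;kubectl logs --prefix&amp;lt;/code&amp;gt; needs kubectl v1.17+):&lt;br /&gt;

```shell
# Sketch: follow logs from every pod whose name matches a pattern, merged into one stream
tail_pods() {
    local pattern=$1 ns=${2:-default} pod
    for pod in $(kubectl get pods -n "$ns" -o name | grep "$pattern"); do
        kubectl logs -f -n "$ns" --prefix "$pod" &   # one background follower per pod
    done
    wait   # stream until interrupted with Ctrl-C
}

# Usage (hypothetical pod name pattern)
# tail_pods webfront dev
```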
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A standalone Kubernetes client, not a dashboard that needs installing on a cluster. Similar to kui but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
kui is a terminal with visualizations, provided by IBM&lt;br /&gt;
&lt;br /&gt;
Install into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and optionally symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of kubectl get events -w.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
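&lt;br /&gt;
A minimal canary-strategy Rollout sketch; the image name and step values below are illustrative only:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: argoproj.io/v1alpha1&lt;br /&gt;
kind: Rollout&lt;br /&gt;
metadata:&lt;br /&gt;
  name: demo-rollout&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 5&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: demo&lt;br /&gt;
  template:&lt;br /&gt;
    metadata:&lt;br /&gt;
      labels:&lt;br /&gt;
        app: demo&lt;br /&gt;
    spec:&lt;br /&gt;
      containers:&lt;br /&gt;
      - name: demo&lt;br /&gt;
        image: nginx:1.21&lt;br /&gt;
  strategy:&lt;br /&gt;
    canary:&lt;br /&gt;
      steps:&lt;br /&gt;
      - setWeight: 20&lt;br /&gt;
      - pause: {duration: 60s}&lt;br /&gt;
      - setWeight: 50&lt;br /&gt;
      - pause: {}  # wait for manual promotion&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;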
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and requires nothing to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts let you troubleshoot and check the health status of the cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes with &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
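&lt;br /&gt;
&lt;br /&gt;
Typical debugging commands, run directly on a node (a minimal sketch; container IDs are placeholders):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
crictl pods                     # list pod sandboxes known to the runtime&lt;br /&gt;
crictl ps -a                    # list all containers&lt;br /&gt;
crictl images                   # list images&lt;br /&gt;
crictl logs &amp;lt;CONTAINER_ID&amp;gt;      # container logs&lt;br /&gt;
crictl inspect &amp;lt;CONTAINER_ID&amp;gt;   # low-level container details&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;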
= [https://github.com/weaveworks/kubediff kubediff] show diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
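&lt;br /&gt;
&lt;br /&gt;
Usage sketch, assuming your version-controlled manifests live under &amp;lt;code&amp;gt;./k8s&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubediff ./k8s   # diff all manifests in the directory against the running cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;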
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS: Secrets OPerationS, sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
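&lt;br /&gt;
&lt;br /&gt;
Usage sketch; the KMS key ARN and file names below are placeholders:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Encrypt a Kubernetes Secret manifest with AWS KMS&lt;br /&gt;
sops --encrypt --kms arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE secret.yaml &amp;gt; secret.enc.yaml&lt;br /&gt;
&lt;br /&gt;
# Decrypt&lt;br /&gt;
sops --decrypt secret.enc.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;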
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a tool that takes a Docker Compose file and converts it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - ip-table drop packages logger =&lt;br /&gt;
Allows you to view packets dropped by iptables, which is useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s-node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and publishes matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
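&lt;br /&gt;
&lt;br /&gt;
Usage sketch (installed via &amp;lt;code&amp;gt;kubectl krew install sniff&amp;lt;/code&amp;gt;); pod and namespace names are placeholders:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff &amp;lt;POD&amp;gt; -n &amp;lt;NAMESPACE&amp;gt;              # stream the capture straight into Wireshark&lt;br /&gt;
kubectl sniff &amp;lt;POD&amp;gt; -n &amp;lt;NAMESPACE&amp;gt; -o cap.pcap  # write a pcap file instead&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;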
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
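&lt;br /&gt;
&lt;br /&gt;
A minimal Canary resource sketch; the target deployment name and analysis thresholds below are illustrative:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: flagger.app/v1beta1&lt;br /&gt;
kind: Canary&lt;br /&gt;
metadata:&lt;br /&gt;
  name: demo&lt;br /&gt;
spec:&lt;br /&gt;
  targetRef:&lt;br /&gt;
    apiVersion: apps/v1&lt;br /&gt;
    kind: Deployment&lt;br /&gt;
    name: demo&lt;br /&gt;
  service:&lt;br /&gt;
    port: 80&lt;br /&gt;
  analysis:&lt;br /&gt;
    interval: 1m&lt;br /&gt;
    threshold: 5      # failed checks before rollback&lt;br /&gt;
    maxWeight: 50&lt;br /&gt;
    stepWeight: 10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;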
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
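&lt;br /&gt;
Usage sketch against a local manifests directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubeconform -summary ./manifests&lt;br /&gt;
kubeconform -summary -ignore-missing-schemas ./manifests  # skip CRDs that have no published schema&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;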
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the already existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application was crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Repeat the command below once per CPU core&lt;br /&gt;
grep -c processor /proc/cpuinfo # count processors (or use: nproc)&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
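&lt;br /&gt;
The manual steps above can be scripted as one loop; stop the load with &amp;lt;code&amp;gt;kill&amp;lt;/code&amp;gt; when done:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Start one busy loop per CPU core&lt;br /&gt;
for i in $(seq &amp;quot;$(nproc)&amp;quot;); do yes &amp;gt; /dev/null &amp;amp; done&lt;br /&gt;
&lt;br /&gt;
# Stop them all&lt;br /&gt;
kill $(jobs -p)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;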
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b manages different ver kubectl] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7030</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7030"/>
		<updated>2024-07-02T15:26:12Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '[.[] | select(.prerelease == false) | .tag_name] | map(sub(&amp;quot;^v&amp;quot;;&amp;quot;&amp;quot;)) | map(split(&amp;quot;.&amp;quot;)) | group_by(.[0:2]) | map(max_by(.[2]|tonumber)) | map(join(&amp;quot;.&amp;quot;)) | map(&amp;quot;v&amp;quot; + .) | sort | reverse | .[]'&lt;br /&gt;
v1.30.2&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.26.15&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should not be more than +/- 1 minor version away from the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
The minimum yq version required is v2.x; tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
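&lt;br /&gt;
A quick check that the change took effect (same Python-based yq as above):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -y '.clusters[0].cluster' ~/.kube/$ENVIRONMENT   # should now show the proxy-url key&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;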
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, an almost clean manifest. The '--export' flag was deprecated and removed in kubectl v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c sleep&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod must be prefixed with its namespace, and the destination file (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
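&lt;br /&gt;
Copying in the other direction (local to pod) follows the same pattern; the paths below are placeholders:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp ./local-file &amp;lt;namespace&amp;gt;/&amp;lt;pod&amp;gt;:/tmp/local-file -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;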
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to yamls&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing commands ===&lt;br /&gt;
If a command you need is missing, you can use a Docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check kubelet-metrics non-secure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
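&lt;br /&gt;
Usage sketch; context and namespace names are placeholders:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl ctx               # list contexts&lt;br /&gt;
kubectl ctx my-cluster    # switch context&lt;br /&gt;
kubectl ns                # list namespaces&lt;br /&gt;
kubectl ns dev            # set default namespace&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;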
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all deprecated objects in a Kubernetes cluster, allowing the operator to review them before upgrading the cluster. It uses the swagger.json from the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report; the script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
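The long redirection in the script just assembles the report file name; the naming scheme in isolation (pure string work, the version and cluster values below are example inputs, not queried from a cluster):&lt;br /&gt;

```shell
# Assemble the deprecation-report file name used by the script above.
# All arguments are example inputs; the real script derives them via kubectl.
report_name() {
  local prefix=$1 cluster_id=$2 from_ver=$3 to_ver=$4
  printf '%s-%s-%s-from-%s-to-%s.yaml' \
    "$prefix" "$cluster_id" "$(date +%Y%m%d-%H%M)" "$from_ver" "$to_ver"
}
```

&amp;lt;code&amp;gt;report_name test 1111 v1.15.11 v1.16.8&amp;lt;/code&amp;gt; produces a name like the listing above, with the current timestamp in the middle.&lt;br /&gt;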
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above opens Wireshark. Related articles:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print a sanitized Kubernetes manifest, with generated and read-only clutter removed.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| The https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the new community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt;  }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A log tailing and multi-pod viewing tool. It connects to the Kubernetes API server and streams logs from all matching pods, so using it against clusters with hundreds of containers can put significant load on the API server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It re-uses your kubectl config file to connect to clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
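The tag-then-download pattern above recurs for several tools on this page (stern, OpenLens, Kui, popeye, k9s, sops). A small helper capturing it; &amp;lt;code&amp;gt;gh_latest_tag&amp;lt;/code&amp;gt; needs network access and &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt;, and the repo argument is whatever &amp;lt;code&amp;gt;owner/name&amp;lt;/code&amp;gt; you need:&lt;br /&gt;

```shell
# Resolve the latest release tag of a GitHub repo (requires network + curl).
gh_latest_tag() {
  curl -s "https://api.github.com/repos/$1/releases/latest" |
    sed -En 's/.*"tag_name": *"([^"]+)".*/\1/p'
}

# Strip a leading 'v' from a tag, as some release asset names above require.
strip_v() { printf '%s' "${1#v}"; }
```

Example: &amp;lt;code&amp;gt;strip_v &amp;quot;$(gh_latest_tag stern/stern)&amp;quot;&amp;lt;/code&amp;gt; yields the bare version number.&lt;br /&gt;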
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # the pod-query is a regex, so 'busybox' also matches busybox-1, busybox-2, etc.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
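What kubetail automates can be approximated with plain &amp;lt;code&amp;gt;kubectl&amp;lt;/code&amp;gt;. A dry-run sketch that only prints the log commands it would run, one per pod (pod names are passed as arguments here, not discovered from a cluster):&lt;br /&gt;

```shell
# Print the `kubectl logs -f` command that would be run for each pod.
# Dry-run only: nothing is executed, so no cluster is needed.
tail_pods_cmds() {
  local ns=$1; shift
  local pod
  for pod in "$@"; do
    echo "kubectl logs -f -n $ns $pod"
  done
}
```

&amp;lt;code&amp;gt;tail_pods_cmds dev api-1 api-2&amp;lt;/code&amp;gt; prints two commands; kubetail additionally runs them concurrently and merges the streams with colour-coded pod prefixes.&lt;br /&gt;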
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A desktop Kubernetes client; unlike a dashboard, nothing needs to be installed on the cluster. Similar to Kui but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z &amp;quot;$OPENLENS_VERSION&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenience install script into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
sudo ln -sf /opt/Kui-linux-x64/Kui /usr/local/bin/kui&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Note: this install script is broken&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
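The colouring stage of &amp;lt;code&amp;gt;kube-events&amp;lt;/code&amp;gt; can be exercised without a cluster by feeding it a sample &amp;lt;code&amp;gt;^&amp;lt;/code&amp;gt;-separated line (the event line below is made up):&lt;br /&gt;

```shell
# The awk colouring stage from kube-events in isolation. tput may emit
# nothing on dumb terminals; the text then passes through uncoloured.
colorize_event() {
  awk -F^ \
    -v  blue="$(tput setaf 4 2>/dev/null)" \
    -v green="$(tput setaf 2 2>/dev/null)" \
    -v white="$(tput setaf 7 2>/dev/null)" \
    '{ $1=blue $1; $2=green $2; $3=white $3; } 1'
}

echo '2020-06-29T16:09:00Z ^ Pod ^ Started container ^ (myapp-0)' | colorize_event
```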
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for Kubernetes. It has no dependencies and nothing needs to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts let you troubleshoot and check the health status of a cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning:&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that Kubernetes will eventually delete containers created directly with this tool on a cluster node.&lt;br /&gt;
= [https://github.com/weaveworks/kubediff kubediff] show diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] (Mozilla Secrets OPerationS) is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/getsops/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
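Day-to-day usage, as a sketch (assumes encryption keys are already configured, e.g. via a &amp;lt;code&amp;gt;.sops.yaml&amp;lt;/code&amp;gt; creation rule; the file name is hypothetical):&lt;br /&gt;

```shell
SOPS_FILE=secrets.yaml   # hypothetical file name

# Guarded so this is safe to paste on a machine without sops
if command -v sops >/dev/null; then
  sops --encrypt --in-place "$SOPS_FILE"   # encrypt in place using keys from .sops.yaml
  sops --decrypt "$SOPS_FILE"              # print decrypted content to stdout
  sops "$SOPS_FILE"                        # edit decrypted content in $EDITOR, re-encrypt on save
fi
```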
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a tool that takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Lets you view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s-node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and surfaces matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output when a packet is dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod&lt;br /&gt;
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
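A sketch of installing Flagger with Helm for an Istio-backed mesh, following the pattern in the Flagger docs (the namespace and Prometheus URL below are assumptions about your cluster):&lt;br /&gt;

```shell
MESH_NS=istio-system   # assumption: where your mesh control plane lives

# Guarded so this is safe to paste on a machine without helm
if command -v helm >/dev/null; then
  helm repo add flagger https://flagger.app
  helm upgrade -i flagger flagger/flagger \
    --namespace "$MESH_NS" \
    --set meshProvider=istio \
    --set metricsServer=http://prometheus.istio-system:9090
fi
```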
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic EFK (Elasticsearch/Fluentd/Kibana) dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run one 'yes' loop per CPU core to generate full load&lt;br /&gt;
grep -c ^processor /proc/cpuinfo # count processors&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b managing different kubectl versions] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7029</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7029"/>
		<updated>2024-07-02T15:22:05Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List releases&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
v1.26.15&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.29.5&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.30.2&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/- 1 minor version of the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
Minimum yq version required is v2.x; tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, an almost clean manifest. The '--export' flag was deprecated and removed in kubectl v1.18.&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c 'sleep 7200'&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep 7200&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
Each pod may be prefixed with a namespace, and the destination file name (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One-liners moved to YAML manifests&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing commands ===&lt;br /&gt;
If you are ever missing a command, you can use a Docker container that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl -- http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check kubelet-metrics non secure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all the deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading the cluster. It uses the swagger.json available in the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report. Script specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above opens Wireshark. Related reading:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| The https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the new community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt;  }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using this external tool against clusters with hundreds of containers can put significant load on the kube-apiserver.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It re-uses your kubectl config file to connect to your clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
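The version-string handling in the snippet above can be checked in isolation. Note that &amp;lt;code&amp;gt;tr -d v&amp;lt;/code&amp;gt; deletes every &amp;quot;v&amp;quot; in the string; a plain-shell sketch that strips only a single leading &amp;quot;v&amp;quot; (the tag value is made up, no network or &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt; needed):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
tag=&amp;quot;v1.25.0&amp;quot;    # hypothetical tag_name value returned by the GitHub API&lt;br /&gt;
version=${tag#v}   # parameter expansion: remove a single leading 'v' only&lt;br /&gt;
echo $version      # 1.25.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;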
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
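The pod-query is a regular expression, so an alternation can be sanity-checked locally with &amp;lt;code&amp;gt;grep -E&amp;lt;/code&amp;gt; before pointing stern at a cluster (the pod names below are made up):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
printf 'proxy-abc\ngateway-xyz\nweb-123\n' | grep -E '(proxy|gateway)'&lt;br /&gt;
# proxy-abc&lt;br /&gt;
# gateway-xyz&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;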
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; # the pod-query is a regex, so 'busybox' matches busybox1, busybox2, etc.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A Kubernetes client; unlike a dashboard, it does not need to be installed on the cluster. Similar to Kui but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, provided by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenient install script into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
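kubectl discovers plugins such as the &amp;lt;code&amp;gt;kubectl-kui&amp;lt;/code&amp;gt; binary above simply by finding executables named &amp;lt;code&amp;gt;kubectl-NAME&amp;lt;/code&amp;gt; on the PATH. A minimal sketch of the mechanism with a made-up plugin, no cluster needed:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dir=$(mktemp -d)&lt;br /&gt;
printf '#!/bin/sh\necho hello from a plugin\n' &amp;gt; $dir/kubectl-hello&lt;br /&gt;
chmod +x $dir/kubectl-hello&lt;br /&gt;
export PATH=$PATH:$dir&lt;br /&gt;
kubectl-hello # kubectl would invoke this as: kubectl hello&lt;br /&gt;
# hello from a plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;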
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
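The awk trick above prepends a colour code to each &amp;lt;code&amp;gt;^&amp;lt;/code&amp;gt;-separated field. A standalone sketch with hard-coded ANSI escapes instead of &amp;lt;code&amp;gt;tput&amp;lt;/code&amp;gt;, and a made-up event line:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Fields separated by '^' as in kube-events above&lt;br /&gt;
printf '10:01 ^ Pod ^ Started container\n' | awk -F^ \&lt;br /&gt;
    -v  blue='\033[34m' \&lt;br /&gt;
    -v green='\033[32m' \&lt;br /&gt;
    -v reset='\033[0m' \&lt;br /&gt;
    '{ $1=blue $1 reset; $2=green $2 reset }  1'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;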
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and nothing needs to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and installs successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts let you troubleshoot and check the health status of a cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build multi node cluster for development.&lt;br /&gt;
On a single machine&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developer of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tool for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created directly with this tool on a Kubernetes cluster will eventually be deleted by Kubernetes.&lt;br /&gt;
= [https://github.com/weaveworks/kubediff kubediff] show diff code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] Mozilla SOPS: Secrets OPerationS, sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/mozilla/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
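A hypothetical &amp;lt;code&amp;gt;.sops.yaml&amp;lt;/code&amp;gt; creation rule, encrypting only the &amp;lt;code&amp;gt;data&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;stringData&amp;lt;/code&amp;gt; keys of Kubernetes Secret manifests; the PGP fingerprint is a placeholder:&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
creation_rules:&lt;br /&gt;
  - path_regex: .*secret.*\.yaml$&lt;br /&gt;
    encrypted_regex: ^(data|stringData)$&lt;br /&gt;
    pgp: &amp;quot;YOUR_PGP_FINGERPRINT&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;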
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
&amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; is a tool that takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Lets you view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; daemonset, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each k8s node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and sends the matches as Kubernetes events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output, when packet dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilize tcpdump and Wireshark to start a remote capture on any pod&lt;br /&gt;
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the already existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run one instance of the command per CPU core to fully load the node&lt;br /&gt;
grep -c processor /proc/cpuinfo # count processors&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
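&lt;br /&gt;
The same load can be generated inside the cluster; the pod name below is just an example:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# One busy-looping container; start more pods to scale the load&lt;br /&gt;
kubectl run cpu-load-1 --image=busybox:1.31.0 -- /bin/sh -c &amp;quot;yes &amp;gt; /dev/null&amp;quot;&lt;br /&gt;
kubectl top pod cpu-load-1   # requires metrics-server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;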
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f&amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Tools for switching Kubernetes contexts and setting the default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b managing different kubectl versions] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7028</id>
		<title>Kubernetes/Tools</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Kubernetes/Tools&amp;diff=7028"/>
		<updated>2024-07-02T15:21:46Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= kubectl =&lt;br /&gt;
== Install ==&lt;br /&gt;
List of kubectl [https://kubernetes.io/releases/ releases].&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
curl -s https://api.github.com/repos/kubernetes/kubernetes/releases | jq -r '.[].tag_name' | sort -V&lt;br /&gt;
v1.26.15&lt;br /&gt;
v1.27.15&lt;br /&gt;
v1.28.11&lt;br /&gt;
v1.29.5&lt;br /&gt;
v1.29.6&lt;br /&gt;
v1.30.2&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Latest&lt;br /&gt;
ARCH=amd64 # amd64|arm&lt;br /&gt;
VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt); echo $VERSION&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
&lt;br /&gt;
# Specific version&lt;br /&gt;
# Find specific Kubernetes release, then download kubectl&lt;br /&gt;
VERSION=v1.26.14; ARCH=amd64 # amd64|arm&lt;br /&gt;
curl -LO https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/$ARCH/kubectl&lt;br /&gt;
sudo install ./kubectl /usr/local/bin/kubectl&lt;br /&gt;
&lt;br /&gt;
# Note: sudo install := chmod +x ./kubectl; sudo mv&lt;br /&gt;
&lt;br /&gt;
# Verify; kubectl should be within +/-1 minor version of the api-server&lt;br /&gt;
kubectl version --short&lt;br /&gt;
Client Version: v1.26.14&lt;br /&gt;
Kustomize Version: v4.5.7&lt;br /&gt;
Server Version: v1.24.10&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Google way&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install kubectl if you don't already have a suitable version&lt;br /&gt;
# https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl&lt;br /&gt;
kubectl version --client || gcloud components install kubectl&lt;br /&gt;
kubectl get clusterrolebinding $(gcloud config get-value core/account)-cluster-admin ||&lt;br /&gt;
  kubectl create clusterrolebinding $(gcloud config get-value core/account)-cluster-admin \&lt;br /&gt;
  --clusterrole=cluster-admin \&lt;br /&gt;
  --user=&amp;quot;$(gcloud config get-value core/account)&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Autocompletion and kubeconfig ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
source &amp;lt;(kubectl completion bash); alias k=kubectl; complete -F __start_kubectl k&lt;br /&gt;
&lt;br /&gt;
# Set default namespace&lt;br /&gt;
kubectl config set-context --current --namespace=dev&lt;br /&gt;
kubectl config set-context $(kubectl config current-context) --namespace=dev&lt;br /&gt;
&lt;br /&gt;
vi ~/.kube/config&lt;br /&gt;
...&lt;br /&gt;
contexts:&lt;br /&gt;
- context:&lt;br /&gt;
    cluster: kubernetes&lt;br /&gt;
    user: kubernetes-admin&lt;br /&gt;
    namespace: web       # default namespace&lt;br /&gt;
  name: dev-frontend&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Add &amp;lt;code&amp;gt;proxy-url&amp;lt;/code&amp;gt; using &amp;lt;code&amp;gt;yq&amp;lt;/code&amp;gt; to kubeconfig ==&lt;br /&gt;
Requires yq v2.x or later; tested with yq 2.13.0. The example below updates the file in place (&amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
yq -i -y --indentless '.clusters[0].cluster += {&amp;quot;proxy-url&amp;quot;: &amp;quot;http://proxy.acme.com:8080&amp;quot;}' ~/.kube/$ENVIRONMENT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get resources and cheatsheet ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get a list of nodes&lt;br /&gt;
kubectl get nodes -o jsonpath=&amp;quot;{.items[*].metadata.name}&amp;quot;&lt;br /&gt;
ip-10-10-10-10.eu-west-1.compute.internal ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
&lt;br /&gt;
kubectl get nodes -oname&lt;br /&gt;
node/ip-10-10-10-10.eu-west-1.compute.internal&lt;br /&gt;
node/ip-10-10-10-20.eu-west-1.compute.internal&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
# Pods sorted by node name&lt;br /&gt;
kubectl get pods --sort-by=.spec.nodeName -owide -A&lt;br /&gt;
&lt;br /&gt;
# Watch a namespace in a convenient resources order | sts=statefulset, rs=replicaset, ep=endpoint, cm=configmap&lt;br /&gt;
watch -d kubectl -n dev get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels &lt;br /&gt;
   # note es - externalsecrets&lt;br /&gt;
watch -d 'kubectl get pv -owide --show-labels | grep -e &amp;lt;eg.NAMESPACE&amp;gt;'&lt;br /&gt;
watch -d helm list -A&lt;br /&gt;
&lt;br /&gt;
# Test your context by creating configMap&lt;br /&gt;
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2&lt;br /&gt;
kubectl delete configmap my-config&lt;br /&gt;
&lt;br /&gt;
# Watch multiple namespaces&lt;br /&gt;
eval 'kubectl --context='{context1,context2}' --namespace='{ns1,ns2}' get pod;'&lt;br /&gt;
eval kubectl\ --context={context1,context2}\ --namespace={ns1,ns2}\ get\ pod\;&lt;br /&gt;
watch -d eval 'kubectl -n '{default,ingress-nginx}' get sts,deploy,rc,rs,pods,svc,ep,ing,pvc,cm,sa,secret,es,cronjob,job -owide --show-labels;'&lt;br /&gt;
&lt;br /&gt;
# Auth, can-i&lt;br /&gt;
kubectl auth can-i delete pods&lt;br /&gt;
yes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get yaml from existing object ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml &amp;gt; ns.yaml&lt;br /&gt;
kubectl create namespace kiali --dry-run=client -o yaml | kubectl apply -f -&lt;br /&gt;
&lt;br /&gt;
# Saves version revision in metadata.annotations.kubectl.kubernetes.io/last-applied-configuration={..manifest_json..} &lt;br /&gt;
kubectl create ns foo --save-config&lt;br /&gt;
&lt;br /&gt;
# Get a yaml without status information, an almost clean manifest. The '--export' flag was deprecated and later removed (kubectl v1.18).&lt;br /&gt;
kubectl -n web get pod &amp;lt;podName&amp;gt; -oyaml --export&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate a pod manifest, the cleanest way I know&lt;br /&gt;
&amp;lt;syntaxhighlightjs lang=bash&amp;gt;&lt;br /&gt;
# kubectl -n foo run --image=ubuntu:20.04 ubuntu-1 --dry-run=client -oyaml -- bash -c 'sleep 7200'&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  creationTimestamp: null  # &amp;lt;- can be deleted&lt;br /&gt;
  labels:&lt;br /&gt;
    run: ubuntu-1&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
  namespace: foo&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - args:&lt;br /&gt;
    - bash&lt;br /&gt;
    - -c&lt;br /&gt;
    - sleep 7200&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
    resources: {}  # &amp;lt;- can be deleted&lt;br /&gt;
  dnsPolicy: ClusterFirst&lt;br /&gt;
  restartPolicy: Always&lt;br /&gt;
status: {}         # &amp;lt;- can be deleted&lt;br /&gt;
&amp;lt;/syntaxhighlightjs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
The pod name can be prefixed with a namespace, and the local destination path (&amp;lt;filename&amp;gt;) must be given explicitly. Recursive copies can be tricky.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl cp [[namespace/]pod:]file/path ./&amp;lt;filename&amp;gt; -c &amp;lt;container_name&amp;gt;&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:plot.html ./plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
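&lt;br /&gt;
Directories are copied recursively by default (&amp;lt;code&amp;gt;kubectl cp&amp;lt;/code&amp;gt; uses tar under the hood); the paths below are illustrative, reusing the pod above:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Copy a whole directory out of a pod, and a local file into it&lt;br /&gt;
kubectl cp vegeta/vegeta-5847d879d8-p9kqw:/var/log ./pod-logs -c vegeta&lt;br /&gt;
kubectl cp ./plot.html vegeta/vegeta-5847d879d8-p9kqw:/tmp/plot.html -c vegeta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;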
&lt;br /&gt;
== One liners ==&lt;br /&gt;
=== Single purpose pods ===&lt;br /&gt;
Note: &amp;lt;code&amp;gt;--generator=deployment/apps.v1&amp;lt;/code&amp;gt; is DEPRECATED and will be removed, use &amp;lt;code&amp;gt;--generator=run-pod/v1 &amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;kubectl create&amp;lt;/code&amp;gt; instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Exec to deployment, no need to specify unique pod name&lt;br /&gt;
kubectl exec -it deploy/sleep -- curl httpbin:8000/headers&lt;br /&gt;
&lt;br /&gt;
NS=mynamespace; LABEL='app.kubernetes.io/name=myvalue'&lt;br /&gt;
kubectl exec -n $NS -it $(kubectl get pod -l &amp;quot;$LABEL&amp;quot; -n $NS -o jsonpath='{.items[0].metadata.name}') -- bash&lt;br /&gt;
&lt;br /&gt;
# Echo server&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 hello-1 --port=8080&lt;br /&gt;
&lt;br /&gt;
# Single purpose pods&lt;br /&gt;
kubectl run    --image=bitnami/kubectl:1.21.8 kubectl-1    --rm -it -- get pods&lt;br /&gt;
kubectl run    --image=appropriate/curl       curl-1       --rm -it -- sh&lt;br /&gt;
kubectl run    --image=ubuntu:18.04     ubuntu-1  --rm -it -- bash&lt;br /&gt;
kubectl create --image=ubuntu:20.04     ubuntu-2  --rm -it -- bash&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-1 --rm -it -- sh          # exec and delete when completed&lt;br /&gt;
kubectl run    --image=busybox:1.31.0   busybox-2          -- sleep 7200  # sleep, so you can exec&lt;br /&gt;
kubectl run    --image=alpine           alpine-1  --rm -it -- ping -c 1 8.8.8.8&lt;br /&gt;
 docker run    --rm -it --name alpine-1 alpine                ping -c 1 8.8.8.8&lt;br /&gt;
&lt;br /&gt;
# Network-multitool | https://github.com/wbitt/Network-MultiTool | Runs as a webserver, so won't complete.&lt;br /&gt;
kubectl run    --image=wbitt/network-multitool multitool-1&lt;br /&gt;
kubectl create deployment multitool --image=wbitt/network-multitool&lt;br /&gt;
kubectl exec -it multitool-1          -- /bin/bash&lt;br /&gt;
kubectl exec -it deployment/multitool -- /bin/bash&lt;br /&gt;
docker run --rm -it --name network-multitool wbitt/network-multitool bash&lt;br /&gt;
&lt;br /&gt;
# Curl&lt;br /&gt;
kubectl run test --image=tutum/curl -- sleep 10000&lt;br /&gt;
&lt;br /&gt;
# Deprecation syntax&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=run-pod/v1         hello-1 --port=8080 # VALID!&lt;br /&gt;
kubectl run --image=k8s.gcr.io/echoserver:1.4 --generator=deployment/apps.v1 hello-1 --port=8080 # &amp;lt;- deprecated&lt;br /&gt;
&lt;br /&gt;
# Errors&lt;br /&gt;
# | error: --rm should only be used for attached containers&lt;br /&gt;
# | Error: unknown flag: --image # when kubectl create --image&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Additional software&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Process and network commands&lt;br /&gt;
export DEBIAN_FRONTEND=noninteractive # Ubuntu 20.04&lt;br /&gt;
DEBIAN_FRONTEND=noninteractive apt install -yq dnsutils iproute2 iputils-ping iputils-tracepath net-tools netcat procps&lt;br /&gt;
# | dnsutils     - nslookup, dig&lt;br /&gt;
# | iproute2     - ip addr, ss&lt;br /&gt;
# | iputils-ping      - ping&lt;br /&gt;
# | iputils-tracepath - tracepath&lt;br /&gt;
# | net-tools    - ifconfig&lt;br /&gt;
# | netcat       - nc&lt;br /&gt;
# | procps       - ps, top&lt;br /&gt;
&lt;br /&gt;
# Databases&lt;br /&gt;
apt install -yq redis-tools&lt;br /&gt;
apt install -yq postgresql-client&lt;br /&gt;
&lt;br /&gt;
# AWS cli v1 - Debian&lt;br /&gt;
apt install python-pip&lt;br /&gt;
pip install awscli&lt;br /&gt;
&lt;br /&gt;
# Network test without ping, nc or telnet&lt;br /&gt;
(timeout 1 bash -c '&amp;lt;/dev/tcp/127.0.0.1/22 &amp;amp;&amp;amp; echo PORT OPEN || echo PORT CLOSED') 2&amp;gt;/dev/null&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;kubectl heredocs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;One lines move to yamls&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
# kubectl exec -it ubuntu-2 -- bash&lt;br /&gt;
kubectl apply -f &amp;lt;(cat &amp;lt;&amp;lt;EOF&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
# namespace: default&lt;br /&gt;
  name: ubuntu-1&lt;br /&gt;
# annotations:&lt;br /&gt;
#   kubernetes.io/psp: eks.privileged&lt;br /&gt;
# labels:&lt;br /&gt;
#   app: ubuntu&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - command:&lt;br /&gt;
    - &amp;quot;sleep&amp;quot;&lt;br /&gt;
    - &amp;quot;7200&amp;quot;&lt;br /&gt;
#   args:&lt;br /&gt;
#   - &amp;quot;bash&amp;quot;&lt;br /&gt;
    image: ubuntu:20.04&lt;br /&gt;
    imagePullPolicy: IfNotPresent&lt;br /&gt;
    name: ubuntu-1&lt;br /&gt;
#   securityContext:&lt;br /&gt;
#     privileged: true&lt;br /&gt;
#   tty: true&lt;br /&gt;
# dnsPolicy: ClusterFirst&lt;br /&gt;
# enableServiceLinks: true&lt;br /&gt;
  restartPolicy: Never&lt;br /&gt;
# serviceAccount    : sa1&lt;br /&gt;
# serviceAccountName: sa1&lt;br /&gt;
# nodeSelector:&lt;br /&gt;
#   node.kubernetes.io/lifecycle: spot&lt;br /&gt;
EOF&lt;br /&gt;
) --dry-run=server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Docker - for a single missing command ===&lt;br /&gt;
If you are missing a command, you can use a Docker container image that packages it:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# curl - missing on minikube node that runs CoreOS&lt;br /&gt;
minikube -p metrics ip; minikube ssh&lt;br /&gt;
docker run appropriate/curl -- http://&amp;lt;NodeIP&amp;gt;:10255/stats/summary # check kubelet-metrics non secure endpoint&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/ &amp;lt;code&amp;gt;kubectl diff&amp;lt;/code&amp;gt;] ==&lt;br /&gt;
Shows the differences between the current '''live''' object and the new '''dry-run''' object.&lt;br /&gt;
&amp;lt;source lang=diff&amp;gt;&lt;br /&gt;
kubectl diff -f webfront-deploy.yaml&lt;br /&gt;
diff -u -N /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy&lt;br /&gt;
--- /tmp/LIVE-761963756/apps.v1.Deployment.default.webfront-deploy      2019-10-13 17:46:59.784000000 +0000&lt;br /&gt;
+++ /tmp/MERGED-431884635/apps.v1.Deployment.default.webfront-deploy    2019-10-13 17:46:59.788000000 +0000&lt;br /&gt;
@@ -4,7 +4,7 @@&lt;br /&gt;
   annotations:&lt;br /&gt;
     deployment.kubernetes.io/revision: &amp;quot;1&amp;quot;&lt;br /&gt;
   creationTimestamp: &amp;quot;2019-10-13T16:38:43Z&amp;quot;&lt;br /&gt;
-  generation: 2&lt;br /&gt;
+  generation: 3&lt;br /&gt;
   labels:&lt;br /&gt;
     app: webfront-deploy&lt;br /&gt;
   name: webfront-deploy&lt;br /&gt;
@@ -14,7 +14,7 @@&lt;br /&gt;
   uid: ebaf757e-edd7-11e9-8060-0a2fb3cdd79a&lt;br /&gt;
 spec:&lt;br /&gt;
   progressDeadlineSeconds: 600&lt;br /&gt;
-  replicas: 2&lt;br /&gt;
+  replicas: 1&lt;br /&gt;
   revisionHistoryLimit: 10&lt;br /&gt;
   selector:&lt;br /&gt;
     matchLabels:&lt;br /&gt;
@@ -29,6 +29,7 @@&lt;br /&gt;
       creationTimestamp: null&lt;br /&gt;
       labels:&lt;br /&gt;
         app: webfront-deploy&lt;br /&gt;
+        role: webfront&lt;br /&gt;
     spec:&lt;br /&gt;
       containers:&lt;br /&gt;
       - image: nginx:1.7.8&lt;br /&gt;
exit status 1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Kubectl-plugins - [https://krew.sigs.k8s.io/docs/ Krew] plugin manager ==&lt;br /&gt;
Install [https://github.com/kubernetes-sigs/krew krew] package manager for kubectl plugins, requires K8s v1.12+&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
(&lt;br /&gt;
  set -x; cd &amp;quot;$(mktemp -d)&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  OS=&amp;quot;$(uname | tr '[:upper:]' '[:lower:]')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ARCH=&amp;quot;$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  KREW=&amp;quot;krew-${OS}_${ARCH}&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  curl -fsSLO &amp;quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  tar zxvf &amp;quot;${KREW}.tar.gz&amp;quot; &amp;amp;&amp;amp;&lt;br /&gt;
  ./&amp;quot;${KREW}&amp;quot; install krew&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
# update PATH&lt;br /&gt;
[ -d ${HOME}/.krew/bin ] &amp;amp;&amp;amp; export PATH=&amp;quot;${PATH}:${HOME}/.krew/bin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List plugins&lt;br /&gt;
kubectl krew search&lt;br /&gt;
&lt;br /&gt;
# Install plugins&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
&lt;br /&gt;
# Upgrade plugins&lt;br /&gt;
kubectl krew upgrade&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*[https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md Available kubectl plugins] Github&lt;br /&gt;
*[https://ahmet.im/blog/kubectl-plugins/ kubectl subcommands] write your own plugin&lt;br /&gt;
&lt;br /&gt;
== Install kubectl plugins ==&lt;br /&gt;
&amp;lt;code&amp;gt;kubectl ctx&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kubectl ns&amp;lt;/code&amp;gt; - change context and set default namespace&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install ctx ns&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
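&lt;br /&gt;
Typical usage (context and namespace names are examples):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl ctx                  # list contexts&lt;br /&gt;
kubectl ctx dev-frontend     # switch context&lt;br /&gt;
kubectl ctx -                # switch back to the previous context&lt;br /&gt;
kubectl ns dev               # set default namespace for the current context&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;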
&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;kubectl cssh&amp;lt;/code&amp;gt; - SSH into Kubernetes nodes ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ssh to all nodes, example below for EKS v1.15.11&lt;br /&gt;
kubectl cssh -u ec2-user -i /git/secrets/ssh/dev.pem -a &amp;quot;InternalIP&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;[https://github.com/FairwindsOps/pluto kubectl deprecations]&amp;lt;/code&amp;gt;: shows all the deprecated objects in a Kubernetes cluster, allowing the operator to verify them before upgrading the cluster. It uses the swagger.json available in the master branch of the Kubernetes repository (https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec) as a reference.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl deprecations&lt;br /&gt;
StatefulSet found in statefulsets.apps/v1beta1&lt;br /&gt;
	 ├─ API REMOVED FROM THE CURRENT VERSION AND SHOULD BE MIGRATED IMMEDIATELY!!&lt;br /&gt;
		-&amp;gt; OBJECT: myapp namespace: mynamespace1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-upgrade report. The script below is specific to EKS.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
[[ $# -eq 0 ]] &amp;amp;&amp;amp; echo &amp;quot;no args, provide prefix for the file name&amp;quot; &amp;amp;&amp;amp; exit 1&lt;br /&gt;
PREFIX=$1&lt;br /&gt;
TARGET_K8S_VER=v1.16.8&lt;br /&gt;
K8Sid=$(kubectl cluster-info | head -1 | cut -d'/' -f3 | cut -d'.' -f1)&lt;br /&gt;
kubectl deprecations --k8s-version $TARGET_K8S_VER &amp;gt; $PREFIX-$K8Sid-$(date +&amp;quot;%Y%m%d-%H%M&amp;quot;)-from-$(kubectl version --short | grep Server | cut -f3 -d' ')-to-${TARGET_K8S_VER}.yaml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ ./kube-deprecations.sh test&lt;br /&gt;
$ ls -l&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant 29356 Jun 29 16:09 test-11111111112222222222333333333344-20200629-1609-from-v1.15.11-eks-af3caf-to-latest.yaml&lt;br /&gt;
-rw-rw-r-- 1 vagrant vagrant   852 Jun 30 22:41 test-11111111112222222222333333333344-20200630-2241-from-v1.15.11-eks-af3caf-to-v1.16.8.yaml&lt;br /&gt;
-rwxrwxr-x 1 vagrant vagrant   437 Jun 30 22:41 kube-deprecations.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;===&lt;br /&gt;
;&amp;lt;code&amp;gt;kubectl df-pv&amp;lt;/code&amp;gt;: Show disk usage (like unix df) for persistent volumes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl df-pv&lt;br /&gt;
PVC                   NAMESPACE   POD                    SIZE          USED        AVAILABLE     PERCENTUSED   IUSED   IFREE     PERCENTIUSED&lt;br /&gt;
rdbms-volume          shared1     rdbms-d494fbf4-xrssk   2046640128    252817408   1777045504    12.35         688     130384    0.52&lt;br /&gt;
userdata-0            shared2     mft-0                  21003583488   57692160    20929114112   0.27          749     1309971   0.06&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl sniff&amp;lt;/code&amp;gt;===&lt;br /&gt;
Start a remote packet capture on pods using tcpdump.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl sniff hello-minikube-7c77b68cff-qbvsd -c hello-minikube&lt;br /&gt;
# Flags:&lt;br /&gt;
#   -c, --container string             container (optional)&lt;br /&gt;
#   -x, --context string               kubectl context to work on (optional)&lt;br /&gt;
#   -f, --filter string                tcpdump filter (optional)&lt;br /&gt;
#   -h, --help                         help for sniff&lt;br /&gt;
#       --image string                 the privileged container image (optional)&lt;br /&gt;
#   -i, --interface string             pod interface to packet capture (optional) (default &amp;quot;any&amp;quot;)&lt;br /&gt;
#   -l, --local-tcpdump-path string    local static tcpdump binary path (optional)&lt;br /&gt;
#   -n, --namespace string             namespace (optional) (default &amp;quot;default&amp;quot;)&lt;br /&gt;
#   -o, --output-file string           output file path, tcpdump output will be redirect to this file instead of wireshark (optional) ('-' stdout)&lt;br /&gt;
#   -p, --privileged                   if specified, ksniff will deploy another pod that have privileges to attach target pod network namespace&lt;br /&gt;
#   -r, --remote-tcpdump-path string   remote static tcpdump binary path (optional) (default &amp;quot;/tmp/static-tcpdump&amp;quot;)&lt;br /&gt;
#   -v, --verbose                      if specified, ksniff output will include debug information (optional)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The command above will open Wireshark. Interesting articles to follow:&lt;br /&gt;
* [https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/#set-up-the-cluster mutual TLS] istio&lt;br /&gt;
* [https://dzone.com/articles/verifying-service-mesh-tls-in-kubernetes-using-ksn Verifying Service Mesh TLS in Kubernetes, Using Ksniff and Wireshark]&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;code&amp;gt;kubectl neat&amp;lt;/code&amp;gt;===&lt;br /&gt;
Print sanitized Kubernetes manifest.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
kubectl get csec  dummy-secret -n clustersecret -oyaml | kubectl neat&lt;br /&gt;
apiVersion: clustersecret.io/v1&lt;br /&gt;
data:&lt;br /&gt;
  tls.crt: ***&lt;br /&gt;
  tls.key: ***&lt;br /&gt;
kind: ClusterSecret&lt;br /&gt;
matchNamespace:&lt;br /&gt;
- anothernamespace&lt;br /&gt;
metadata:&lt;br /&gt;
  name: dummy-secret&lt;br /&gt;
  namespace: clustersecret&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting help like manpages &amp;lt;code&amp;gt;kubectl explain&amp;lt;/code&amp;gt; ==&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ kubectl --help&lt;br /&gt;
$ kubectl get --help&lt;br /&gt;
$ kubectl explain --help&lt;br /&gt;
$ kubectl explain pod.spec.containers # kubectl knows cluster version, so gives you correct schema details&lt;br /&gt;
$ kubectl explain pods.spec.tolerations --recursive # show only fields&lt;br /&gt;
(...)&lt;br /&gt;
FIELDS:&lt;br /&gt;
   effect	&amp;lt;string&amp;gt;&lt;br /&gt;
   key	&amp;lt;string&amp;gt;&lt;br /&gt;
   operator	&amp;lt;string&amp;gt;&lt;br /&gt;
   tolerationSeconds	&amp;lt;integer&amp;gt;&lt;br /&gt;
   value	&amp;lt;string&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong- kubectl-commands] K8s interactive kubectl command reference&lt;br /&gt;
&lt;br /&gt;
= Watch Containers logs =&lt;br /&gt;
== [https://github.com/stern/stern Stern] ==&lt;br /&gt;
{{note| https://github.com/wercker/stern repository has no activity [https://github.com/wercker/stern/issues/140 ISSUE-140]; the new community-maintained repo is &amp;lt;tt&amp;gt;[https://github.com/stern/stern stern/stern]&amp;lt;/tt&amp;gt; }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log tailing and landscape viewing tool. It connects to the kube-apiserver and streams logs from all matching pods, so using it against clusters with hundreds of containers can put significant load on the kube-apiserver.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It reuses your kubectl config file to connect to clusters, so it works out of the box.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Govendor - this module manager is required&lt;br /&gt;
export GOPATH=$HOME/go        # path where go modules can be found, used by 'go get -u &amp;lt;url&amp;gt;'&lt;br /&gt;
export PATH=$PATH:$GOPATH/bin # path to the additional 'go' binaries&lt;br /&gt;
go get -u github.com/kardianos/govendor  # there will be no output&lt;br /&gt;
&lt;br /&gt;
# Stern (official)&lt;br /&gt;
mkdir -p $GOPATH/src/github.com/stern # new link: https://github.com/stern/stern&lt;br /&gt;
cd $GOPATH/src/github.com/stern&lt;br /&gt;
git clone https://github.com/stern/stern.git &amp;amp;&amp;amp; cd stern&lt;br /&gt;
govendor sync # there will be no output, may take 2 min&lt;br /&gt;
go install    # no output&lt;br /&gt;
&lt;br /&gt;
# Stern latest, download binary, no need for govendor&lt;br /&gt;
REPO=stern/stern&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=stern_${LATEST}_linux_amd64&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/v${LATEST}/$FILE.tar.gz -o $TEMPDIR/$FILE.tar.gz&lt;br /&gt;
sudo tar xzvf $TEMPDIR/$FILE.tar.gz -C /usr/local/bin/ stern&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Regex filter (pod-query) to match 2 pods patterns 'proxy' and 'gateway'&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config \(proxy\|gateway\)  # escape to protect regex mod characters&lt;br /&gt;
stern -n dev --kubeconfig ~/.kube/dev-config '(proxy|gateway)'   # single-quote to protect mod characters&lt;br /&gt;
&lt;br /&gt;
# Template the output&lt;br /&gt;
stern --template '{{.Message}} ({{.NodeName}}/{{.Namespace}}/{{.PodName}}/{{.ContainerName}}){{&amp;quot;\n&amp;quot;}}' .&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Help&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ stern&lt;br /&gt;
Tail multiple pods and containers from Kubernetes&lt;br /&gt;
&lt;br /&gt;
Usage:&lt;br /&gt;
  stern pod-query [flags]&lt;br /&gt;
&lt;br /&gt;
Flags:&lt;br /&gt;
  -A, --all-namespaces             If present, tail across all namespaces. A specific namespace is ignored even if specified with --namespace.&lt;br /&gt;
      --color string               Color output. Can be 'always', 'never', or 'auto' (default &amp;quot;auto&amp;quot;)&lt;br /&gt;
      --completion string          Outputs stern command-line completion code for the specified shell. Can be 'bash' or 'zsh'&lt;br /&gt;
  -c, --container string           Container name when multiple containers in pod (default &amp;quot;.*&amp;quot;)&lt;br /&gt;
      --container-state string     If present, tail containers with status in running, waiting or terminated. Default to running. (default &amp;quot;running&amp;quot;)&lt;br /&gt;
      --context string             Kubernetes context to use. Default to current context configured in kubeconfig.&lt;br /&gt;
  -e, --exclude strings            Regex of log lines to exclude&lt;br /&gt;
  -E, --exclude-container string   Exclude a Container name&lt;br /&gt;
  -h, --help                       help for stern&lt;br /&gt;
  -i, --include strings            Regex of log lines to include&lt;br /&gt;
      --init-containers            Include or exclude init containers (default true)&lt;br /&gt;
      --kubeconfig string          Path to kubeconfig file to use&lt;br /&gt;
  -n, --namespace string           Kubernetes namespace to use. Default to namespace configured in Kubernetes context.&lt;br /&gt;
  -o, --output string              Specify predefined template. Currently support: [default, raw, json] (default &amp;quot;default&amp;quot;)&lt;br /&gt;
  -l, --selector string            Selector (label query) to filter on. If present, default to &amp;quot;.*&amp;quot; for the pod-query.&lt;br /&gt;
  -s, --since duration             Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 48h.&lt;br /&gt;
      --tail int                   The number of lines from the end of the logs to show. Defaults to -1, showing all logs. (default -1)&lt;br /&gt;
      --template string            Template to use for log lines, leave empty to use --output flag&lt;br /&gt;
  -t, --timestamps                 Print timestamps&lt;br /&gt;
  -v, --version                    Print the version and exit&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
stern &amp;lt;pod&amp;gt;&lt;br /&gt;
stern --tail 1 busybox -n &amp;lt;namespace&amp;gt; #this is RegEx that matches busybox1|2|etc&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== [https://github.com/johanhaleby/kubetail kubetail] ==&lt;br /&gt;
Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;lt;code&amp;gt;kubectl logs -f&amp;lt;/code&amp;gt; but for multiple pods.&lt;br /&gt;
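&lt;br /&gt;
Typical usage (pod and label names are examples):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubetail my-app                  # tail all pods whose name starts with my-app&lt;br /&gt;
kubetail my-app -c my-container  # limit to one container&lt;br /&gt;
kubetail -l app=my-app -n dev    # select pods by label in a namespace&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;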
&lt;br /&gt;
= [https://github.com/lensapp/lens Lens | Kubernetes IDE] =&lt;br /&gt;
A Kubernetes client; this is not a dashboard that needs installing on a cluster. Similar to KUI but much more powerful.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Deb&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
sudo apt-get install ./Lens-5.4.1-latest.20220304.1.amd64.deb&lt;br /&gt;
&lt;br /&gt;
# Snap&lt;br /&gt;
snap list&lt;br /&gt;
sudo snap install kontena-lens --classic # U16.04+, tested on U20.04&lt;br /&gt;
&lt;br /&gt;
# Install from a .snap file&lt;br /&gt;
mkdir -p ~/Downloads/kontena-lens &amp;amp;&amp;amp; cd $_&lt;br /&gt;
snap download kontena-lens&lt;br /&gt;
sudo snap ack     kontena-lens_152.assert         # add an assertion to the system assertion database&lt;br /&gt;
sudo snap install kontena-lens_152.snap --classic # --dangerous if you do not have the assert file&lt;br /&gt;
&lt;br /&gt;
# download snap from https://k8slens.dev/&lt;br /&gt;
curl -LO https://api.k8slens.dev/binaries/Lens-5.3.4-latest.20220120.1.amd64.snap&lt;br /&gt;
sudo snap install Lens-5.3.4-latest.20220120.1.amd64.snap --classic --dangerous&lt;br /&gt;
&lt;br /&gt;
# Info&lt;br /&gt;
$ snap info kontena-lens_152.assert&lt;br /&gt;
name:      kontena-lens&lt;br /&gt;
summary:   Lens - The Kubernetes IDE&lt;br /&gt;
publisher: Mirantis Inc (jakolehm)&lt;br /&gt;
store-url: https://snapcraft.io/kontena-lens&lt;br /&gt;
contact:   info@k8slens.dev&lt;br /&gt;
license:   Proprietary&lt;br /&gt;
description: |&lt;br /&gt;
  Lens is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily&lt;br /&gt;
  basis. Ensure your clusters are properly setup and configured. Enjoy increased visibility, real&lt;br /&gt;
  time statistics, log streams and hands-on troubleshooting capabilities. With Lens, you can work&lt;br /&gt;
  with your clusters more easily and fast, radically improving productivity and the speed of&lt;br /&gt;
  business.&lt;br /&gt;
snap-id: Dek6y5mTEPxhySFKPB4Z0WVi5EPS9osS&lt;br /&gt;
channels:&lt;br /&gt;
  latest/stable:    4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/candidate: 4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/beta:      4.0.7      2021-01-20 (152) 107MB classic&lt;br /&gt;
  latest/edge:      4.1.0-rc.1 2021-02-11 (157) 108MB classic&lt;br /&gt;
&lt;br /&gt;
$ snap info kontena-lens_152.snap&lt;br /&gt;
path:       &amp;quot;kontena-lens_152.snap&amp;quot;&lt;br /&gt;
name:       kontena-lens&lt;br /&gt;
summary:    Lens&lt;br /&gt;
version:    4.0.7 classic&lt;br /&gt;
build-date: 24 days ago, at 16:31 GMT&lt;br /&gt;
license:    unset&lt;br /&gt;
description: |&lt;br /&gt;
  Lens - The Kubernetes IDE&lt;br /&gt;
commands:&lt;br /&gt;
  - kontena-lens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/lensapp/lens.git OpenLens] | Kubernetes IDE =&lt;br /&gt;
Download binary from https://github.com/MuhammedKalkan/OpenLens&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
SUDO=''&lt;br /&gt;
if (( $EUID != 0 )); then&lt;br /&gt;
    SUDO='sudo'&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
REPO=MuhammedKalkan/OpenLens&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name | tr -d v); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=OpenLens-${LATEST}.amd64.deb&lt;br /&gt;
curl -L https://github.com/${REPO}/releases/download/v${LATEST}/$FILE -o $TEMPDIR/$FILE&lt;br /&gt;
$SUDO dpkg -i $TEMPDIR/$FILE&lt;br /&gt;
$SUDO apt-get install -y --fix-broken&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build your own - [https://gist.github.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9 gist]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
install_deps_windows() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Windows)...&amp;quot;&lt;br /&gt;
    choco install -y make visualstudio2019buildtools visualstudio2019-workload-vctools&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_darwin() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Darwin)...&amp;quot;&lt;br /&gt;
    xcode-select --install&lt;br /&gt;
    if ! hash make 2&amp;gt;/dev/null; then&lt;br /&gt;
        if ! hash brew 2&amp;gt;/dev/null; then&lt;br /&gt;
            echo &amp;quot;Installing Homebrew...&amp;quot;&lt;br /&gt;
            /bin/bash -c &amp;quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Installing make via Homebrew...&amp;quot;&lt;br /&gt;
        brew install make&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_deps_posix() {&lt;br /&gt;
    echo &amp;quot;Installing Build Dependencies (Posix)...&amp;quot;&lt;br /&gt;
    sudo apt-get install -y make g++ curl&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_darwin() {&lt;br /&gt;
    echo &amp;quot;Killing OpenLens (if open)...&amp;quot;&lt;br /&gt;
    killall OpenLens&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Darwin)...&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$HOME/Applications/OpenLens.app&amp;quot;&lt;br /&gt;
    arch=&amp;quot;mac&amp;quot;&lt;br /&gt;
    if [[ &amp;quot;$(uname -m)&amp;quot; == &amp;quot;arm64&amp;quot; ]]; then&lt;br /&gt;
        arch=&amp;quot;mac-arm64&amp;quot;  # credit @teefax&lt;br /&gt;
    fi&lt;br /&gt;
    cp -Rfp &amp;quot;$tempdir/lens/dist/$arch/OpenLens.app&amp;quot; &amp;quot;$HOME/Applications/&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_posix() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Posix)...&amp;quot;&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    sudo dpkg -i &amp;quot;$(ls -Art $tempdir/lens/dist/*.deb  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_windows() {&lt;br /&gt;
    echo &amp;quot;Installing OpenLens (Windows)...&amp;quot;&lt;br /&gt;
    &amp;quot;$(/bin/ls -Art $tempdir/lens/dist/OpenLens*.exe  | tail -n 1)&amp;quot;&lt;br /&gt;
    rm -Rf &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    &lt;br /&gt;
    print_alias_message&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
install_nvm() {&lt;br /&gt;
    if [ -z &amp;quot;$NVM_DIR&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Installing NVM...&amp;quot;&lt;br /&gt;
        NVM_VERSION=$(curl -s https://api.github.com/repos/nvm-sh/nvm/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/$NVM_VERSION/install.sh | bash&lt;br /&gt;
        NVM_DIR=&amp;quot;$HOME/.nvm&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    [ -s &amp;quot;$NVM_DIR/nvm.sh&amp;quot; ] &amp;amp;&amp;amp; \. &amp;quot;$NVM_DIR/nvm.sh&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
build_openlens() {&lt;br /&gt;
    tempdir=$(mktemp -d)&lt;br /&gt;
    cd &amp;quot;$tempdir&amp;quot;&lt;br /&gt;
    if [ -z &amp;quot;$1&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;Checking GitHub API for latest tag...&amp;quot;&lt;br /&gt;
        OPENLENS_VERSION=$(curl -s https://api.github.com/repos/lensapp/lens/releases/latest | sed -En 's/  &amp;quot;tag_name&amp;quot;: &amp;quot;(.+)&amp;quot;,/\1/p')&lt;br /&gt;
    else&lt;br /&gt;
        if [[ &amp;quot;$1&amp;quot; == v* ]]; then&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;$1&amp;quot;&lt;br /&gt;
        else&lt;br /&gt;
            OPENLENS_VERSION=&amp;quot;v$1&amp;quot;&lt;br /&gt;
        fi&lt;br /&gt;
        echo &amp;quot;Using supplied tag $OPENLENS_VERSION&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
    if [ -z $OPENLENS_VERSION ]; then&lt;br /&gt;
        echo &amp;quot;Failed to get valid version tag. Aborting!&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
    fi&lt;br /&gt;
    curl -L https://github.com/lensapp/lens/archive/refs/tags/$OPENLENS_VERSION.tar.gz | tar xvz&lt;br /&gt;
    mv lens-* lens&lt;br /&gt;
    cd lens&lt;br /&gt;
    NVM_CURRENT=$(nvm current)&lt;br /&gt;
    nvm install 16&lt;br /&gt;
    nvm use 16&lt;br /&gt;
    npm install -g yarn&lt;br /&gt;
    make build&lt;br /&gt;
    nvm use &amp;quot;$NVM_CURRENT&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
print_alias_message() {&lt;br /&gt;
    if [ &amp;quot;$(type -t install_openlens)&amp;quot; != 'alias' ]; then&lt;br /&gt;
        printf &amp;quot;It is recommended to add an alias to your shell profile to run this script again.\n&amp;quot;&lt;br /&gt;
        printf &amp;quot;alias install_openlens=\&amp;quot;curl -o- https://gist.githubusercontent.com/jslay88/bf654c23eaaaed443bb8e8b41d02b2a9/raw/install_openlens.sh | bash\&amp;quot;\n\n&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
if [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Linux&amp;quot; ]]; then&lt;br /&gt;
    install_deps_posix&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_posix&lt;br /&gt;
elif [[ &amp;quot;$(uname)&amp;quot; == &amp;quot;Darwin&amp;quot; ]]; then&lt;br /&gt;
    install_deps_darwin&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_darwin&lt;br /&gt;
else&lt;br /&gt;
    install_deps_windows&lt;br /&gt;
    install_nvm&lt;br /&gt;
    build_openlens &amp;quot;$1&amp;quot;&lt;br /&gt;
    install_windows&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;Done!&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://kui.tools/ kui terminal] =&lt;br /&gt;
Kui is a terminal with visualizations, originally developed by IBM.&lt;br /&gt;
&lt;br /&gt;
Install using the convenience install script into &amp;lt;code&amp;gt;/opt/Kui-linux-x64/&amp;lt;/code&amp;gt; and symlink the &amp;lt;code&amp;gt;Kui&amp;lt;/code&amp;gt; binary to &amp;lt;code&amp;gt;/usr/local/bin/kui&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
REPO=kubernetes-sigs/kui&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/$REPO/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d); FILE=Kui-linux-x64.zip&lt;br /&gt;
curl -L https://github.com/$REPO/releases/download/$LATEST/Kui-linux-x64.zip -o $TEMPDIR/$FILE&lt;br /&gt;
sudo mkdir -p /opt/Kui-linux-x64&lt;br /&gt;
sudo unzip $TEMPDIR/$FILE -d /opt/&lt;br /&gt;
&lt;br /&gt;
# Run&lt;br /&gt;
$&amp;gt; /opt/Kui-linux-x64/Kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Kui as a [https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/ kubectl plugin]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export PATH=$PATH:/opt/Kui-linux-x64/ # make sure Kui libs are in environment PATH&lt;br /&gt;
kubectl kui get pods -A               # -&amp;gt; a pop up window will show up&lt;br /&gt;
&lt;br /&gt;
$ kubectl plugin list &lt;br /&gt;
The following compatible plugins are available:&lt;br /&gt;
/opt/Kui-linux-x64/kubectl-kui&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-200428-205600.PNG]]&lt;br /&gt;
&lt;br /&gt;
; Resources&lt;br /&gt;
* [https://github.com/IBM/kui/wiki kui/wiki] Github&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/popeye popeye] =&lt;br /&gt;
Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.&lt;br /&gt;
:[[File:ClipCapIt-200501-123645.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
REPO=derailed/popeye&lt;br /&gt;
RELEASE=popeye_Linux_x86_64.tar.gz&lt;br /&gt;
VERSION=$(curl --silent &amp;quot;https://api.github.com/repos/${REPO}/releases/latest&amp;quot; | jq -r .tag_name); echo $VERSION # latest&lt;br /&gt;
wget https://github.com/${REPO}/releases/download/${VERSION}/${RELEASE}&lt;br /&gt;
tar xf ${RELEASE} popeye --remove-files&lt;br /&gt;
sudo install popeye /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
popeye # --out html&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/derailed/k9s k9s] =&lt;br /&gt;
K9s provides a terminal UI to interact with Kubernetes clusters.&lt;br /&gt;
&lt;br /&gt;
;Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/derailed/k9s/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
wget https://github.com/derailed/k9s/releases/download/$LATEST/k9s_Linux_amd64.tar.gz&lt;br /&gt;
tar xf k9s_Linux_amd64.tar.gz --remove-files k9s&lt;br /&gt;
sudo install k9s /usr/local/bin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Usage&lt;br /&gt;
* &amp;lt;code&amp;gt;?&amp;lt;/code&amp;gt; help&lt;br /&gt;
* &amp;lt;code&amp;gt;:ns&amp;lt;/code&amp;gt; select namespace&lt;br /&gt;
* &amp;lt;code&amp;gt;:nodes&amp;lt;/code&amp;gt; show nodes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190826-152830.PNG]]&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/droctothorpe/kubecolor kubecolor] =&lt;br /&gt;
Kubecolor is a bash function that colorizes the output of &amp;lt;code&amp;gt;kubectl get events -w&amp;lt;/code&amp;gt;.&lt;br /&gt;
:[[File:ClipCapIt-190831-113158.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# This script is not working&lt;br /&gt;
git clone https://github.com/droctothorpe/kubecolor.git ~/.kubecolor&lt;br /&gt;
echo &amp;quot;source ~/.kubecolor/kubecolor.bash&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
source ~/.bash_profile # (or ~/.bashrc)&lt;br /&gt;
&lt;br /&gt;
# You can source this function instead&lt;br /&gt;
kube-events() {&lt;br /&gt;
    kubectl get events --all-namespaces --watch \&lt;br /&gt;
    -o 'go-template={{.lastTimestamp}} ^ {{.involvedObject.kind}} ^ {{.message}} ^ ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}' \&lt;br /&gt;
    | awk -F^ \&lt;br /&gt;
    -v   black=$(tput setaf 0) \&lt;br /&gt;
    -v     red=$(tput setaf 1) \&lt;br /&gt;
    -v   green=$(tput setaf 2) \&lt;br /&gt;
    -v  yellow=$(tput setaf 3) \&lt;br /&gt;
    -v    blue=$(tput setaf 4) \&lt;br /&gt;
    -v magenta=$(tput setaf 5) \&lt;br /&gt;
    -v    cyan=$(tput setaf 6) \&lt;br /&gt;
    -v   white=$(tput setaf 7) \&lt;br /&gt;
    '{ $1=blue $1; $2=green $2; $3=white $3; }  1'&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
kube-events&lt;br /&gt;
kubectl get events -A -w&lt;br /&gt;
kubectl get events --all-namespaces --watch -o 'go-template={{.lastTimestamp}} {{.involvedObject.kind}} {{.message}} ({{.involvedObject.name}}){{&amp;quot;\n&amp;quot;}}'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://argoproj.github.io/argo-rollouts/ argo-rollouts] =&lt;br /&gt;
Argo Rollouts introduces a new custom resource called a Rollout to provide additional deployment strategies such as Blue Green and Canary to Kubernetes.&lt;br /&gt;
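&lt;br /&gt;
&lt;br /&gt;
A minimal canary &amp;lt;code&amp;gt;Rollout&amp;lt;/code&amp;gt; might look like this (sketch only; the name, weights and pause durations are placeholders):&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: argoproj.io/v1alpha1&lt;br /&gt;
kind: Rollout&lt;br /&gt;
metadata:&lt;br /&gt;
  name: my-app&lt;br /&gt;
spec:&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  selector:&lt;br /&gt;
    matchLabels:&lt;br /&gt;
      app: my-app&lt;br /&gt;
  template: {}  # same pod template as a Deployment&lt;br /&gt;
  strategy:&lt;br /&gt;
    canary:&lt;br /&gt;
      steps:&lt;br /&gt;
        - setWeight: 20&lt;br /&gt;
        - pause: {duration: 60s}&lt;br /&gt;
        - setWeight: 50&lt;br /&gt;
        - pause: {}  # wait for manual promotion&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;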
&lt;br /&gt;
= &amp;lt;code&amp;gt;[https://github.com/groundcover-com/murre murre]&amp;lt;/code&amp;gt; =&lt;br /&gt;
Murre is an on-demand, scalable source of container resource metrics for K8s. It has no dependencies and requires nothing to be installed on the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
goenv install 1.18 # although 1.19 is the latest and the install completes successfully, it won't create the binary&lt;br /&gt;
go install github.com/groundcover-com/murre@latest&lt;br /&gt;
murre --sortby-cpu-util&lt;br /&gt;
murre --sortby-cpu&lt;br /&gt;
murre --pod kong-51xst&lt;br /&gt;
murre --namespace dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/amelbakry/kubernetes-scripts/blob/master/cluster-health.sh Kubernetes scripts] =&lt;br /&gt;
These scripts let you troubleshoot and check the health status of the cluster and its deployments. They gather the following information:&lt;br /&gt;
* Cluster resources&lt;br /&gt;
* Cluster Nodes status&lt;br /&gt;
* Nodes Conditions&lt;br /&gt;
* Pods per Nodes&lt;br /&gt;
* Worker Nodes Per Availability Zones&lt;br /&gt;
* Cluster Node Types&lt;br /&gt;
* Pods not in running or completed status&lt;br /&gt;
* Top Pods according to Memory Limits&lt;br /&gt;
* Top Pods according to CPU Limits&lt;br /&gt;
* Number of Pods&lt;br /&gt;
* Pods Status&lt;br /&gt;
* Max Pods restart count&lt;br /&gt;
* Readiness of Pods&lt;br /&gt;
* Pods Average Utilization&lt;br /&gt;
* Top Pods according to CPU Utilization&lt;br /&gt;
* Top Pods according to Memory Utilization&lt;br /&gt;
* Pods Distribution per Nodes&lt;br /&gt;
* Node Distribution per Availability Zone&lt;br /&gt;
* Deployments without correct resources (Memory or CPU)&lt;br /&gt;
* Deployments without Limits&lt;br /&gt;
* Deployments without Application configured in Labels&lt;br /&gt;
&lt;br /&gt;
= Multi-node clusters =&lt;br /&gt;
{{Note|[[Kubernetes/minikube]] can do this natively}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Build a multi-node cluster for development.&lt;br /&gt;
On a single machine:&lt;br /&gt;
* [https://github.com/kinvolk/kube-spawn/ kube-spawn] tool for creating a multi-node Kubernetes (&amp;gt;= 1.8) cluster on a single Linux machine&lt;br /&gt;
* [https://github.com/sttts/kubernetes-dind-cluster kubernetes-dind-cluster] Kubernetes multi-node cluster for developers of Kubernetes that launches in 36 seconds&lt;br /&gt;
* [https://kind.sigs.k8s.io/ kind] is a tool for running local Kubernetes clusters using Docker container “nodes”&lt;br /&gt;
* [https://github.com/ecomm-integration-ballerina/kubernetes-cluster Vagrant] full documentation in this [https://medium.com/@wso2tech/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98 article]&lt;br /&gt;
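&lt;br /&gt;
&lt;br /&gt;
For example, kind can create and tear down a throwaway cluster with a couple of commands (the cluster name is arbitrary):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kind create cluster --name dev           # boots a single-node cluster inside Docker&lt;br /&gt;
kubectl cluster-info --context kind-dev  # kind prefixes the kubeconfig context with &amp;quot;kind-&amp;quot;&lt;br /&gt;
kind delete cluster --name dev&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;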
&lt;br /&gt;
&lt;br /&gt;
Full cluster provisioning&lt;br /&gt;
* [https://github.com/kubernetes-sigs/kubespray kubespray] Deploy a Production Ready Kubernetes Cluster&lt;br /&gt;
* [https://github.com/kubernetes/kops kops] get a production grade Kubernetes cluster up and running&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ crictl] =&lt;br /&gt;
CLI and validation tools for the Kubelet Container Runtime Interface (CRI), used for debugging Kubernetes nodes. &amp;lt;code&amp;gt;crictl&amp;lt;/code&amp;gt; requires a Linux operating system with a CRI runtime. Note that containers created with this tool on a Kubernetes cluster will eventually be deleted by the kubelet, which does not manage them.&lt;br /&gt;
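&lt;br /&gt;
&lt;br /&gt;
Typical commands, run on a node (usually as root); the runtime endpoint shown assumes containerd:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps&lt;br /&gt;
crictl pods    # list pod sandboxes&lt;br /&gt;
crictl images  # list images&lt;br /&gt;
crictl logs CONTAINER_ID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;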
= [https://github.com/weaveworks/kubediff kubediff] - show diff of code vs what is deployed =&lt;br /&gt;
Kubediff is a tool for Kubernetes to show you the differences between your running configuration and your version controlled configuration.&lt;br /&gt;
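&lt;br /&gt;
&lt;br /&gt;
Typical usage (sketch; the manifests path is a placeholder):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubediff ./manifests/  # compare local manifests against the running cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;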
= Mozilla SOPS - secret manager =&lt;br /&gt;
* [https://github.com/mozilla/sops SOPS] (Mozilla Secrets OPerationS) is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault and PGP&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
LATEST=$(curl --silent &amp;quot;https://api.github.com/repos/getsops/sops/releases/latest&amp;quot; | jq -r .tag_name); echo $LATEST&lt;br /&gt;
TEMPDIR=$(mktemp -d)&lt;br /&gt;
curl -sL https://github.com/mozilla/sops/releases/download/${LATEST}/sops-${LATEST}.linux.amd64 -o $TEMPDIR/sops&lt;br /&gt;
sudo install $TEMPDIR/sops /usr/local/bin/sops&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
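&lt;br /&gt;
&lt;br /&gt;
Example usage with PGP (the fingerprint and file names are placeholders; the KMS backends work similarly):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sops --encrypt --pgp FINGERPRINT secret.yaml &amp;gt; secret.enc.yaml&lt;br /&gt;
sops --decrypt secret.enc.yaml&lt;br /&gt;
sops secret.enc.yaml  # open decrypted in $EDITOR, re-encrypt on save&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;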
&lt;br /&gt;
= [https://kompose.io/ Kompose] (Kubernetes + Compose) =&lt;br /&gt;
Kompose is a conversion tool for Docker Compose to Kubernetes: &amp;lt;code&amp;gt;kompose&amp;lt;/code&amp;gt; takes a Docker Compose file and translates it into Kubernetes manifests.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Linux&lt;br /&gt;
curl -L https://github.com/kubernetes/kompose/releases/download/v1.21.0/kompose-linux-amd64 -o kompose&lt;br /&gt;
sudo install ./kompose /usr/local/bin/kompose               # option 1&lt;br /&gt;
chmod +x kompose; sudo mv ./kompose /usr/local/bin/kompose  # option 2&lt;br /&gt;
&lt;br /&gt;
# Completion&lt;br /&gt;
source &amp;lt;(kompose completion bash)&lt;br /&gt;
&lt;br /&gt;
# Convert&lt;br /&gt;
kompose convert -f docker-compose-mac.yaml&lt;br /&gt;
&lt;br /&gt;
WARN Restart policy 'unless-stopped' in service mysql is not supported, convert it to 'always'&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-service.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;cluster-dir-persistentvolumeclaim.yaml&amp;quot; created&lt;br /&gt;
INFO Kubernetes file &amp;quot;mysql-deployment.yaml&amp;quot; created&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/kubernetes/kompose kompose] Github&lt;br /&gt;
&lt;br /&gt;
= [https://kubernetes.io/blog/2019/04/19/introducing-kube-iptables-tailer/ kube-iptables-tailer] - iptables dropped-packet logger =&lt;br /&gt;
Lets you view packets dropped by iptables; useful when working with Network Policies to identify pods trying to talk to disallowed destinations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project deploys the &amp;lt;tt&amp;gt;[https://github.com/box/kube-iptables-tailer/tree/master/demo kube-iptables-tailer]&amp;lt;/tt&amp;gt; DaemonSet, which watches the iptables log &amp;lt;code&amp;gt;/var/log/iptables.log&amp;lt;/code&amp;gt; on each node, mounted as a &amp;lt;code&amp;gt;hostPath&amp;lt;/code&amp;gt; volume. It filters the log for a custom prefix, set in &amp;lt;code&amp;gt;daemonset.spec.template.spec.containers.env&amp;lt;/code&amp;gt;, and surfaces matches as cluster events.&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
            env: &lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PATH&amp;quot;&lt;br /&gt;
                value: &amp;quot;/var/log/iptables.log&amp;quot;&lt;br /&gt;
              - name: &amp;quot;IPTABLES_LOG_PREFIX&amp;quot;&lt;br /&gt;
                # log prefix defined in your iptables chains&lt;br /&gt;
                value: &amp;quot;my-prefix:&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://github.com/box/kube-iptables-tailer#setup-iptables-log-prefix Set iptables Log Prefix]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ iptables -A CHAIN_NAME -j LOG --log-prefix &amp;quot;EXAMPLE_LOG_PREFIX: &amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example output, when packet dropped&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ kubectl describe pods --namespace=YOUR_NAMESPACE&lt;br /&gt;
...&lt;br /&gt;
Events:&lt;br /&gt;
  FirstSeen   LastSeen    Count   From                    Type          Reason          Message&lt;br /&gt;
  ---------   --------	  -----	  ----                    ----          ------          -------&lt;br /&gt;
  1h          5s          10      kube-iptables-tailer    Warning       PacketDrop      Packet dropped when receiving traffic from example-service-2 (IP: 22.222.22.222).&lt;br /&gt;
  3h          2m          5       kube-iptables-tailer    Warning       PacketDrop      Packet dropped when sending traffic to example-service-1 (IP: 11.111.11.111).&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
= [https://github.com/eldadru/ksniff ksniff] - pipe a pod traffic to Wireshark or Tshark =&lt;br /&gt;
A kubectl plugin that utilizes tcpdump and Wireshark to start a remote capture on any pod.&lt;br /&gt;
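&lt;br /&gt;
&lt;br /&gt;
Typical usage (the pod name and namespace are placeholders; installation via krew is the documented route):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubectl krew install sniff&lt;br /&gt;
kubectl sniff POD -n NAMESPACE     # stream the capture into local Wireshark&lt;br /&gt;
kubectl sniff POD -o capture.pcap  # write the capture to a file instead&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;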
&lt;br /&gt;
= [https://docs.flagger.app/ flagger - canary deployments] =&lt;br /&gt;
Flagger is a Kubernetes operator that automates the promotion of canary deployments using Istio, Linkerd, App Mesh, NGINX, Skipper, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.&lt;br /&gt;
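&lt;br /&gt;
&lt;br /&gt;
A minimal Canary resource might look like this (sketch only; the target name, port and analysis values are placeholders):&lt;br /&gt;
&amp;lt;source lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: flagger.app/v1beta1&lt;br /&gt;
kind: Canary&lt;br /&gt;
metadata:&lt;br /&gt;
  name: my-app&lt;br /&gt;
spec:&lt;br /&gt;
  targetRef:&lt;br /&gt;
    apiVersion: apps/v1&lt;br /&gt;
    kind: Deployment&lt;br /&gt;
    name: my-app&lt;br /&gt;
  service:&lt;br /&gt;
    port: 80&lt;br /&gt;
  analysis:&lt;br /&gt;
    interval: 1m    # how often to run the checks&lt;br /&gt;
    threshold: 5    # failed checks before rollback&lt;br /&gt;
    maxWeight: 50   # max traffic shifted to the canary&lt;br /&gt;
    stepWeight: 10  # traffic increase per step&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;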
= [https://www.kubeval.com/ Kubeval] =&lt;br /&gt;
Kubeval is used to validate one or more Kubernetes configuration files, and is often used locally as part of a development workflow as well as in CI pipelines.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeval-linux-amd64.tar.gz&lt;br /&gt;
sudo cp kubeval /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Usage&lt;br /&gt;
$&amp;gt; kubeval my-invalid-rc.yaml&lt;br /&gt;
WARN - my-invalid-rc.yaml contains an invalid ReplicationController - spec.replicas: Invalid type. Expected: integer, given: string&lt;br /&gt;
$&amp;gt; echo $?&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://github.com/yannh/kubeconform kubeconform] - improved Kubeval =&lt;br /&gt;
Kubeconform is a Kubernetes manifests validation tool.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install&lt;br /&gt;
wget https://github.com/yannh/kubeconform/releases/latest/download/kubeconform-linux-amd64.tar.gz&lt;br /&gt;
tar xf kubeconform-linux-amd64.tar.gz&lt;br /&gt;
sudo install kubeconform /usr/local/bin&lt;br /&gt;
&lt;br /&gt;
# Show version&lt;br /&gt;
kubeconform -v&lt;br /&gt;
v0.4.14&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
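&lt;br /&gt;
&lt;br /&gt;
Typical usage (the manifest paths are placeholders):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
kubeconform -summary manifests/      # validate a directory and print a summary&lt;br /&gt;
kubeconform -strict deployment.yaml  # also reject unknown fields&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;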
&lt;br /&gt;
= Observability =&lt;br /&gt;
== [https://github.com/oslabs-beta/KUR8 KUR8] - like Elastic.io EFK dashboards ==&lt;br /&gt;
{{Note|I've deployed v1.0.0 to the monitoring namespace alongside the already existing service &amp;lt;code&amp;gt;kube-prometheus-stack-prometheus:9090&amp;lt;/code&amp;gt;, but the application kept crashing}}&lt;br /&gt;
&lt;br /&gt;
= CPU Load pods =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Run one yes loop per CPU core to fully load the node&lt;br /&gt;
grep -c processor /proc/cpuinfo # count processors (equivalent to nproc)&lt;br /&gt;
yes &amp;gt; /dev/null &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://kubernetes.io/docs/reference/kubectl/overview/ kubectl overview - resources types, Namespaced, kinds] K8s docs&lt;br /&gt;
*[https://github.com/johanhaleby/kubetail kubetail] Bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &amp;quot;kubectl logs -f &amp;quot; but for multiple pods.&lt;br /&gt;
*[https://github.com/ahmetb/kubectx kubectx kubens] Kubernetes config switches for context and setting up default namespace&lt;br /&gt;
*[https://medium.com/faun/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b manages different ver kubectl] blog&lt;br /&gt;
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all kubectl] Kubectl Conventions&lt;br /&gt;
&lt;br /&gt;
Cheatsheets&lt;br /&gt;
*[https://cheatsheet.dennyzhang.com/cheatsheet-kubernetes-A4 cheatsheet-kubernetes-A4] by dennyzhang&lt;br /&gt;
&lt;br /&gt;
Other projects&lt;br /&gt;
*[https://github.com/jonmosco/kube-tmux kube-tmux] Kubernetes context and namespace status for tmux&lt;br /&gt;
*[https://github.com/jonmosco/kube-ps1 kube-ps1] Kubernetes prompt for bash and zsh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:kubernetes]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7027</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7027"/>
		<updated>2024-06-05T19:05:35Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Sync using vagrant-vbguest plugin */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each project has its own &amp;lt;code&amp;gt;Vagrantfile&amp;lt;/code&amp;gt;, a text file that Vagrant reads to set up the environment: it describes which OS to use, how much RAM to allocate, what software to install, and so on. You can keep this file under version control.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip &amp;amp;&amp;amp; sudo install vagrant /usr/local/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in Ruby.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VM images in VirtualBox, VMware or Hyper-V format, pulled from a given repository.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      # user: hashicorp, box: precise64; from the preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Remove old versions of installed boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create a Vagrant project by creating a ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    # initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below, an example of nested virtualisation: a 64-bit VM (host) runs a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and a hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt; with the directory on the host containing your Vagrantfile. This can be mounted manually from within the VM as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant # first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM; spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update to [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrant&lt;br /&gt;
 /home/peter/projects/Vagrant&lt;br /&gt;
 /home/peter/Vagrant&lt;br /&gt;
 /home/Vagrant&lt;br /&gt;
 /Vagrant&lt;br /&gt;
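The climb above can be sketched as a small shell function (a sketch of the lookup order, not Vagrant's actual implementation):&lt;br /&gt;

```bash
# Walk from a starting directory up towards / and report the first
# Vagrantfile found, mimicking Vagrant's lookup order.
find_vagrantfile() {
  dir="$1"
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/Vagrantfile" ]; then
      echo "$dir/Vagrantfile"
      return 0
    fi
    dir="$(dirname "$dir")"
  done
  return 1
}

# Example: the Vagrantfile sits two levels above the working directory.
root=$(mktemp -d)
mkdir -p "$root/projects/la"
touch "$root/Vagrantfile"
find_vagrantfile "$root/projects/la"
```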
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assigned&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 auto_config: false     #optional to disable auto-configure&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks can be accessible from outside of the host machine including Internet, are usually '''Bridged Networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during the ''vagrant up'' process.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a host OS directory into the guest OS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the guest OS. The files sync in both directions (it is a mount on the guest OS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path existing on the '''host machine'''. If relative, it is relative to the project-root folder (where the Vagrantfile exists). The 2nd argument is a full path to the mounted directory on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. In general symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                       path on the host   mount on the guest OS&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;,      &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
     # symlinks should be active in the root of the VM by default&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It's enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*If you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*Run the WSL or PowerShell terminal with elevated privileges&lt;br /&gt;
*When running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
*Use the native bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Microsoft filesystem does not support Unix-style file permissions (until WSL2 is released).&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch named vagrant-external, and press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run the Vagrantfile&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell commands defined in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
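The symlink guard in the script can be tried safely in a throwaway directory (the paths here are illustrative, not the real /var/www):&lt;br /&gt;

```bash
# Replace a directory with a symlink only when it is not a symlink yet,
# exactly like the /var/www guard in bootstrap.sh above.
tmp=$(mktemp -d)
mkdir "$tmp/vagrant" "$tmp/www"
if ! [ -L "$tmp/www" ]; then
  rm -rf "$tmp/www"
  ln -sf "$tmp/vagrant" "$tmp/www"
fi
readlink "$tmp/www"   # now points at the shared directory
```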
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
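The escaped backticks in the heredoc above mean the timestamp is expanded each time the generated file runs, not when it is written; a local reproduction (using a temp dir instead of /home/vagrant):&lt;br /&gt;

```bash
tmp=$(mktemp -d)
# Write a helper whose `date +%s` runs when the helper is executed,
# because the backticks are escaped at creation time.
echo " touch $tmp/test_\`date +%s\`.txt " > "$tmp/newfile"
chmod +x "$tmp/newfile"
sh "$tmp/newfile"   # creates e.g. test_1700000000.txt
ls "$tmp"
```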
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in the Vagrantfile&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, e.g. for a recipe named vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 Vagrant&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create the required folder structure for Puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced =&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path of box images; it can be changed via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
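The default-with-override behaviour can be expressed with standard shell parameter expansion (a sketch of the convention, not Vagrant code):&lt;br /&gt;

```bash
# Resolve the effective boxes directory: VAGRANT_HOME when set,
# otherwise the documented default ~/.vagrant.d.
boxes_dir="${VAGRANT_HOME:-$HOME/.vagrant.d}/boxes"
echo "$boxes_dir"
```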
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
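A &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is an ordinary (gzipped) tar archive, so the layout above can be reproduced with a dummy box (the files are empty placeholders, not a bootable image):&lt;br /&gt;

```bash
work=$(mktemp -d)
cd "$work"
# Create the four placeholder files a VirtualBox box carries.
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -czf dummy.box Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -tzf dummy.box   # list the archive contents, like un-tarring a real box
```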
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the changes we made to the software; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Redistribute the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file, then restore it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
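The &amp;lt;tt&amp;gt;-VAGRANTSLASH-&amp;lt;/tt&amp;gt; directory name seen above is just the box name with each &amp;lt;tt&amp;gt;/&amp;lt;/tt&amp;gt; substituted; the mapping can be reproduced in shell:&lt;br /&gt;

```bash
# Map a box name to its on-disk directory under ~/.vagrant.d/boxes.
box="box-packages/u18cli-1"
dir="${box//\//-VAGRANTSLASH-}"   # replace every '/' with -VAGRANTSLASH-
echo "$dir"
```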
&lt;br /&gt;
&lt;br /&gt;
Restore. Create or re-use a Vagrantfile using the box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot;  # the box name as added, without the .box extension&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify whether it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here the host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http  = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Plugin install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrantfile&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Manual install&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install a version matching your host version onto the virtual machine.&lt;br /&gt;
wget https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You will find more at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out what version you are running; execute on the guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the extension; you can explore the available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# you need to mount it, or extract the contents, and run the installer inside the VM.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
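The text-processing steps can be tried on canned sample output shaped like &amp;lt;code&amp;gt;vboxmanage list vms&amp;lt;/code&amp;gt; (the VM names and UUIDs below are made up), without VirtualBox installed:&lt;br /&gt;

```bash
# Two lines shaped like `vboxmanage list vms` output.
sample='"u18cli-1" {11111111-2222-3333-4444-555555555555}
"web1" {66666666-7777-8888-9999-000000000000}'
# Field 1 is the quoted VM name; sed strips the double quotes.
names=$(printf '%s\n' "$sample" | cut -d ' ' -f 1 | sed 's/"//g')
echo "$names"
```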
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, so remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*don't use symbols (here underscore) in variable names, '''don't''': &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer, and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
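The static web-node list in the heredoc above can equivalently be generated with a loop; a sketch, assuming the same 10.0.15.2x addressing scheme:&lt;br /&gt;

```shell
# emit the entries 10.0.15.21..10.0.15.29 for web1..web9,
# matching the static list appended to /etc/hosts above
for i in $(seq 1 9); do
  printf '10.0.15.2%s  web%s\n' "$i" "$i"
done
```

Redirect the loop's output into /etc/hosts in place of the heredoc if you prefer to keep the node count in one variable.&lt;br /&gt;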
&lt;br /&gt;
&lt;br /&gt;
Git Bash path - &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set the bootstrap script for a proxy or no-proxy system, then bring the environment up and provision it with Ansible&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once it is set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use this to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 1000 requests, concurrency 1; 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires three ingredients:&lt;br /&gt;
* each machine has a different hostname&lt;br /&gt;
* a way of resolving a hostname to an IP address (eg. mDNS)&lt;br /&gt;
* the VMs are connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of resolving a hostname to an IP address. We use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install the &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; daemon on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
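Once avahi-daemon and libnss-mdns are installed, resolution can be sanity-checked with &amp;lt;code&amp;gt;getent&amp;lt;/code&amp;gt;, which exercises the full nsswitch chain (files, mdns, dns). Shown here against localhost; on the VMs you would query the peer names (the &amp;lt;code&amp;gt;web1.local&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;lb.local&amp;lt;/code&amp;gt; names are assumptions from this setup):&lt;br /&gt;

```shell
# getent hosts consults /etc/nsswitch.conf, so a successful lookup
# proves the resolver chain works end to end
getent hosts localhost

# on the Vagrant machines you would check the mDNS peers, e.g.:
# getent hosts web1.local
# avahi-resolve -n lb.local
```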
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, configuring an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
The following set of commands did not work reliably&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
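A scripted version of the first troubleshooting step: check whether the wanted locale is actually compiled before re-running locale-gen (a sketch; the en_GB locale name is this page's example):&lt;br /&gt;

```shell
# locale -a lists the locales compiled on the system;
# glibc normalises the name, so match on the prefix only
want='en_GB'
if locale -a 2>/dev/null | grep -qi "^${want}"; then
  echo "locale ${want} present"
else
  echo "locale ${want} missing - run: sudo locale-gen en_GB.UTF-8"
fi
```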
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up, and &amp;lt;code&amp;gt;vagrant ssh&amp;lt;/code&amp;gt; into it.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root because you really want to stay the vagrant user. To do this you need to permit anyone to start the GUI: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config   # edit it to: allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can run the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows=&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done]python pip: awscli, boto, boto3, etc..&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on the W10 host both users, ''ubuntu'' &amp;amp; ''vagrant'', exist. Only ''vagrant'' has the insecure public key installed OOB; I am copying the ''vagrant'' user's pub key into the ''ubuntu'' user's authorized_keys&lt;br /&gt;
** on the U16.04 host the official image does not seem to come with a ''vagrant'' user, but the ''ubuntu'' user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7026</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7026"/>
		<updated>2024-06-05T19:02:39Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Sync using vagrant-vbguest plugin */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each of these projects has its own Vagrantfile. The Vagrantfile is a text file that Vagrant reads to set up your environment; it describes what OS to use, how much RAM, what software to install, and so on. You can version control this file.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:-2.2.18}&lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
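The tag-discovery step above can be exercised offline against a captured sample of the GitHub tags API response (a sketch; jq-free, using grep/cut instead, and the version numbers in the sample are made up):&lt;br /&gt;

```shell
# trimmed sample of https://api.github.com/repos/hashicorp/vagrant/tags
sample='[{"name":"v2.2.19"},{"name":"v2.2.18"}]'

# take the first tag name and strip the leading "v",
# mirroring what the jq one-liner above does
latest=$(printf '%s' "$sample" | grep -o '"name":"[^"]*"' | head -n1 | cut -d'"' -f4 | tr -d v)
echo "$latest"   # 2.2.19
```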
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in the Ruby language. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs from providers in VirtualBox, VMware or Hyper-V format, taken from a given repository.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp, box image: precise64; this is a preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# A box can be specified via URLs or local file paths; VirtualBox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    #initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ssh to the box. Below is an example of nested virtualisation: a 64bit VM (host) runs a 32bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and the hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. This can be mounted manually from within the VM as long as the shared directory is set up in the GUI. &lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant #first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64bit VirtualBox VM: spinning up a 64bit VM stops with an error that no 64bit CPU could be found. Update: [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stops it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and powers down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' is the machine name&lt;br /&gt;
                                                # in a multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
#                       &amp;lt;nameOfvm&amp;gt;  &amp;lt;snapshot-name&amp;gt;&lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree, starting in the current directory. Example lookup order:&lt;br /&gt;
 /home/peter/projects/la/Vagrantfile&lt;br /&gt;
 /home/peter/projects/Vagrantfile&lt;br /&gt;
 /home/peter/Vagrantfile&lt;br /&gt;
 /home/Vagrantfile&lt;br /&gt;
 /Vagrantfile&lt;br /&gt;
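That climb can be emulated in plain shell; a sketch that searches for a Vagrantfile from a start directory upwards (the /tmp/vg-demo tree is a throwaway example):&lt;br /&gt;

```shell
# walk up the directory tree until a Vagrantfile is found or / is reached,
# mirroring Vagrant's lookup order
find_vagrantfile() {
  d="$1"
  while :; do
    if [ -f "$d/Vagrantfile" ]; then
      printf '%s\n' "$d/Vagrantfile"
      return 0
    fi
    if [ "$d" = "/" ]; then
      return 1   # reached the root without finding one
    fi
    d=$(dirname "$d")
  done
}

# demo against a throwaway tree
mkdir -p /tmp/vg-demo/projects/la
touch /tmp/vg-demo/Vagrantfile
find_vagrantfile /tmp/vg-demo/projects/la   # prints /tmp/vg-demo/Vagrantfile
```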
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assigned&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 auto_config: false     #optional to disable auto-configure&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks can be accessible from outside of the host machine, including from the Internet; they are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match your system's interface name, otherwise Vagrant will prompt you to choose from the available interfaces during the ''vagrant up'' process.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Synced folders'''. This feature mounts a HostOS directory into the GuestOS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the GuestOS. The files sync in both directions (it is a mount on the GuestOS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''synced folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path on the '''host machine'''; if relative, it is relative to the project root folder (where the Vagrantfile exists). The 2nd argument is the full path of the mount point on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time; it is applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. In general, symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   #                        path on the host  mount on the guestOS&lt;br /&gt;
   config.vm.synced_folder &amp;quot;git-host/&amp;quot;,      &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
 &lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
     ...&lt;br /&gt;
     vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
 &lt;br /&gt;
     # symlinks should be active in root of vm by default&lt;br /&gt;
 #   vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
   end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It is enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*the WSL and Windows Vagrant versions must match&lt;br /&gt;
*the terminal you run WSL or PowerShell in must have elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt; set&lt;br /&gt;
*use native Bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because until WSL2 is released the Windows filesystem does not support Unix-style file permissions.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ mkdir -p ~/.vagrant_key&lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
$ chmod 600 ~/.vagrant_key/private_key  # ssh refuses keys with loose permissions&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch named vagrant-external and press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or run inline shell commands defined in the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run the shell script above when setting up the machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using shell provisioner, separating a script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
*&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
*&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required. Here the recipe name is vagrant_la:&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced=&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path for box images; it can be overridden with the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
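&lt;br /&gt;
The resolution above can be sketched in shell; the default applies when the variable is unset:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Sketch: Vagrant keeps its data under VAGRANT_HOME, defaulting to&lt;br /&gt;
# ~/.vagrant.d; downloaded boxes live in the boxes/ subdirectory&lt;br /&gt;
VAGRANT_HOME=&amp;quot;${VAGRANT_HOME:-$HOME/.vagrant.d}&amp;quot;&lt;br /&gt;
echo &amp;quot;$VAGRANT_HOME/boxes&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;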
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
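&lt;br /&gt;
Since a &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; is just a gzipped tarball, plain &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; can inspect it. A minimal sketch using a mock box built from empty files (the same &amp;lt;code&amp;gt;tar -tzf&amp;lt;/code&amp;gt; works on a real box):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Build a mock .box containing the four expected files, then list it&lt;br /&gt;
mkdir -p /tmp/mockbox &amp;amp;&amp;amp; cd /tmp/mockbox&lt;br /&gt;
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json&lt;br /&gt;
tar -czf /tmp/mock.box Vagrantfile box-disk1.vmdk box.ovf metadata.json&lt;br /&gt;
tar -tzf /tmp/mock.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;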
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the software changes we made; only the VirtualBox and Hyper-V providers are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Redistribute the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file, then restore it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
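&lt;br /&gt;
The directory name above follows a simple rule: each &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt; in the box name is replaced with the literal token &amp;lt;code&amp;gt;-VAGRANTSLASH-&amp;lt;/code&amp;gt;; a sketch of the mapping:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Map a box name to its on-disk directory under ~/.vagrant.d/boxes&lt;br /&gt;
box_name=&amp;quot;box-packages/u18cli-1&amp;quot;&lt;br /&gt;
box_dir=$(echo &amp;quot;$box_name&amp;quot; | sed 's|/|-VAGRANTSLASH-|g')&lt;br /&gt;
echo &amp;quot;$box_dir&amp;quot;   # box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;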
&lt;br /&gt;
&lt;br /&gt;
Restore. Create/re-use Vagrantfile using box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot;  # the box name as added, not the .box file name&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the proxyconf plugin, or use &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify it is installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; here host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http  = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Install plugin&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
&lt;br /&gt;
# Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
# Add to your Vagrant file&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Manual install&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download VBoxGuestAdditions from:&lt;br /&gt;
* https://download.virtualbox.org/virtualbox&lt;br /&gt;
* https://download.virtualbox.org/virtualbox/7.0.16/VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Install the version matching your host's VirtualBox installation onto the virtual machine.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant vbguest --do install&lt;br /&gt;
vagrant vbguest --do install --iso VBoxGuestAdditions_7.0.16.iso&lt;br /&gt;
&lt;br /&gt;
Usage: vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
More you will find at [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out what version you are running, execute on a guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the extension, you can explore [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount the ISO or extract its contents, then run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
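&lt;br /&gt;
The &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;tr&amp;lt;/code&amp;gt; tricks above can be sketched on mock data:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Strip double quotes the way the pipeline does for VM names&lt;br /&gt;
printf '&amp;quot;vm1&amp;quot;\n' | sed 's/&amp;quot;//g'                      # -&amp;gt; vm1&lt;br /&gt;
# Join lines so the next command's output lands on the same line&lt;br /&gt;
printf 'NIC 1: ssh\n' | tr --delete '\n'; echo &amp;quot; my-vm&amp;quot;   # -&amp;gt; NIC 1: ssh my-vm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;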
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, therefore you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;&lt;br /&gt;
*use underscores, not dashes, in variable names: &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot strap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Gitbash path -  &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Bring the environment up, then run the Ansible provisioning from the mgmt node&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once it is set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use curl to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires 3 ingredients:&lt;br /&gt;
* each machine must have a different hostname&lt;br /&gt;
* a way of resolving a hostname to an IP address (eg. mDNS)&lt;br /&gt;
* the VMs connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of resolving a hostname to an IP address; we use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install the &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; system on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This article describes how to set up a Vagrant VirtualBox VM with a GUI, using an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
The following commands were not working:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bringing the VM up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root; you want to stay as the vagrant user. To do this you need to permit any user to start the GUI:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config  # set: allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can replace it with the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows =&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
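Inside the Windows guest the mapped share can be attached to a drive letter; a minimal sketch for cmd.exe, where the drive letter X: is an arbitrary choice:

```shell
:: Inside the Windows guest (cmd.exe): map the Vagrant synced folder
:: X: is an arbitrary free drive letter
net use X: \\VBOXSVR\vagrant
```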
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users, ubuntu &amp;amp; vagrant, exist. Only vagrant has the insecure public key installed OOB. I am copying the vagrant user's pub key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO &lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
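The key-copy workaround above can be sketched as a provisioning step. This is a sketch under stated assumptions: the box already has a ''vagrant'' user with the insecure public key in its authorized_keys, and an ''ubuntu'' user exists as the target.

```shell
# Copy the vagrant user's authorized_keys to the ubuntu user (assumed paths)
sudo mkdir -p /home/ubuntu/.ssh
sudo cp /home/vagrant/.ssh/authorized_keys /home/ubuntu/.ssh/authorized_keys
# Fix ownership and lock down permissions as sshd expects
sudo chown -R ubuntu:ubuntu /home/ubuntu/.ssh
sudo chmod 700 /home/ubuntu/.ssh
sudo chmod 600 /home/ubuntu/.ssh/authorized_keys
```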
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7025</id>
		<title>Ubuntu Setup</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Ubuntu_Setup&amp;diff=7025"/>
		<updated>2024-06-04T07:17:36Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* gnome-shell-system-monitor-applet - cpu, memory indicators */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;If you are using Ubuntu for various Linux projects you will find that it comes pre-installed with many packages. On the other hand, installing just the minimal version seems too extreme. Therefore I started maintaining a list of unnecessary packages and a one-liner that removes them all. Please feel free to modify it for your needs.&lt;br /&gt;
&lt;br /&gt;
= Default partitioning =&lt;br /&gt;
The partitioning schema below will be applied on virtual systems as well as, e.g., on laptops:&lt;br /&gt;
:[[File:ClipCapIt-200620-131502.PNG]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#Eg. for 4G memory and 50G storage system&lt;br /&gt;
&lt;br /&gt;
/dev/mapper/ubuntu--vg-root        mount_point: /&lt;br /&gt;
/dev/mapper/ubuntu--vg-swap_1&lt;br /&gt;
/dev/sda&lt;br /&gt;
 /dev/sda1 (50G)&lt;br /&gt;
&lt;br /&gt;
LVM VG ubuntu-vg, LV root    as ext4&lt;br /&gt;
LVM VG ubuntu-vg, LV swap_1 as swap&lt;br /&gt;
&lt;br /&gt;
#Boot device:&lt;br /&gt;
/dev/mapper/ubuntu--vg-root&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As a handy practice you may create a 100G virtual disk that you thin provision. Then create 2 LVs for the root and swap partitions. Don't utilize all the space at once but extend the partitions when needed. This method eliminates adding new disks to VMs, saving time and effort.&lt;br /&gt;
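The extend-when-needed step can be sketched as below. This is a sketch, not a tested recipe: it assumes the VG/LV names from the example (''ubuntu-vg''/''root''), an ext4 filesystem, and 10G of free space in the VG.

```shell
# Grow the root LV by 10G out of the free space in the VG (assumed names)
sudo lvextend --size +10G /dev/ubuntu-vg/root
# Grow the ext4 filesystem online to fill the enlarged LV
sudo resize2fs /dev/mapper/ubuntu--vg-root
# If the underlying partition itself was grown first, refresh the PV with:
# sudo pvresize /dev/sda1
```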
&lt;br /&gt;
&lt;br /&gt;
Example LVM setup, here using a 30G Physical Volume (99.9% used), 1 Volume Group and 2 Logical Volumes (root and swap).&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo pvs&lt;br /&gt;
  PV         VG        Fmt  Attr PSize   PFree &lt;br /&gt;
  /dev/sda1  ubuntu-vg lvm2 a--  &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo vgs&lt;br /&gt;
  VG        #PV #LV #SN Attr   VSize   VFree &lt;br /&gt;
  ubuntu-vg   1   2   0 wz--n- &amp;lt;29.93g 36.00m&lt;br /&gt;
$ sudo lvs&lt;br /&gt;
  LV     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert&lt;br /&gt;
  root   ubuntu-vg -wi-ao----  28.94g                                                    &lt;br /&gt;
  swap_1 ubuntu-vg -wi-ao---- 976.00m                                                    &lt;br /&gt;
piotr@u18:~$&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
$ lsblk /dev/sda --fs&lt;br /&gt;
NAME                  FSTYPE      LABEL UUID                                   MOUNTPOINT&lt;br /&gt;
sda                                                                            &lt;br /&gt;
└─sda1                LVM2_member       rP18Kb-Q12j-wjVf-C1iV-uy42-BUJD-aWFuO7 &lt;br /&gt;
  ├─ubuntu--vg-root   ext4              fad04a3b-5fa3-4a03-bbd6-24a93cda1eb3   /&lt;br /&gt;
  └─ubuntu--vg-swap_1 swap              47cd084b-89b0-4cd5-bdb8-367238842ba1   [SWAP]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= List of unnecessary packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove libreoffice-* #Remove LibreOffice&lt;br /&gt;
sudo apt-get remove unity-lens-* #This package contains photos scopes which allow Unity to search for local and online photos.&lt;br /&gt;
sudo apt-get remove shotwell* #Photo organizer&lt;br /&gt;
sudo apt-get remove simple-scan #Scanner software&lt;br /&gt;
sudo apt-get remove empathy* #Internet messaging ~13M&lt;br /&gt;
sudo apt-get remove thunderbird* #Email client ~61M&lt;br /&gt;
sudo apt-get remove unity-scope-gdrive #Google Drive scope for Unity ~116KB&lt;br /&gt;
sudo apt-get remove cheese* #Cheese Webcam Booth - webcam software&lt;br /&gt;
sudo apt-get remove brasero* #Brasero Disc Burner ~6.5MB&lt;br /&gt;
sudo apt-get remove gnome-bluetooth #Package to manipulate bluetooth devices using the Gnome desktop ~2MB&lt;br /&gt;
sudo apt-get remove gnome-orca #Orca Screen Reader - provides access to graphical desktop environments via synthesised speech and/or refreshable braille&lt;br /&gt;
sudo apt-get remove unity-webapps-common #Amazon Unity WebApp integration scripts ~133KB&lt;br /&gt;
sudo apt-get remove ibus-pinyin #IBus Bopomofo Preferences - ibus-pinyin is an IBus-based IM engine for Chinese ~1.4MB&lt;br /&gt;
sudo apt-get remove printer-driver-foo2zjs* #Reactivate HP LaserJet 1018/1020 after reloading paper ~3.2MB&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Remove unnecessary packages - one liner =&lt;br /&gt;
;Ubuntu 12, 14, 16&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get remove libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* unity-scope-gdrive cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca unity-webapps-common ibus-pinyin printer-driver-foo2zjs*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 18. It's recommended to choose ''Minimal Install'', so most of the packages below won't get installed.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo apt-get purge libreoffice-* unity-lens-* shotwell* simple-scan empathy* thunderbird* cheese* \&lt;br /&gt;
brasero* gnome-bluetooth gnome-orca ibus-pinyin printer-driver-foo2zjs* xul-ext-ubufox speech-dispatcher* \&lt;br /&gt;
rhythmbox* printer-driver-* mythes-en-us mobile-broadband-provider-inf* \&lt;br /&gt;
evolution-data-server* espeak-ng-data:amd64 bluez* ubuntu-web-launchers \&lt;br /&gt;
transmission-*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get purge xul-ext-ubufox                           # Canonical FF customizations for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-mahjongg gnome-mines gnome-sudoku # games, works for u14,16,18,20&lt;br /&gt;
sudo apt-get remove gnome-video-effects gstreamer1.0-* &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; XTREME&lt;br /&gt;
Uninstall the Ubuntu software update notifier&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get remove update-notifier&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Uninstall locales - unused languages etc =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install localepurge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Set apt-get to not install recommended and suggested packages =&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo bash -c 'cat &amp;gt; /etc/apt/apt.conf.d/01no-recommend &amp;lt;&amp;lt; EOF&lt;br /&gt;
APT::Install-Recommends &amp;quot;0&amp;quot;;&lt;br /&gt;
APT::Install-Suggests &amp;quot;0&amp;quot;;&lt;br /&gt;
EOF'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see if apt reads this, enter this in command line (as root or regular user):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
apt-config dump | grep -e Recommends -e Suggests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Install necessary packages =&lt;br /&gt;
&lt;br /&gt;
Adobe Flash Player&lt;br /&gt;
 sudo apt-get install flashplugin-installer&lt;br /&gt;
&lt;br /&gt;
Java JRE&lt;br /&gt;
This will install the default Java version for your distro plus the IcedTea plugin for using Java in Firefox&lt;br /&gt;
 sudo apt-get install default-jre icedtea-plugin&lt;br /&gt;
&lt;br /&gt;
Unity Settings&lt;br /&gt;
 sudo apt-get install unity-control-center&lt;br /&gt;
&lt;br /&gt;
Opera&lt;br /&gt;
&lt;br /&gt;
Add Opera repository &amp;lt;code&amp;gt;'''deb &amp;lt;nowiki&amp;gt;http://deb.opera.com/opera/&amp;lt;/nowiki&amp;gt; stable non-free'''&amp;lt;/code&amp;gt; to the apt-get source list in &amp;lt;code&amp;gt;/etc/apt/sources.list&amp;lt;/code&amp;gt;. Then import a public PGP repository key.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;deb http://deb.opera.com/opera/ stable non-free&amp;quot; | sudo tee -a /etc/apt/sources.list&lt;br /&gt;
wget -qO - http://deb.opera.com/archive.key | sudo apt-key add -&lt;br /&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install opera&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Silverlight&lt;br /&gt;
&lt;br /&gt;
Pipelight has been released and we can use it for Silverlight as the best alternative to Moonlight.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-add-repository ppa:ehoover/compholio&lt;br /&gt;
sudo apt-add-repository ppa:mqchael/pipelight&lt;br /&gt;
sudo apt-get update&lt;br /&gt;
sudo apt-get install pipelight&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= GUI tools =&lt;br /&gt;
* [https://github.com/hluk/CopyQ/releases copyQ] clipboard manager&lt;br /&gt;
* VisualVM&lt;br /&gt;
&lt;br /&gt;
= Customise Ubuntu =&lt;br /&gt;
==Fix Ubuntu Unity Dash Search for Applications and Files==&lt;br /&gt;
 sudo apt-get install unity-lens-files unity-lens-applications #log out and log back in required&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;lt;17.10 missing Control Center==&lt;br /&gt;
 sudo apt-get install unity-control-center --no-install-recommends&lt;br /&gt;
&lt;br /&gt;
==Fix Ubuntu &amp;gt;18.04 missing System Settings==&lt;br /&gt;
 sudo apt install gnome-control-center&lt;br /&gt;
&lt;br /&gt;
==Remove background wallpaper ==&lt;br /&gt;
Tested on Ubuntu 14,16,18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.background active true&lt;br /&gt;
gsettings set org.gnome.desktop.background draw-background false        #disable &lt;br /&gt;
gsettings set org.gnome.desktop.background primary-color &amp;quot;#000000&amp;quot;      #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background secondary-color &amp;quot;#000000&amp;quot;    #set to black&lt;br /&gt;
gsettings set org.gnome.desktop.background color-shading-type &amp;quot;solid&amp;quot;   #set solid colour&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///dev/null #remove wallpaper, not perfect but nothing worked in U15.10&lt;br /&gt;
gsettings set com.canonical.unity-greeter draw-user-backgrounds false   #disable not worked&lt;br /&gt;
&lt;br /&gt;
# Reset background picture to origin, U15.10&lt;br /&gt;
gsettings set org.gnome.desktop.background picture-uri file:///usr/share/backgrounds/warty-final-ubuntu.png &lt;br /&gt;
&lt;br /&gt;
# Sets Unity greeter background, &amp;lt;17.04&lt;br /&gt;
gsettings set com.canonical.unity-greeter background /usr/share/backgrounds/warty-final-ubuntu.png&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Disable screen lock out==&lt;br /&gt;
&amp;lt;code&amp;gt;dconf&amp;lt;/code&amp;gt; is a legacy tool to configure &amp;lt;tt&amp;gt;gnome&amp;lt;/tt&amp;gt;; nowadays the more modern way is to use &amp;lt;code&amp;gt;gsettings&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/idle-activation-enabled false  #gnome&lt;br /&gt;
dconf write /org/gnome/desktop/screensaver/lock-enabled            false&lt;br /&gt;
&lt;br /&gt;
# Unity - Ubuntu 14.04, 16.04&lt;br /&gt;
gsettings set org.gnome.desktop.session     idle-delay   0      #disable the screen blackout:(0 to disable)&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false  #disable the screen lock&lt;br /&gt;
&lt;br /&gt;
# VirtualBox &amp;gt; Ubuntu 18.04 Disabling Xserver screen timeouts&lt;br /&gt;
xset s off     # Xserver s parameter sets screensaver to off&lt;br /&gt;
xset s noblank # prevent the display from blanking &lt;br /&gt;
xset -dpms     # prevent the monitor's DPMS energy saver from kicking in&lt;br /&gt;
&lt;br /&gt;
# Gnome - Ubuntu 18.04 LTS, Settings &amp;gt; Power &amp;gt; Blank screen &amp;gt; set to: Never&lt;br /&gt;
gsettings get org.gnome.desktop.lockdown    disable-lock-screen      # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.lockdown    disable-lock-screen true # set disabled&lt;br /&gt;
gsettings get org.gnome.desktop.screensaver lock-enabled             # verify status&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver lock-enabled false       # set disabled&lt;br /&gt;
dconf write  /org/gnome/desktop/screensaver/lock-enabled false       # set disabled using dconf&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false # some say it's last resort :)&lt;br /&gt;
&lt;br /&gt;
# Power management&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active true  #set gnome to be the default power management run&lt;br /&gt;
gsettings set org.gnome.settings-daemon.plugins.power active false #turn off power management&lt;br /&gt;
&lt;br /&gt;
# last resort, as there was a bug in Ubuntu 11.10 with DPMS&lt;br /&gt;
gsettings set org.gnome.desktop.screensaver idle-activation-enabled false&lt;br /&gt;
gsettings set org.gnome.desktop.session idle-delay 2400&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Verify by navigating in &amp;lt;tt&amp;gt;dconf-editor&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/org/gnome/desktop/screensaver/&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Change number of workspaces==&lt;br /&gt;
To get the current values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/hsize&lt;br /&gt;
dconf read /org/compiz/profiles/unity/plugins/core/vsize&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To set new values:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
dconf write /org/compiz/profiles/unity/plugins/core/hsize 2&lt;br /&gt;
# or&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ hsize 4&lt;br /&gt;
gsettings set org.compiz.core:/org/compiz/profiles/unity/plugins/core/ vsize 4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cleanup motd messages ==&lt;br /&gt;
At login, Ubuntu displays a number of standard messages that take up terminal space, potentially losing the context of previous operations. &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-134-generic x86_64)&lt;br /&gt;
&lt;br /&gt;
 * Documentation:  https://help.ubuntu.com&lt;br /&gt;
 * Management:     https://landscape.canonical.com&lt;br /&gt;
 * Support:        https://ubuntu.com/advantage&lt;br /&gt;
&lt;br /&gt;
  Get cloud support with Ubuntu Advantage Cloud Guest:&lt;br /&gt;
    http://www.ubuntu.com/business/services/cloud&lt;br /&gt;
&lt;br /&gt;
1 package can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
New release '18.04.1 LTS' available.&lt;br /&gt;
Run 'do-release-upgrade' to upgrade to it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Fri Aug 31 12:11:28 2018 from 10.0.2.2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is managed by files in &amp;lt;code&amp;gt;/etc/update-motd.d/&amp;lt;/code&amp;gt;, so deleting them will remove the clutter on the screen&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls /etc/update-motd.d/&lt;br /&gt;
00-header             51-cloudguest         91-release-upgrade    98-fsck-at-reboot     &lt;br /&gt;
10-help-text          90-updates-available  97-overlayroot        98-reboot-required &lt;br /&gt;
&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1022-azure x86_64)&lt;br /&gt;
# Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
sudo rm /etc/update-motd.d/{10-help-text,50-landscape-sysinfo,50-motd-news,51-cloudguest,80-livepatch,95-hwe-eol}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This cuts it down to the message below (Ubuntu 18.04 in AWS)&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1021-aws x86_64)&lt;br /&gt;
&lt;br /&gt;
0 packages can be updated.&lt;br /&gt;
0 updates are security updates.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Last login: Thu Jan 31 17:09:38 2019 from 10.10.11.11&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Useful setups =&lt;br /&gt;
== Call screen saver from a terminal to blank all screens ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# tested on Ubuntu 18.04 with Gnome&lt;br /&gt;
sudo apt-get install gnome-screensaver&lt;br /&gt;
gnome-screensaver-command -a #controls GNOME screensaver, -a activate (blank the screen)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create application launcher ==&lt;br /&gt;
;Ubuntu 18.04&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install the GNOME-panel toolset&lt;br /&gt;
sudo apt-get install --no-install-recommends gnome-panel&lt;br /&gt;
&lt;br /&gt;
# Every user launcher&lt;br /&gt;
sudo gnome-desktop-item-edit /usr/share/applications/VisualVM.desktop --create-new&lt;br /&gt;
&lt;br /&gt;
# Local user only, the filename by default is Name-of-application.desktop&lt;br /&gt;
gnome-desktop-item-edit ~/.local/share/applications --create-new &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[[File:ClipCapIt-190807-080016.PNG]]&lt;br /&gt;
&lt;br /&gt;
;Ubuntu 19.10, 20.04&lt;br /&gt;
In the above releases &amp;lt;code&amp;gt;gnome-desktop-item-edit&amp;lt;/code&amp;gt; has been removed from the &amp;lt;code&amp;gt;gnome-panel&amp;lt;/code&amp;gt; package; as an alternative, &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files can be created manually.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /usr/share/applications/APPNAME.desktop&lt;br /&gt;
[Desktop Entry]&lt;br /&gt;
Name=&amp;lt;NAME OF THE APPLICATION&amp;gt;&lt;br /&gt;
Comment=&amp;lt;A SHORT DESCRIPTION&amp;gt;&lt;br /&gt;
Exec=&amp;lt;COMMAND-OR-FULL-PATH-TO-LAUNCH-THE-APPLICATION&amp;gt;&lt;br /&gt;
Type=Application&lt;br /&gt;
Terminal=false&lt;br /&gt;
Icon=&amp;lt;ICON NAME OR PATH TO ICON&amp;gt;&lt;br /&gt;
NoDisplay=false&lt;br /&gt;
Keywords=&amp;lt;eg. sql&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally, you may need to right-click the file and select 'Allow Launching', in addition to setting executable permissions. Usual locations of &amp;lt;code&amp;gt;.desktop&amp;lt;/code&amp;gt; files are:&lt;br /&gt;
* &amp;lt;code&amp;gt;/usr/share/applications/&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/var/lib/snapd/desktop/applications/&amp;lt;/code&amp;gt; for snap applications&lt;br /&gt;
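The permission and 'Allow Launching' steps can also be done from a terminal; a sketch, where APPNAME.desktop is a hypothetical filename and the GNOME `metadata::trusted` attribute is assumed to back the 'Allow Launching' toggle:

```shell
# Mark a local launcher executable (APPNAME.desktop is a hypothetical filename)
chmod +x ~/.local/share/applications/APPNAME.desktop
# Set the GNOME 'Allow Launching' flag on the file
gio set ~/.local/share/applications/APPNAME.desktop metadata::trusted true
```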
&lt;br /&gt;
== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet gnome-shell-system-monitor-applet] - cpu, memory indicators ==&lt;br /&gt;
System information such as memory usage, cpu usage, network rates and more can be displayed in the notification area in GNOME Shell.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
System-monitor extensions:&lt;br /&gt;
* [https://extensions.gnome.org/extension/120/system-monitor/ system-monitor] by paradoxxxzero on [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet github] supports Gnome-shell up to v40. It seems like an abandoned project.&lt;br /&gt;
* [https://extensions.gnome.org/extension/3010/system-monitor-next/ system-monitor-next] by mgalgs on [https://github.com/mgalgs/gnome-shell-system-monitor-applet github] supports Gnome-shell v40+; it's a fork of the above.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All extensions:&lt;br /&gt;
* https://extensions.gnome.org&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Note|Firefox is currently packaged as a snap. One of the issues with this is that it cannot work with the Gnome Extensions website.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on Ubuntu 24.04 (June 2024)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Ubuntu 20/22/24&lt;br /&gt;
gnome-shell --version                                 # GNOME Shell 46.0 as of Ubuntu 24.04&lt;br /&gt;
sudo apt install gnome-shell-extensions               # Ubuntu 20.04 already has this package, 24.04 needs installing it&lt;br /&gt;
sudo apt install gnome-shell-extension-manager        # Ubuntu 22|24.04 (as Firefox is installed as snap) on 24.04 it's v0.5.0&lt;br /&gt;
&lt;br /&gt;
# Open the `Extensions` app and turn on &amp;quot;Use Extensions&amp;quot;. # Already turned on on Ubuntu 24.04&lt;br /&gt;
# Open Browse tab &amp;gt; search for 'system-monitor-next'  # cpu/mem/net indicators will appear in the system tray&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Additional steps for Ubuntu &amp;lt; 24.04&lt;br /&gt;
sudo apt install gnome-tweaks                         # GUI to manage gnome-extensions&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
sudo apt install gnome-shell-extension-system-monitor # requires log out afterwards&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Download the extension from&lt;br /&gt;
## https://extensions.gnome.org/extension/120/system-monitor/&lt;br /&gt;
&lt;br /&gt;
# Never worked out how to use this direct download and install via 'gnome-extensions install &amp;lt;extension_name&amp;gt;'&lt;br /&gt;
## wget https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/archive/v38.zip&lt;br /&gt;
## gnome-extensions install &amp;lt;system-monitor@paradoxxx.zero.gmail.com&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Enable extension using cli&lt;br /&gt;
gnome-extensions enable system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
gnome-extensions list --user&lt;br /&gt;
clipboard-indicator@tudmotu.com&lt;br /&gt;
system-monitor-next@paradoxxx.zero.gmail.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
:[[File:ClipCapIt-210105-084527.PNG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet/issues/737#issuecomment-1230654455 Ubuntu 22.04 workaround for the OUTDATED extension] ===&lt;br /&gt;
{{Note|Workaround still needed in August 2022}}&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt install gir1.2-gtop-2.0 gir1.2-nm-1.0 gir1.2-clutter-1.0 gnome-system-monitor&lt;br /&gt;
git clone https://github.com/paradoxxxzero/gnome-shell-system-monitor-applet.git&lt;br /&gt;
cd gnome-shell-system-monitor-applet # commit b359d88 verified&lt;br /&gt;
vi system-monitor@paradoxxx.zero.gmail.com/metadata.json &lt;br /&gt;
# | change &amp;quot;version&amp;quot;: -1 to &amp;quot;version&amp;quot;: 42&lt;br /&gt;
make install&lt;br /&gt;
# log out and back in (required)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Snapd - Chromium =&lt;br /&gt;
Recently, in U19+, Chromium gets installed as a snapd package. This is a confined installation that has access to only certain directories. It happens that when working with AWS we need access to the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder to get the ec2 machine password. This folder is denied, but we can bind mount the &amp;lt;code&amp;gt;~/.ssh&amp;lt;/code&amp;gt; folder into the snap container directory:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ snap list chromium &lt;br /&gt;
Name      Version        Rev   Tracking       Publisher   Notes&lt;br /&gt;
chromium  86.0.4240.111  1373  latest/stable  canonical✓  -&lt;br /&gt;
&lt;br /&gt;
# cd to the chromium $HOME dir&lt;br /&gt;
mkdir ~/snap/chromium/current/.ssh&lt;br /&gt;
sudo mount --bind ~/.ssh/ ~/snap/chromium/current/.ssh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Screen shooting =&lt;br /&gt;
In Ubuntu 20.04 Shutter is not part of the default repositories. It can be added via a PPA:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo add-apt-repository -y ppa:linuxuprising/shutter&lt;br /&gt;
sudo apt-get install shutter&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Audio - [https://rastating.github.io/setting-default-audio-device-in-ubuntu-18-04/ set defaults] =&lt;br /&gt;
To preserve settings using a GUI you can install [https://freedesktop.org/software/pulseaudio/pavucontrol/ PulseAudio Volume Control] &amp;lt;code&amp;gt;pavucontrol&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# install&lt;br /&gt;
sudo apt install pavucontrol # Ubuntu 20.04&lt;br /&gt;
# run&lt;br /&gt;
pavucontrol&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default output/input device. In Ubuntu, PulseAudio is used to control audio devices. It contains the following configuration files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/etc/pulse/default.pa # system wide&lt;br /&gt;
~/.config/pulse       # user configuration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set defaults&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List devices: modules, sinks, sources, sink-inputs, source-outputs, clients, samples, cards&lt;br /&gt;
# sinks - outputs, sink-inputs, sources - all input/output including RUNNING and SUSPENDED devices&lt;br /&gt;
$ pactl list short sources | column -t&lt;br /&gt;
5   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_5__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
6   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_4__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
7   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_3__sink.monitor  module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
8   alsa_output.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__sink.monitor    module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
9   alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp__source           module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
10  alsa_input.pci-0000_00_1f.3-platform-skl_hda_dsp_generic.HiFi__hw_sofhdadsp_6__source         module-alsa-card.c  s16le  4ch  48000Hz  SUSPENDED&lt;br /&gt;
15  alsa_output.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.analog-stereo.monitor     module-alsa-card.c  s16le  2ch  48000Hz  SUSPENDED&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  SUSPENDED&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  SUSPENDED&lt;br /&gt;
20  alsa_input.usb-DisplayLink_Dell_Universal_Dock_D6000_1806021690-02.iec958-stereo              module-alsa-card.c  s16le  2ch  48000Hz  RUNNING&lt;br /&gt;
&lt;br /&gt;
# Set default output device. Tab autocompletion should work (U20.04)&lt;br /&gt;
pactl set-default-sink alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
# Set default input device&lt;br /&gt;
pactl set-default-source alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Test: play some audio, then run the command below. RUNNING or IDLE means the device is in use&lt;br /&gt;
pactl list short sources | column -t | grep -e RUNNING -e IDLE&lt;br /&gt;
17  alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output.monitor                   module-alsa-card.c  s16le  1ch  48000Hz  IDLE&lt;br /&gt;
19  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback                                  module-alsa-card.c  s16le  1ch  16000Hz  RUNNING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Make it permanent by setting default device in PulseAudio system configuration file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Output device&lt;br /&gt;
OUTPUT_DEVICE=alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-sink\) output/\1 ${OUTPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa # remove '-i' to test before apply&lt;br /&gt;
# Input device&lt;br /&gt;
INPUT_DEVICE=alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
sudo sed -i &amp;quot;s/#\(set-default-source\) input/\1 ${INPUT_DEVICE}/g&amp;quot; /etc/pulse/default.pa&lt;br /&gt;
&lt;br /&gt;
vi /etc/pulse/default.pa # make sure lines below are in place&lt;br /&gt;
### Make some devices default&lt;br /&gt;
set-default-sink   alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
set-default-source  alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&lt;br /&gt;
# Delete the local user profile and reboot; after boot the new defaults should be set&lt;br /&gt;
rm -r ~/.config/pulse&lt;br /&gt;
&lt;br /&gt;
# After reboot, defaults should be set&lt;br /&gt;
cat ~/.config/pulse/*default*&lt;br /&gt;
alsa_output.usb-Plantronics_Plantronics_DA40-00.multichannel-output&lt;br /&gt;
alsa_input.usb-Plantronics_Plantronics_DA40-00.mono-fallback&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
PulseAudio cli&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pacmd&lt;br /&gt;
&amp;gt;&amp;gt;&amp;gt; help # lists all available commands&lt;br /&gt;
&lt;br /&gt;
pulseaudio --check # Check if any pulseaudio instance is running. It normally prints no output, just exit code. 0 means running&lt;br /&gt;
pulseaudio --kill  # kill, then --start&lt;br /&gt;
pulseaudio -D      # start pulseaudio as a daemon&lt;br /&gt;
# | using /etc/pulse/daemon.conf&lt;br /&gt;
&lt;br /&gt;
# Pulseaudio is a user service&lt;br /&gt;
systemctl --user restart pulseaudio.service&lt;br /&gt;
systemctl --user restart pulseaudio.socket&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have a Dell D6000 port replicator that randomly disconnects and reconnects, which makes PulseAudio switch audio to the newly connected device, i.e. the dock itself. As a workaround, commenting out the lines below stops this behaviour.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi /etc/pulse/default.pa&lt;br /&gt;
### Use hot-plugged devices like Bluetooth or USB automatically (LP: #1702794)&lt;br /&gt;
# .ifexists module-switch-on-connect.so&lt;br /&gt;
# load-module module-switch-on-connect&lt;br /&gt;
# .endif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Input devices =&lt;br /&gt;
The motivation is to enable horizontal scrolling in Ubuntu 20.04 using a Perixx Gaming Mouse MX-2000&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
xinput list&lt;br /&gt;
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]&lt;br /&gt;
⎜   ↳ Virtual core XTEST pointer              	id=4	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ Holtek USB Gaming Mouse                 	id=11	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Mouse             	id=14	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ SYNA8007:00 06CB:CD8C Touchpad          	id=15	[slave  pointer  (2)]&lt;br /&gt;
⎜   ↳ TPPS/2 Elan TrackPoint                  	id=19	[slave  pointer  (2)]&lt;br /&gt;
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]&lt;br /&gt;
    ↳ Virtual core XTEST keyboard             	id=5	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Power Button                            	id=6	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Video Bus                               	id=7	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Sleep Button                            	id=8	[slave  keyboard (3)]&lt;br /&gt;
    ↳ CHICONY HP Basic USB Keyboard           	id=9	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=10	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated C         	id=12	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Integrated Camera: Integrated I         	id=13	[slave  keyboard (3)]&lt;br /&gt;
    ↳ sof-hda-dsp Headset Jack                	id=16	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Intel HID events                        	id=17	[slave  keyboard (3)]&lt;br /&gt;
    ↳ AT Translated Set 2 keyboard            	id=18	[slave  keyboard (3)]&lt;br /&gt;
    ↳ ThinkPad Extra Buttons                  	id=20	[slave  keyboard (3)]&lt;br /&gt;
    ↳ Holtek USB Gaming Mouse                 	id=21	[slave  keyboard (3)]&lt;br /&gt;
&lt;br /&gt;
# test mouse aka Virtual core pointer&lt;br /&gt;
xinput test 11&lt;br /&gt;
motion a[0]=2023  # &amp;lt;- cursor moving&lt;br /&gt;
motion a[0]=2024 a[1]=1411 &lt;br /&gt;
motion a[3]=19545 # &amp;lt;- scroll down &lt;br /&gt;
button press   5 &lt;br /&gt;
button release 5 &lt;br /&gt;
&lt;br /&gt;
# test 'virtual core keyboard' aka additional programmable buttons&lt;br /&gt;
## '10' - this virtual keyboard for all buttons except the scrolling wheel&lt;br /&gt;
xinput test 10&lt;br /&gt;
key press   37&lt;br /&gt;
key press   38&lt;br /&gt;
&lt;br /&gt;
## '21' - this is scrolling wheel buttons left/right, not scrolling itself&lt;br /&gt;
xinput test 21&lt;br /&gt;
key press   248 &lt;br /&gt;
key release 248 &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List the properties of a device. We want to see the 'horizontal scrolling wheel buttons'&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ xinput list-props  21&lt;br /&gt;
Device 'Holtek USB Gaming Mouse':&lt;br /&gt;
	Device Enabled (169):	1&lt;br /&gt;
	Coordinate Transformation Matrix (171):	1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000&lt;br /&gt;
	libinput Send Events Modes Available (291):	1, 0&lt;br /&gt;
	libinput Send Events Mode Enabled (292):	0, 0&lt;br /&gt;
	libinput Send Events Mode Enabled Default (293):	0, 0&lt;br /&gt;
	Device Node (294):	&amp;quot;/dev/input/event10&amp;quot;&lt;br /&gt;
	Device Product ID (295):	1241, 41063&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
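&lt;br /&gt;
If the horizontal wheel events show up as pointer button events, one possible (untested on this exact mouse) approach is to inspect and remap the logical button map with &amp;lt;code&amp;gt;xinput&amp;lt;/code&amp;gt;; device id ''11'' and buttons 6/7 (the conventional horizontal scroll left/right buttons) are taken from the listing above and may differ on your system&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Show the current logical button mapping of the pointer device (id 11 above)&lt;br /&gt;
xinput get-button-map 11&lt;br /&gt;
&lt;br /&gt;
# Example: swap buttons 6 and 7 to reverse the horizontal scroll direction&lt;br /&gt;
xinput set-button-map 11 1 2 3 4 5 7 6&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;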
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
[[Category:linux]]&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7024</id>
		<title>Docker</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=Docker&amp;diff=7024"/>
		<updated>2024-05-31T22:29:10Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Add a user to docker group */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Containers are taking over the world&lt;br /&gt;
&lt;br /&gt;
= [https://docs.docker.com/install/linux/docker-ce/ubuntu/ Installation] =&lt;br /&gt;
General procedure:&lt;br /&gt;
# Make sure you don't have Docker already installed from your distribution's package manager&lt;br /&gt;
# The /var/lib/docker directory may be left over from a previous installation&lt;br /&gt;
&lt;br /&gt;
To install the latest version of Docker with curl:&lt;br /&gt;
&amp;lt;source  lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -sSL https://get.docker.com/ | sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== CentOS ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo yum install bash-completion bash-completion-extras #optional, requires you log out&lt;br /&gt;
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 #utils&lt;br /&gt;
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo #docker-ee.repo for EE edition&lt;br /&gt;
                      # --enable docker-ce-{edge|test} #for beta releases&lt;br /&gt;
sudo yum update&lt;br /&gt;
sudo yum clean all #not sure why this command is here&lt;br /&gt;
sudo yum install docker-ce&lt;br /&gt;
#old: sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos&lt;br /&gt;
sudo systemctl enable docker &amp;amp;&amp;amp; sudo systemctl start docker &amp;amp;&amp;amp; sudo systemctl status docker&lt;br /&gt;
yum-config-manager --disable jenkins # example: disable a repo (here jenkins) to prevent accidental updates&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Ubuntu 16.04, 18.04, 20.04 ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Optional, clear out config files&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
sudo rm /etc/systemd/system/docker.service&lt;br /&gt;
sudo rm /etc/default/docker #environment file&lt;br /&gt;
&lt;br /&gt;
# The Docker package is now called 'docker-ce'&lt;br /&gt;
sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce  # start fresh&lt;br /&gt;
sudo apt-get -yq install apt-transport-https ca-certificates curl gnupg-agent software-properties-common # apt over HTTPs&lt;br /&gt;
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - # Docker official GPG key&lt;br /&gt;
sudo apt-key fingerprint 0EBFCD88 #verify&lt;br /&gt;
&lt;br /&gt;
#add the repository&lt;br /&gt;
sudo add-apt-repository &amp;quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&amp;quot; # or {edge|test}&lt;br /&gt;
sudo apt-get update # optional&lt;br /&gt;
&lt;br /&gt;
# Option 1 - install latest&lt;br /&gt;
sudo apt-get install docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&lt;br /&gt;
# Option 2 - install fixed version&lt;br /&gt;
sudo apt-cache madison docker-ce # display available versions&lt;br /&gt;
sudo apt-get   install docker-ce=&amp;lt;VERSION_STRING&amp;gt;          docker-ce-cli=&amp;lt;VERSION_STRING&amp;gt;          containerd.io&lt;br /&gt;
sudo apt-get   install docker-ce=18.09.0~3-0~ubuntu-bionic docker-ce-cli=18.09.0~3-0~ubuntu-bionic containerd.io&lt;br /&gt;
sudo apt-mark  hold    docker-ce docker-ce-cli containerd.io&lt;br /&gt;
sudo apt-mark  showhold # show packages whose upgrades have been put on hold&lt;br /&gt;
&lt;br /&gt;
# Unhold&lt;br /&gt;
sudo apt-mark unhold   docker-ce docker-ce-cli containerd.io&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://docs.docker.com/engine/release-notes/ Newer versions] (&amp;gt;18.09.0) of Docker come with 3 packages:&lt;br /&gt;
* &amp;lt;code&amp;gt;containerd.io&amp;lt;/code&amp;gt; - daemon that manages the container lifecycle via an OCI runtime (runc), essentially decouples Docker from the OS, also provides container services for non-Docker container managers&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce&amp;lt;/code&amp;gt; - Docker daemon, this is the part that does all the management work, requires the other two on Linux&lt;br /&gt;
* &amp;lt;code&amp;gt;docker-ce-cli&amp;lt;/code&amp;gt; - CLI tools to control the daemon, you can install them on their own if you want to control a remote Docker daemon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example of how to run [[Jenkins CI|Jenkins docker image]]&lt;br /&gt;
&lt;br /&gt;
== Add a user to docker group ==&lt;br /&gt;
Add your user to the &amp;lt;tt&amp;gt;docker&amp;lt;/tt&amp;gt; group to be able to run docker commands without ''sudo'', as the &amp;lt;code&amp;gt;docker.socket&amp;lt;/code&amp;gt; is owned by the &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt; group.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo usermod -aG docker $(whoami)&lt;br /&gt;
&lt;br /&gt;
# log in to the new docker group (to avoid having to log out / log in again)&lt;br /&gt;
newgrp docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reason&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[root@piotr]$ ls -al /var/run/docker.sock&lt;br /&gt;
srw-rw----. 1 root docker 7 Jan 09:00 docker.sock&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
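&lt;br /&gt;
After running &amp;lt;code&amp;gt;newgrp docker&amp;lt;/code&amp;gt; (or re-logging), a quick sanity check that the group change took effect; &amp;lt;code&amp;gt;hello-world&amp;lt;/code&amp;gt; is just an example image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Membership check: prints the message only if 'docker' is in your group list&lt;br /&gt;
id -nG | grep -qw docker &amp;amp;&amp;amp; echo &amp;quot;docker group active&amp;quot;&lt;br /&gt;
docker run --rm hello-world # should now work without sudo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;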
&lt;br /&gt;
= HTTP proxy =&lt;br /&gt;
Configure ''docker'' if you run behind a proxy server. In this example CNTLM proxy runs on the host machine listening on localhost:3128. This example overrides the default docker.service file by adding configuration to the Docker systemd service file.&lt;br /&gt;
&lt;br /&gt;
First, create a systemd drop-in directory for the docker service:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo mkdir /etc/systemd/system/docker.service.d&lt;br /&gt;
sudo vi    /etc/systemd/system/docker.service.d/http-proxy.conf&lt;br /&gt;
[Service]&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://proxy.example.com:80/&amp;quot;&lt;br /&gt;
Environment=&amp;quot;HTTP_PROXY=http://172.31.1.1:3128/&amp;quot; #overrides previous entry&lt;br /&gt;
Environment=&amp;quot;HTTPS_PROXY=http://172.31.1.1:3128/&amp;quot;&lt;br /&gt;
# If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable&lt;br /&gt;
Environment=&amp;quot;NO_PROXY=localhost,127.0.0.1,10.6.96.172,proxy.example.com:80&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Flush changes:&lt;br /&gt;
 $ sudo systemctl daemon-reload&lt;br /&gt;
Verify that the configuration has been loaded:&lt;br /&gt;
 $ systemctl show --property=Environment docker&lt;br /&gt;
 Environment=HTTP_PROXY=&amp;lt;nowiki&amp;gt;http://proxy.example.com:80/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Restart Docker:&lt;br /&gt;
 $ sudo systemctl restart docker&lt;br /&gt;
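&lt;br /&gt;
Alternatively, the Docker client itself (17.07+) can inject proxy settings into containers via &amp;lt;code&amp;gt;~/.docker/config.json&amp;lt;/code&amp;gt;; the addresses below reuse the CNTLM example and are placeholders. Careful: this overwrites any existing config.json, which also stores registry credentials.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt; ~/.docker/config.json &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;proxies&amp;quot;: {&lt;br /&gt;
    &amp;quot;default&amp;quot;: {&lt;br /&gt;
      &amp;quot;httpProxy&amp;quot;:  &amp;quot;http://172.31.1.1:3128&amp;quot;,&lt;br /&gt;
      &amp;quot;httpsProxy&amp;quot;: &amp;quot;http://172.31.1.1:3128&amp;quot;,&lt;br /&gt;
      &amp;quot;noProxy&amp;quot;:    &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;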
&lt;br /&gt;
= Docker create and run, basic options = &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create a container but don't start it&lt;br /&gt;
docker container create -it --name=&amp;quot;my-container&amp;quot; ubuntu:latest /bin/bash&lt;br /&gt;
docker container start my-container&lt;br /&gt;
&lt;br /&gt;
docker run -it --name=&amp;quot;mycentos&amp;quot; centos:latest /bin/bash&lt;br /&gt;
# -i   :- interactive mode (attach to STDIN)          \command to execute when instantiating container &lt;br /&gt;
# -t   :- attach to the current terminal (pseudo-TTY)&lt;br /&gt;
# -d   :- disconnect mode, daemon mode, detached mode, running the task in the background&lt;br /&gt;
# -p   :- publish to host exposed container port [ host_port(8080):container_exposedPort(80) ]&lt;br /&gt;
# --rm :- remove container after command has been executed&lt;br /&gt;
# --name=&amp;quot;name_your_container&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# -e|--env MYVAR=123 exports/passing variable to the container, echo $MYVAR will have a value 123&lt;br /&gt;
# --privileged :- option will allow Docker to perform actions normally restricted, &lt;br /&gt;
#                 like binding a device path to an internal container path. &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
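&lt;br /&gt;
Putting a few of these options together; the image and container name are just examples&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Detached nginx, host port 8080 -&amp;gt; container port 80, with an env variable&lt;br /&gt;
docker run -d --rm --name web -p 8080:80 -e MYVAR=123 nginx:latest&lt;br /&gt;
docker exec web printenv MYVAR # prints 123&lt;br /&gt;
docker stop web                # --rm removes the container on stop&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;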
&lt;br /&gt;
= Docker inspect =&lt;br /&gt;
== inspect image ==&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
docker image inspect centos:6&lt;br /&gt;
docker image inspect centos:6 --format '{{.ContainerConfig.Hostname}}' #just a single value&lt;br /&gt;
docker image inspect centos:6 --format '{{json .ContainerConfig}}'     #json key/value output&lt;br /&gt;
docker image inspect centos:6 --format '{{.RepoTags}}'                 #shows all associated tags with the image&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Using &amp;lt;code&amp;gt;--format&amp;lt;/code&amp;gt; is similar to &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt;&lt;br /&gt;
== inspect container ==&lt;br /&gt;
Shows current configuration state of a docker container or an image.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker inspect &amp;lt;container_name&amp;gt; | grep IPAddress&lt;br /&gt;
           &amp;quot;SecondaryIPAddresses&amp;quot;: null,&lt;br /&gt;
           &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
                   &amp;quot;IPAddress&amp;quot;: &amp;quot;172.17.0.3&amp;quot;,&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
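&lt;br /&gt;
Instead of grep, &amp;lt;code&amp;gt;--format&amp;lt;/code&amp;gt; can extract the address directly; &amp;lt;code&amp;gt;mycentos&amp;lt;/code&amp;gt; is the example container from above&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycentos&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;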
&lt;br /&gt;
= Attach/exec to a docker process =&lt;br /&gt;
If you are running eg. &amp;lt;tt&amp;gt;/bin/bash&amp;lt;/tt&amp;gt; as the container command, you can attach to that running process. Note that when you exit, the container will stop.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker attach mycentos&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To avoid stopping a container when exiting an &amp;lt;code&amp;gt;attach&amp;lt;/code&amp;gt; session, we can use the &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; command instead.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker exec -it mycentos /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Attaching directly to a running container and then exiting the shell will cause the container to stop. Executing another shell in a running container and then exiting that shell will not stop the underlying container process started on instantiation.&lt;br /&gt;
&lt;br /&gt;
= Entrypoint, CMD, PID1 and [https://github.com/krallin/tini tini] =&lt;br /&gt;
== Entrypoint and receiving signals ==&lt;br /&gt;
Receiving and handling signals within containers is just as important as for any other application. Remember that a container is just a group of processes running on your host, so you need to take care of signals sent to your applications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Container management commands such as &amp;lt;code&amp;gt;docker stop&amp;lt;/code&amp;gt; send a configurable (in the Dockerfile) signal to the entrypoint of your application; &amp;lt;code&amp;gt;SIGTERM - 15 - Termination (ANSI)&amp;lt;/code&amp;gt; is the default.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT syntax&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
# exec form, requires a JSON array; IT SHOULD ALWAYS BE USED&lt;br /&gt;
ENTRYPOINT [&amp;quot;/app/bin/your-app&amp;quot;, &amp;quot;arg1&amp;quot;, &amp;quot;arg2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# shell form, it always runs as a subcommand of '/bin/sh -c', thus your application will never see any signal sent to it&lt;br /&gt;
ENTRYPOINT &amp;quot;/app/bin/your-app arg1 arg2&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT is a shell script&lt;br /&gt;
If your application is started from a shell script in the regular way, the shell spawns it in a new process and it won't receive signals from Docker. Therefore we need to tell the shell to replace itself with the application using the &amp;lt;code&amp;gt;[https://stackoverflow.com/questions/18351198/what-are-the-uses-of-the-exec-command-in-shell-scripts exec]&amp;lt;/code&amp;gt; command, see also the &amp;lt;code&amp;gt;[https://en.wikipedia.org/wiki/Exec_(system_call) exec syscall]&amp;lt;/code&amp;gt;. Use:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
/app/bin/my-app      # incorrect, signal won't be received by 'my-app'&lt;br /&gt;
exec /app/bin/my-app # correct way&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
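&lt;br /&gt;
The effect of &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt; can be demonstrated outside Docker too: the replacement process keeps the PID of the shell, which is exactly why signals addressed to the container's PID 1 then reach the app&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sh -c 'echo &amp;quot;shell PID: $$&amp;quot;; exec sh -c &amp;quot;echo app PID: \$\$&amp;quot;'&lt;br /&gt;
# both lines print the same PID - the shell was replaced, not forked&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;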
&lt;br /&gt;
&lt;br /&gt;
;ENTRYPOINT exec with piped commands starts a subshell&lt;br /&gt;
Even with &amp;lt;code&amp;gt;exec&amp;lt;/code&amp;gt;, piping forces the command to run in a subshell, with the usual consequence: no signals reach the app.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
exec /app/bin/your-app | tai64n # here you want to add timestamps by piping through tai64n,&lt;br /&gt;
                                # causing running your command in a subshell&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Let another program be PID 1 and handle signalling&lt;br /&gt;
* tini&lt;br /&gt;
* dumb-init&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;-v&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/app/bin/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
tini and dumb-init are also able to proxy signals to process groups, which technically allows you to pipe your output. However, your pipe target receives that signal at the same time, so you can't log anything on cleanup without risking race conditions and SIGPIPEs. So it's better to avoid logging at termination at all.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Change signal that will terminate your container process&lt;br /&gt;
Listen for SIGTERM or set STOPSIGNAL in your Dockerfile.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi Dockerfile&lt;br /&gt;
STOPSIGNAL SIGINT # 'docker stop' will now send SIGINT (the Ctrl+C signal) to the container&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
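&lt;br /&gt;
A minimal sketch of an entrypoint script that actually handles termination signals itself; here &amp;lt;code&amp;gt;sleep&amp;lt;/code&amp;gt; stands in for a hypothetical app&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
# Start the app in the background so the shell stays free to handle the trap&lt;br /&gt;
sleep 1000 &amp;amp;&lt;br /&gt;
child=$!&lt;br /&gt;
trap 'kill &amp;quot;$child&amp;quot; 2&amp;gt;/dev/null; exit 0' TERM INT&lt;br /&gt;
wait &amp;quot;$child&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;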
&lt;br /&gt;
;References:&lt;br /&gt;
* [https://hynek.me/articles/docker-signals/ Why Your Dockerized Application Isn’t Receiving Signals]&lt;br /&gt;
* [http://smarden.org/runit/ runit] alternative to tini&lt;br /&gt;
&lt;br /&gt;
== Tini ==&lt;br /&gt;
It's a tiny but valid init for containers:&lt;br /&gt;
* protects you from software that accidentally creates zombie processes&lt;br /&gt;
* ensures that the default signal handlers work for the software you run in your Docker image&lt;br /&gt;
* does so completely transparently! Docker images that work without Tini will work with Tini without any changes&lt;br /&gt;
* Docker 1.13+ has Tini included; to enable it, just pass the &amp;lt;code&amp;gt;--init&amp;lt;/code&amp;gt; flag to docker run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Understanding Tini&lt;br /&gt;
After spawning your process, Tini will wait for signals and forward those to the child process, and periodically reap zombie processes that may be created within your container. When the &amp;quot;first&amp;quot; child process exits (/your/program in the examples above), Tini exits as well, with the exit code of the child process (so you can check your container's exit code to know whether the child exited successfully).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini - the regular dynamically-linked binary (in the 10KB range)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ENV TINI_VERSION v0.18.0&lt;br /&gt;
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini&lt;br /&gt;
RUN chmod +x /tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# Run your program under Tini&lt;br /&gt;
CMD [&amp;quot;/your/program&amp;quot;, &amp;quot;-and&amp;quot;, &amp;quot;-its&amp;quot;, &amp;quot;arguments&amp;quot;]&lt;br /&gt;
# or docker run your-image /your/program ...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Add Tini to Alpine based image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
RUN apk add --no-cache tini&lt;br /&gt;
# Tini is now available at /sbin/tini&lt;br /&gt;
ENTRYPOINT [&amp;quot;/sbin/tini&amp;quot;, &amp;quot;--&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Existing entrypoint&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
ENTRYPOINT [&amp;quot;/tini&amp;quot;, &amp;quot;--&amp;quot;, &amp;quot;/docker-entrypoint.sh&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;References:&lt;br /&gt;
*[https://github.com/krallin/tini/issues/8 What is advantage of Tini?]&lt;br /&gt;
*[https://ahmet.im/blog/minimal-init-process-for-containers/ Choosing an init process for multi-process containers]&lt;br /&gt;
&lt;br /&gt;
= Mount directory in container =&lt;br /&gt;
We can mount a host directory into a docker container so its content is available inside the container&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker run -it -v /mnt/sdb1:/opt/java pio2pio/java8&lt;br /&gt;
# syntax: -v /path/on/host:/path/in/container&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
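&lt;br /&gt;
The same bind mount in the more explicit &amp;lt;code&amp;gt;--mount&amp;lt;/code&amp;gt; syntax, optionally read-only&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker run -it --mount type=bind,source=/mnt/sdb1,target=/opt/java,readonly pio2pio/java8&lt;br /&gt;
# or with -v: docker run -it -v /mnt/sdb1:/opt/java:ro pio2pio/java8&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;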
&lt;br /&gt;
= Build image = &lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
Each ''RUN'' line creates a new layer (via an intermediate container), so where possible we should join commands so the image ends up with fewer layers.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
$ wget jdk1.8.0_111.tar.gz&lt;br /&gt;
$ cat &amp;gt; Dockerfile &amp;lt;&amp;lt;- 'EOF' #'&amp;lt;&amp;lt;-' heredoc with '-' minus ignores &amp;lt;tab&amp;gt; indent; quoted 'EOF' prevents variable expansion&lt;br /&gt;
ARG TAGVERSION=6                    #the only command allowed before FROM&lt;br /&gt;
FROM ubuntu:${TAGVERSION}&lt;br /&gt;
FROM ubuntu:latest                  #defines base image eg. ubuntu:16.04&lt;br /&gt;
LABEL maintainer=&amp;quot;myname@gmail.com&amp;quot; #key/value pair added to a metadata of the image&lt;br /&gt;
&lt;br /&gt;
ARG ARG1=value1&lt;br /&gt;
&lt;br /&gt;
ENV ENVIRONMENT=&amp;quot;prod&amp;quot;&lt;br /&gt;
ENV SHARE /usr/local/share  #define env variables with syntax ENV space EnvironmentVariable space Value&lt;br /&gt;
ENV JAVA_HOME $SHARE/java&lt;br /&gt;
&lt;br /&gt;
# COPY jdk1.8.0_111.tar.gz /tmp #works only with local files/dirs; copies into the container filesystem, here to /tmp&lt;br /&gt;
# ADD http://example.com/file.txt&lt;br /&gt;
ADD jdk1.8.0_111.tar.gz /  #adds files into the image root folder, can also add URLs&lt;br /&gt;
&lt;br /&gt;
# SHELL [&amp;quot;executable&amp;quot;,&amp;quot;params&amp;quot;] #overrides /bin/sh -c for RUN,CMD, etc..&lt;br /&gt;
&lt;br /&gt;
# Executes commands during build process in a new layer E.g., it is often used for installing software packages&lt;br /&gt;
RUN mv /jdk1.8.0_111.tar.gz $JAVA_HOME &lt;br /&gt;
RUN apt-get update&lt;br /&gt;
RUN [&amp;quot;apt-get&amp;quot;, &amp;quot;update&amp;quot;, &amp;quot;-y&amp;quot;] #JSON array (exec) form, runs the command without requiring a shell executable&lt;br /&gt;
&lt;br /&gt;
VOLUME /mymount_point #this command does not mount anything from a host, just creates a mountpoint&lt;br /&gt;
&lt;br /&gt;
EXPOSE 80 #documents the port; it doesn't automatically map it to the host&lt;br /&gt;
&lt;br /&gt;
#containers usually don't have a service manager eg. systemctl/service/init.d as they are designed to run a single process&lt;br /&gt;
#the entrypoint becomes the main command that starts the main process&lt;br /&gt;
ENTRYPOINT apachectl &amp;quot;-DFOREGROUND&amp;quot; #think about it as the MAIN_PURPOSE_OF_CONTAINER command. &lt;br /&gt;
# It always runs by default; it can only be overridden with 'docker run --entrypoint'&lt;br /&gt;
&lt;br /&gt;
#Single command that will run when the container starts. Only one per Dockerfile, can be overridden.&lt;br /&gt;
CMD [&amp;quot;/bin/bash&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
# STOPSIGNAL&lt;br /&gt;
&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Build ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt; &lt;br /&gt;
docker build --tag myrepo/java8 .  #-f points to a custom Dockerfile name eg. -f Dockerfile2&lt;br /&gt;
# myrepo - DockerHub username, java8 - image name&lt;br /&gt;
# .      - directory containing the Dockerfile&lt;br /&gt;
&lt;br /&gt;
docker build -t myrepo/java8 . --pull --no-cache --squash&lt;br /&gt;
# --pull     always attempt to pull a newer version of the image, even if a local copy exists&lt;br /&gt;
# --no-cache don't use cache to build, forcing to rebuild all interim containers&lt;br /&gt;
# --squash   after the build squash all layers into a single layer. &lt;br /&gt;
&lt;br /&gt;
docker images             #list images&lt;br /&gt;
docker push myrepo/java8 #upload the image to DockerHub repository&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;--squash&amp;lt;/code&amp;gt; is available only on a docker daemon with experimental features enabled.&lt;br /&gt;
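&lt;br /&gt;
A non-experimental alternative to &amp;lt;code&amp;gt;--squash&amp;lt;/code&amp;gt; is a multi-stage build, where only the final artifacts are copied into a fresh image; the paths and build command below are hypothetical&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
FROM ubuntu:latest AS build&lt;br /&gt;
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y build-essential&lt;br /&gt;
COPY . /src&lt;br /&gt;
RUN make -C /src  # hypothetical build producing /src/app&lt;br /&gt;
&lt;br /&gt;
FROM ubuntu:latest&lt;br /&gt;
COPY --from=build /src/app /usr/local/bin/app&lt;br /&gt;
ENTRYPOINT [&amp;quot;/usr/local/bin/app&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;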
&lt;br /&gt;
= Manage containers and images =&lt;br /&gt;
== Run a container ==&lt;br /&gt;
When you ''run'' a container you create a new container from an image that has already been built or pulled, and put it in the running state&lt;br /&gt;
* -d detached mode, run the container in the background without attaching your terminal to it&lt;br /&gt;
* -i interactive mode, keeps STDIN open so you can type into the container&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# docker container run [OPTIONS]           IMAGE    [COMMAND] [ARG...] # usage&lt;br /&gt;
  docker container run -it --name mycentos centos:6 /bin/bash&lt;br /&gt;
  docker           run -it pio2pio/java8 # the 'container' keyword is optional&lt;br /&gt;
# -i       :- run in interactive mode, then run command /bin/bash&lt;br /&gt;
# --rm     :- will delete container after run&lt;br /&gt;
# --publish | -p 80:8080 :- publish exposed container port 80-&amp;gt; to 8080 on the docker-host&lt;br /&gt;
# --publish-all | -P     :- publish all exposed container ports to random port &amp;gt;32768&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== List images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ctop #top for containers&lt;br /&gt;
docker ps -a #list containers&lt;br /&gt;
docker image ls #list images&lt;br /&gt;
docker images #short form of the command above&lt;br /&gt;
docker images --no-trunc&lt;br /&gt;
docker images -q #--quiet&lt;br /&gt;
docker images --filter &amp;quot;before=centos:6&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Search images in remote repository ==&lt;br /&gt;
Search Docker Hub for images. You may need to run &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; first&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=ubuntu&lt;br /&gt;
docker search $IMAGE&lt;br /&gt;
NAME                            DESCRIPTION                                     STARS OFFICIAL   AUTOMATED&lt;br /&gt;
ubuntu                          Ubuntu is a Debian-based Linux operating sys…   8206  [OK]       &lt;br /&gt;
dorowu/ubuntu-desktop-lxde-vnc  Ubuntu with openssh-server and NoVNC            210              [OK]&lt;br /&gt;
rastasheep/ubuntu-sshd          Dockerized SSH service, built on top of offi…   167              [OK]&lt;br /&gt;
&lt;br /&gt;
IMAGE=apache&lt;br /&gt;
docker search $IMAGE --filter stars=50 # search images that have 50 or more stars&lt;br /&gt;
docker search $IMAGE --limit 10        # display top 10 images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
List all available tags&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
IMAGE=nginx&lt;br /&gt;
wget -q https://registry.hub.docker.com/v1/repositories/${IMAGE}/tags -O - | sed -e 's/[][]//g' -e 's/&amp;quot;//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}' | sort -V&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
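The sed/tr/awk pipeline above can be sanity-checked offline against a hand-written sample of the v1 tags payload (a minimal sketch; the sample JSON below is illustrative, not a real registry response):

```shell
# Illustrative sample of the /v1/repositories/<image>/tags payload (hypothetical content)
sample='[{"layer": "", "name": "1.21"}, {"layer": "", "name": "1.25"}, {"layer": "", "name": "latest"}]'

# Same transformation as the wget pipeline above:
# strip brackets/quotes/spaces, one record per line, take the value after name:, version-sort
tags=$(printf '%s' "$sample" \
  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' \
  | tr '}' '\n' \
  | awk -F: '{print $3}' \
  | sort -V)
echo "$tags"
```
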
&lt;br /&gt;
== Pull images ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
                 &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
docker pull hello-world:latest # pull latest&lt;br /&gt;
docker pull --all-tags hello-world # pull all tags&lt;br /&gt;
docker pull --disable-content-trust hello-world # disable verification &lt;br /&gt;
&lt;br /&gt;
docker images --digests #displays sha256: digest of an image&lt;br /&gt;
&lt;br /&gt;
# Dangling images - transitional images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/registries.html#registry_auth from Amazon ECR] ===&lt;br /&gt;
;Docker login to ECR service using IAM&lt;br /&gt;
&amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; does not support native IAM authentication methods. Instead, use the command below, which retrieves, decodes, and converts the &amp;lt;code&amp;gt;authorization IAM token&amp;lt;/code&amp;gt; into a pre-generated &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; command. The resulting login credentials assume your current IAM User/Role permissions. If your current IAM user can only pull from ECR, then even after logging in with &amp;lt;code&amp;gt;docker login&amp;lt;/code&amp;gt; you still won't be able to push an image to the registry. An example error you may get is &amp;lt;code&amp;gt;not authorized to perform: ecr:InitiateLayerUpload&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Log in to the ECR service; your IAM user must have the relevant pull/push permissions&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eval $(aws ecr get-login --region eu-west-1 --no-include-email)&lt;br /&gt;
     # aws ecr get-login # generates below docker command with the login token&lt;br /&gt;
     # docker login -u AWS -p **token** https://$ACCOUNT.dkr.ecr.us-east-1.amazonaws.com # &amp;lt;- output&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker login to a single ECR repository; requires awscli v1.17 or later&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ACCOUNT=111111111111&lt;br /&gt;
REPOSITORY=myrepo&lt;br /&gt;
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.eu-west-1.amazonaws.com/$REPOSITORY&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html push to Amazon ECR] ===&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List images&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 5 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Tag an image 'b09807c20c96'&lt;br /&gt;
docker tag b09807c20c96 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
&lt;br /&gt;
# List images, to verify your newly tagged one&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY                                                 TAG   IMAGE ID     CREATED        SIZE&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   2.0.1 b09807c20c96 6 minutes ago  570MB # &amp;lt;- new tagged image&lt;br /&gt;
ansible-aws                                                2.0.1 b09807c20c96 6 minutes ago  570MB&lt;br /&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws   1.0.0 9bf35fe9cc0e 4 weeks ago    515MB&lt;br /&gt;
&lt;br /&gt;
# Push an image to ECR&lt;br /&gt;
docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws:2.0.1&lt;br /&gt;
The push refers to repository [111111111111.dkr.ecr.eu-west-1.amazonaws.com/ansible-aws]&lt;br /&gt;
2c405c66e675: Pushed &lt;br /&gt;
...&lt;br /&gt;
77cae8ab23bf: Layer already exists &lt;br /&gt;
2.0.1: digest: sha256:111111111193969807708e1f6aea2b19a08054f418b07cf64016a6d1111111111 size: 1796&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Save and import image ==&lt;br /&gt;
To move an image to another filesystem we can save it into a &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; archive&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Export&lt;br /&gt;
docker image save myrepo/centos:v2 &amp;gt; mycentos.v2.tar&lt;br /&gt;
tar -tvf mycentos.v2.tar&lt;br /&gt;
&lt;br /&gt;
# Import&lt;br /&gt;
docker image import mycentos.v2.tar &amp;lt;new_image_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Load from a stream&lt;br /&gt;
docker load &amp;lt; mycentos.v2.tar #or --input mycentos.v2.tar to avoid redirections&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Export aka commit container into image ==&lt;br /&gt;
Let's say we want to modify the stock image centos:6 by installing Apache interactively, setting it to autostart, then exporting it as a new image. Let's do it!&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker pull centos:6&lt;br /&gt;
docker container run -it --name apache-centos6 centos:6&lt;br /&gt;
# Interactively do: yum -y update; yum install -y httpd; chkconfig httpd on; exit&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option1&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; b237d65fd197 newcentos:withapache #creates new image from a container's changes&lt;br /&gt;
docker commit -m &amp;quot;added httpd daemon&amp;quot; -a &amp;quot;Piotr&amp;quot; &amp;lt;container_name&amp;gt; &amp;lt;repo&amp;gt;/&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;&lt;br /&gt;
# -a :- author&lt;br /&gt;
&lt;br /&gt;
# Save container changes - option2&lt;br /&gt;
docker container export apache-centos6 &amp;gt; apache-centos6.tar&lt;br /&gt;
docker image     import apache-centos6.tar newcentos:withapache&lt;br /&gt;
&lt;br /&gt;
docker images&lt;br /&gt;
REPOSITORY    TAG          IMAGE ID            CREATED             SIZE&lt;br /&gt;
newcentos     withapache   ea5215fb46ed        50 seconds ago      272MB&lt;br /&gt;
&lt;br /&gt;
docker image history newcentos:withapache&lt;br /&gt;
IMAGE        CREATED        CREATED BY   SIZE   COMMENT&lt;br /&gt;
ea5215fb46ed 2 minutes ago               272MB  Imported from -&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The difference between the two ways of creating an image from a container:&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container commit&amp;lt;/code&amp;gt; - adds a new layer on top of the original image, preserving its layer history and metadata&lt;br /&gt;
* &amp;lt;code&amp;gt;docker container export&amp;lt;/code&amp;gt; - flattens the container's filesystem into a single layer and discards the image metadata, which is why the imported image tends to be smaller&lt;br /&gt;
&lt;br /&gt;
== Tag images ==&lt;br /&gt;
Tags are commonly used to give an official image a new name when we plan to modify it. This allows creating a new image, running a new container from the tag, and deleting the original image without affecting the new image or containers started from the newly tagged image.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image tag #long version&lt;br /&gt;
docker tag centos:6 myucentos:v1 #this will create a duplicate of centos:6 named myucentos:v1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tagging allows you to modify the repository name and manages references to images located on a filesystem.&lt;br /&gt;
&lt;br /&gt;
== History of an image ==&lt;br /&gt;
We can display the history of layers that created the image by showing the interim images built, in creation order. It shows only layers created on the local filesystem.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker image history myrepo/centos:v2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Stop and delete all containers ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker stop $(docker ps -aq) &amp;amp;&amp;amp; docker rm $(docker ps -aq)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Delete image ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ docker images&lt;br /&gt;
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE&lt;br /&gt;
company-repo        0.1.0               f796d7f843cc        About an hour ago   888MB&lt;br /&gt;
&amp;lt;none&amp;gt;              &amp;lt;none&amp;gt;              04fbac2fdf48        3 hours ago         565MB&lt;br /&gt;
ubuntu              16.04               7aa3602ab41e        3 weeks ago         115MB&lt;br /&gt;
&lt;br /&gt;
# Delete image&lt;br /&gt;
$ docker rmi company-repo:0.1.0&lt;br /&gt;
Untagged: company-repo:0.1.0&lt;br /&gt;
Deleted: sha256:e5cca6a080a5c65eacff98e1b17eeb7be02651849b431b46b074899c088bd42a&lt;br /&gt;
..&lt;br /&gt;
Deleted: sha256:bc7cda232a2319578324aae620c4537938743e46081955c4dd0743a89e9e8183&lt;br /&gt;
&lt;br /&gt;
# Prune image - delete dangling (temp/interim) images. &lt;br /&gt;
# These are not associated with end-product image or containers.&lt;br /&gt;
docker image prune&lt;br /&gt;
docker image prune -a #remove all images not associated with any container &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cleaning up space by removing docker objects ==&lt;br /&gt;
This applies to both standalone Docker and swarm systems.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker system df     #show disk usage&lt;br /&gt;
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE&lt;br /&gt;
Images              1                   0                   131.7MB             131.7MB (100%)&lt;br /&gt;
Containers          0                   0                   0B                  0B&lt;br /&gt;
Local Volumes       0                   0                   0B                  0B&lt;br /&gt;
Build Cache         0                   0                   0B                  0B&lt;br /&gt;
&lt;br /&gt;
docker network ls #note all networks below are system created, so won't get removed&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
452b1c428209        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
&lt;br /&gt;
docker system prune #removes objects created by a user only, on the current node only&lt;br /&gt;
                    #add --volumes to remove them as well&lt;br /&gt;
WARNING! This will remove:&lt;br /&gt;
        - all stopped containers&lt;br /&gt;
        - all networks not used by at least one container&lt;br /&gt;
        - all dangling images&lt;br /&gt;
        - all dangling build cache&lt;br /&gt;
Are you sure you want to continue? [y/N]&lt;br /&gt;
&lt;br /&gt;
docker system prune -a --volumes #remove all&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Docker Volumes ==&lt;br /&gt;
Docker's 'copy-on-write' philosophy drives both performance and efficiency. Only the top layer is writable, and it is a delta of the underlying layers.&lt;br /&gt;
&lt;br /&gt;
Volumes can be mounted to your container instances from your underlying host systems.&lt;br /&gt;
&lt;br /&gt;
''_data'' volumes bypass the storage driver, since they represent a file/directory on the host filesystem (under /var/lib/docker). As a result, their contents are not affected when a container is removed.&lt;br /&gt;
&lt;br /&gt;
Volumes are data mounts created on the host in the &amp;lt;code&amp;gt;/var/lib/docker/volumes/&amp;lt;/code&amp;gt; directory and referenced by name in a Dockerfile.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker volume ls                   #list volumes created by VOLUME directive in a Dockerfile&lt;br /&gt;
sudo tree /var/lib/docker/volumes/ #list volumes on host-side&lt;br /&gt;
docker volume create  my-vol-1&lt;br /&gt;
docker volume inspect my-vol-1&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;CreatedAt&amp;quot;: &amp;quot;2019-01-17T08:47:01Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;local&amp;quot;,&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Mountpoint&amp;quot;: &amp;quot;/var/lib/docker/volumes/my-vol-1/_data&amp;quot;,&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;my-vol-1&amp;quot;,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {},&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;local&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using volumes with Swarm services &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run  --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount httpd #container&lt;br /&gt;
docker service create --name web1 -p 80:80 --mount source=my-vol-1,target=/internal-mount --replicas 3 httpd #swarm service&lt;br /&gt;
# --volume|-v is not supported with services, use --mount; this will create the volume across the swarm when needed,&lt;br /&gt;
# but it will not replicate files&lt;br /&gt;
&lt;br /&gt;
docker exec -it web1 /bin/bash #connect to the container&lt;br /&gt;
root@c123:/ echo &amp;quot;Created when connected to container: volume-web1&amp;quot; &amp;gt; /internal-mount/local.txt; exit&lt;br /&gt;
&lt;br /&gt;
# prove the file is on a host filesystem created volume&lt;br /&gt;
user@dockerhost$ cat /var/lib/docker/volumes/my-vol-1/_data/local.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Host storage mount&lt;br /&gt;
Bind mounting binds a host filesystem directory to a container directory. Unlike mounting a volume, it does not require a mount point and a Docker-managed volume on the host.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir /home/user/web1&lt;br /&gt;
echo &amp;quot;web1 index&amp;quot; &amp;gt; /home/user/web1/index.html&lt;br /&gt;
docker container run -d --name testweb -p 80:80 --mount type=bind,source=/home/user/web1,target=/usr/local/apache2/htdocs httpd&lt;br /&gt;
curl http://localhost:80&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Removing a service is not going to remove the volume; you must delete the volume itself, in which case it will be removed from all swarm nodes.&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
=== Container Network Model ===&lt;br /&gt;
It's a network implementation concept built on multiple private networks spanning multiple hosts, overlayed and managed by IPAM, the protocol that keeps track of and provisions addresses.&lt;br /&gt;
&lt;br /&gt;
Main 3 components:&lt;br /&gt;
* sandbox - contains the configuration of a container's network stack, incl. management of interfaces, routing and DNS. An implementation of a Sandbox could be e.g. a Linux Network Namespace. A Sandbox may contain many endpoints from multiple networks.&lt;br /&gt;
* endpoint - joins a Sandbox to a Network. Interfaces, switches, ports, etc.; an endpoint belongs to only one network at a time. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability.&lt;br /&gt;
* network - a collection of endpoints that can communicate directly (bridges, VLANs, etc.) and can consist of 1 to N endpoints&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Container-network-model.png|none||left|Container Network Model]]&lt;br /&gt;
&lt;br /&gt;
;IPAM (Internet Protocol Address Management)&lt;br /&gt;
Managing addresses across multiple hosts on separate physical networks, while providing routing to the underlying swarm networks externally, is ''the IPAM problem'' for Docker. Depending on the network driver choice, IPAM is handled at different layers in the stack. ''Network drivers'' enable IPAM through ''DHCP drivers'' or plugin drivers, so complex implementations that would normally have overlapping addresses are supported.&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://success.docker.com/article/networking Docker Reference Architecture: Designing Scalable, Portable Docker Container Networks]&lt;br /&gt;
&lt;br /&gt;
=== Publish exposed container/service ports ===&lt;br /&gt;
;Publishing modes&lt;br /&gt;
;host: set using &amp;lt;code&amp;gt;--publish mode=host,8080:80&amp;lt;/code&amp;gt;, makes ports available only on the underlying host system where the service replica runs, not outside that host; defeats the ''routing mesh'', so the user is responsible for routing&lt;br /&gt;
;ingress: responsible for the ''routing mesh''; makes sure all published ports are available on all hosts in the swarm cluster, regardless of whether a service replica is running on them or not&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# List exposed ports on a container&lt;br /&gt;
docker port CONTAINER [PRIVATE_PORT[/PROTOCOL]]&lt;br /&gt;
docker port web2&lt;br /&gt;
80/tcp -&amp;gt; 0.0.0.0:81&lt;br /&gt;
&lt;br /&gt;
# Publish port&lt;br /&gt;
                                          host  :  container&lt;br /&gt;
                                             \  :  /&lt;br /&gt;
docker container run -d --name web1 --publish 81:80 httpd&lt;br /&gt;
# --publish | -p :- publish to host exposed container port&lt;br /&gt;
# 81             :- port on the host; a range can be used, e.g. 81-85, in which case a port is chosen based on availability&lt;br /&gt;
# 80             :- exposed port on a container&lt;br /&gt;
&lt;br /&gt;
ss -lnt&lt;br /&gt;
State       Recv-Q Send-Q Local Address:Port Peer Address:Port&lt;br /&gt;
LISTEN      0      100        127.0.0.1:25              *:*&lt;br /&gt;
LISTEN      0      128                *:22              *:*&lt;br /&gt;
LISTEN      0      100              ::1:25             :::*&lt;br /&gt;
LISTEN      0      128               :::81             :::*&lt;br /&gt;
LISTEN      0      128               :::22             :::*&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name web1 --publish-all httpd&lt;br /&gt;
# --publish-all | -P publish all container exposed ports to random ports &amp;gt;32768&lt;br /&gt;
CONTAINER ID IMAGE COMMAND              CREATED STATUS PORTS                   NAMES&lt;br /&gt;
c63efe9cbb94 httpd &amp;quot;httpd-foreground&amp;quot;   2 sec.. Up 1 s 80/tcp                  testweb  #port exposed but not published&lt;br /&gt;
cb0711134eb5 httpd &amp;quot;httpd-foreground&amp;quot;   4 sec.. Up 2 s 0.0.0.0:32769-&amp;gt;80/tcp   testweb1 #port exposed and published to host:32769&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Network drivers ===&lt;br /&gt;
The default network on a single stand-alone docker-host is the ''bridge'' network.&lt;br /&gt;
&lt;br /&gt;
;List of Native (part of Docker Engine) Network Drivers:&lt;br /&gt;
;bridge: default on stand-alone hosts; it's a private network internal to the host system, all containers on this host using the bridge network can communicate, external access is granted by port exposure or static routes added with the host as the gateway for that network&lt;br /&gt;
;none: used when a container does not need any networking; it can still be accessed from the host using the &amp;lt;code&amp;gt;docker attach [containerID]&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker exec -it [containerID]&amp;lt;/code&amp;gt; commands&lt;br /&gt;
;host: aka ''Host Only Networking'', only accessible via the underlying host; access to services can be provided by exposing ports to the host system&lt;br /&gt;
;overlay: swarm-scope driver, allows communication between all Docker daemons in a cluster, self-extending if needed, managed by the Swarm manager; it's the default mode of Swarm communication&lt;br /&gt;
;ingress: extended network across all nodes in the cluster; a special overlay network that load balances network traffic amongst a given service's working nodes; maintains a list of all IP addresses from nodes that participate in that service (using the IPVS module) and, when a request comes in, routes it to one of them for the indicated service; provides the ''routing mesh'' that allows services to be exposed to the external network without having a replica running on every node in the Swarm&lt;br /&gt;
;docker gateway bridge: special bridge network that allows overlay networks (incl. ingress) to access an individual Docker daemon's physical network; every container run within a service is connected to the local Docker daemon's host network; automatically created when Docker is initialised or joined to a swarm by the &amp;lt;code&amp;gt;docker swarm init&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;docker swarm join&amp;lt;/code&amp;gt; commands.&lt;br /&gt;
&lt;br /&gt;
;Docker interfaces&lt;br /&gt;
* &amp;lt;code&amp;gt;docker0&amp;lt;/code&amp;gt; - adapter is installed by default during Docker setup and will be assigned an address range that will determine the local host IPs available to the containers running on it&lt;br /&gt;
&lt;br /&gt;
;Default bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network ls #default networks list&lt;br /&gt;
NETWORK ID    NAME                DRIVER   SCOPE&lt;br /&gt;
130833da0920  bridge              bridge   local&lt;br /&gt;
528db1bf80f1  docker_gwbridge     bridge   local&lt;br /&gt;
832c8c6d73a5  host                host     local&lt;br /&gt;
t8jxy5vsy5on  ingress             overlay  swarm  #'ingress' special network 1 per cluster&lt;br /&gt;
815a9c2c4005  none                null     local&lt;br /&gt;
&lt;br /&gt;
docker network inspect bridge #bridge is a default network containers are deployed to&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name web1 -p 8080:80 httpd #publish container port :80 -&amp;gt; :8080 on the docker-host&lt;br /&gt;
docker container inspect web1 | grep IPAdd&lt;br /&gt;
IP=$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1) #get container ip&lt;br /&gt;
curl http://$IP&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create bridge network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=bridge --subnet=192.168.1.0/24 --opt &amp;quot;com.docker.network.driver.mtu&amp;quot;=1501 deviceeth0&lt;br /&gt;
&lt;br /&gt;
docker network ls&lt;br /&gt;
docker network inspect deviceeth0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create overlay network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
docker network ls &lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
130833da0920        bridge              bridge              local&lt;br /&gt;
528db1bf80f1        docker_gwbridge     bridge              local&lt;br /&gt;
832c8c6d73a5        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
815a9c2c4005        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect network&lt;br /&gt;
&amp;lt;source lang=json&amp;gt;&lt;br /&gt;
docker network inspect overlay0&lt;br /&gt;
[&lt;br /&gt;
    {&lt;br /&gt;
        &amp;quot;Name&amp;quot;: &amp;quot;overlay0&amp;quot;,&lt;br /&gt;
        &amp;quot;Id&amp;quot;: &amp;quot;2x6bq1czzdc102sl6ge7gpm3w&amp;quot;,&lt;br /&gt;
        &amp;quot;Created&amp;quot;: &amp;quot;2019-01-19T11:24:02.146339562Z&amp;quot;,&lt;br /&gt;
        &amp;quot;Scope&amp;quot;: &amp;quot;swarm&amp;quot;,&lt;br /&gt;
        &amp;quot;Driver&amp;quot;: &amp;quot;overlay&amp;quot;,&lt;br /&gt;
        &amp;quot;EnableIPv6&amp;quot;: false,&lt;br /&gt;
        &amp;quot;IPAM&amp;quot;: {&lt;br /&gt;
            &amp;quot;Driver&amp;quot;: &amp;quot;default&amp;quot;,&lt;br /&gt;
            &amp;quot;Options&amp;quot;: null,&lt;br /&gt;
            &amp;quot;Config&amp;quot;: [&lt;br /&gt;
                {&lt;br /&gt;
                    &amp;quot;Subnet&amp;quot;: &amp;quot;192.168.1.0/24&amp;quot;,&lt;br /&gt;
                    &amp;quot;Gateway&amp;quot;: &amp;quot;192.168.1.1&amp;quot;&lt;br /&gt;
                }&lt;br /&gt;
            ]&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Internal&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Attachable&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Ingress&amp;quot;: false,&lt;br /&gt;
        &amp;quot;ConfigFrom&amp;quot;: {&lt;br /&gt;
            &amp;quot;Network&amp;quot;: &amp;quot;&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;ConfigOnly&amp;quot;: false,&lt;br /&gt;
        &amp;quot;Containers&amp;quot;: null,&lt;br /&gt;
        &amp;quot;Options&amp;quot;: {&lt;br /&gt;
            &amp;quot;com.docker.network.driver.overlay.vxlanid_list&amp;quot;: &amp;quot;4097&amp;quot;&lt;br /&gt;
        },&lt;br /&gt;
        &amp;quot;Labels&amp;quot;: null&lt;br /&gt;
    }&lt;br /&gt;
]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Inspect container network&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container inspect testweb --format {{.HostConfig.NetworkMode}}&lt;br /&gt;
overlay0&lt;br /&gt;
docker container inspect testweb --format {{.NetworkSettings.Networks.dev_bridge.IPAddress}}&lt;br /&gt;
192.168.1.3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Connecting to or disconnecting from a network can be done while a container is running. Connecting won't disconnect the container from its current network.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker network connect --ip=192.168.1.10 deviceeth0 web1&lt;br /&gt;
docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.bridge.IPAddress}}&amp;quot; web1&lt;br /&gt;
IP=$(docker container inspect --format=&amp;quot;{{.NetworkSettings.Networks.deviceeth0.IPAddress}}&amp;quot; web1)&lt;br /&gt;
curl http://$IP&lt;br /&gt;
docker network disconnect deviceeth0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Overlay network in Swarm cluster ===&lt;br /&gt;
An overlay network can be created/removed/updated like any other Docker object. It allows inter-service (container) communication, where the &amp;lt;code&amp;gt;--gateway&amp;lt;/code&amp;gt; IP address is used to reach the outside, e.g. the Internet or the host network. When the &amp;lt;code&amp;gt;overlay&amp;lt;/code&amp;gt; network is created on the manager host, it will get recreated on worker nodes only when it is referenced by a service that uses it. See below.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
swarm-mgr$ docker network create --driver=overlay --subnet=192.168.1.0/24 --gateway=192.168.1.1 overlay0&lt;br /&gt;
swarm-mgr$ docker service create --name web1 -p 8080:80 --network=overlay0 --replicas 2 httpd&lt;br /&gt;
uvxymzdkcfwvs2oznbnk7nv03&lt;br /&gt;
overall progress: 2 out of 2 tasks &lt;br /&gt;
1/2: running   [==================================================&amp;gt;] &lt;br /&gt;
2/2: running   [==================================================&amp;gt;] &lt;br /&gt;
&lt;br /&gt;
swarm-wkr$ docker network ls&lt;br /&gt;
NETWORK ID          NAME                DRIVER              SCOPE&lt;br /&gt;
ba175ebd2a6f        bridge              bridge              local&lt;br /&gt;
a5848f607d8c        docker_gwbridge     bridge              local&lt;br /&gt;
fccfb9c1fdc3        host                host                local&lt;br /&gt;
t8jxy5vsy5on        ingress             overlay             swarm&lt;br /&gt;
127b10783faa        none                null                local&lt;br /&gt;
2x6bq1czzdc1        overlay0            overlay             swarm&lt;br /&gt;
&lt;br /&gt;
# remove network; only affects newly created services, not the running ones&lt;br /&gt;
swarm-mgr$ docker service update --network-rm=overlay0 web1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
docker container run -d --name testweb1 -P --dns=8.8.8.8 \&lt;br /&gt;
                                           --dns=8.8.4.4 \&lt;br /&gt;
                                           --dns-search &amp;quot;mydomain.local&amp;quot; \&lt;br /&gt;
                                           httpd&lt;br /&gt;
# -P :- publish-all exposed ports to random port &amp;gt;32768&lt;br /&gt;
&lt;br /&gt;
docker container exec -it testweb1 /bin/bash -c 'cat /etc/resolv.conf'&lt;br /&gt;
search us-east-2.compute.internal&lt;br /&gt;
nameserver 8.8.8.8&lt;br /&gt;
nameserver 8.8.4.4&lt;br /&gt;
&lt;br /&gt;
# System wide settings, requires docker.service restart&lt;br /&gt;
cat &amp;gt; /etc/docker/daemon.json &amp;lt;&amp;lt;EOF&lt;br /&gt;
{ &lt;br /&gt;
  &amp;quot;dns&amp;quot;: [&amp;quot;8.8.8.8&amp;quot;, &amp;quot;8.8.4.4&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
EOF&lt;br /&gt;
sudo systemctl restart docker.service #required&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
== Lint - best practices ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ docker run --rm -i hadolint/hadolint &amp;lt; Dockerfile&lt;br /&gt;
/dev/stdin:9:16 unexpected newline expecting &amp;quot;\ &amp;quot;, '=', a space followed by the value for the variable 'MAC_ADDRESS', or at least one space&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Default project ==&lt;br /&gt;
As good practice, all Docker files should be source controlled. The basic self-explanatory structure can look like below, and the skeleton can be created with the command below:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir APROJECT &amp;amp;&amp;amp; d=$_; touch $d/{build.sh,run.sh,Dockerfile,README.md,VERSION};mkdir $d/assets; touch $_/{entrypoint.sh,install.sh}&lt;br /&gt;
&lt;br /&gt;
└── APROJECT&lt;br /&gt;
    ├── assets&lt;br /&gt;
    │   ├── entrypoint.sh&lt;br /&gt;
    │   └── install.sh&lt;br /&gt;
    ├── build.sh&lt;br /&gt;
    ├── Dockerfile&lt;br /&gt;
    ├── README.md&lt;br /&gt;
    └── VERSION&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
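The same skeleton written out long-hand (a sketch without the brace expansion and &lt;code&gt;$_&lt;/code&gt; tricks, so it also works under plain sh; run in a scratch directory):

```shell
set -e
cd "$(mktemp -d)"            # scratch directory so nothing is clobbered
mkdir -p APROJECT/assets     # project root plus assets subdirectory
touch APROJECT/build.sh APROJECT/run.sh APROJECT/Dockerfile \
      APROJECT/README.md APROJECT/VERSION \
      APROJECT/assets/entrypoint.sh APROJECT/assets/install.sh
find APROJECT -type f | sort # list all files created
```
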
&lt;br /&gt;
== Dockerfile ==&lt;br /&gt;
A &amp;lt;code&amp;gt;Dockerfile&amp;lt;/code&amp;gt; is simply a build file.&lt;br /&gt;
=== Semantics ===&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt;: Container config: what to start when this image is run.&lt;br /&gt;
&lt;br /&gt;
;&amp;lt;code&amp;gt;entrypoint&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cmd&amp;lt;/code&amp;gt;: Docker allows you to define an Entrypoint and Cmd which you can mix and match in a Dockerfile. Entrypoint is the executable, and Cmd are the arguments passed to the Entrypoint. The Dockerfile schema is quite lenient and allows users to set Cmd without Entrypoint, which means that the first argument in Cmd will be the executable to run.&lt;br /&gt;
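As an illustration, a minimal Dockerfile (a sketch; the ubuntu base image and echo command are just placeholders) showing how Cmd arguments are appended to the Entrypoint:

```dockerfile
FROM ubuntu
# ENTRYPOINT is the executable; CMD supplies its default arguments
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]

# docker run <image>        -> runs: echo Hello world
# docker run <image> there  -> CMD is overridden: echo Hello there
```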
&lt;br /&gt;
=== User management ===&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
RUN addgroup --gid 1001 jenkins -q&lt;br /&gt;
RUN adduser  --gid 1001 --home /tank --disabled-password --gecos '' --uid 1001 jenkins&lt;br /&gt;
# --gid add user to group 1001&lt;br /&gt;
# --gecos parameter is used to set the additional information. In this case it is just empty.&lt;br /&gt;
# --disabled-password it's like  --disabled-login,  but  logins  are still possible (for example using SSH RSA keys) but not using password authentication&lt;br /&gt;
# USER sets the user for subsequent RUN, CMD and ENTRYPOINT instructions&lt;br /&gt;
# (Dockerfile comments must start at the beginning of a line, not trail a command)&lt;br /&gt;
USER jenkins:jenkins&lt;br /&gt;
# WORKDIR changes the cwd for subsequent RUN, CMD, ENTRYPOINT, COPY and ADD&lt;br /&gt;
WORKDIR /tank&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Multiple stage build ===&lt;br /&gt;
Introduced in Docker 17.05, multi-stage builds let a single Dockerfile use multiple &amp;lt;code&amp;gt;FROM&amp;lt;/code&amp;gt; statements, each starting a new build stage.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
FROM microsoft/aspnetcore-build AS build-env&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
&lt;br /&gt;
# copy csproj and restore as distinct layers&lt;br /&gt;
COPY *.csproj ./&lt;br /&gt;
RUN dotnet restore&lt;br /&gt;
&lt;br /&gt;
# copy everything else and build&lt;br /&gt;
COPY . ./&lt;br /&gt;
RUN dotnet publish -c Release -o output&lt;br /&gt;
&lt;br /&gt;
# build runtime image&lt;br /&gt;
FROM microsoft/aspnetcore&lt;br /&gt;
WORKDIR /app&lt;br /&gt;
# multi-stage: copy artefacts from the previous stage [AS build-env]&lt;br /&gt;
COPY --from=build-env /app/output .&lt;br /&gt;
ENTRYPOINT [&amp;quot;dotnet&amp;quot;, &amp;quot;LetsKube.dll&amp;quot;]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Squash an image =&lt;br /&gt;
Docker uses a &amp;lt;code&amp;gt;Union&amp;lt;/code&amp;gt; filesystem that lets multiple layers share common files, with changes applied on the top layer.&lt;br /&gt;
There is no official way to ''flatten'' layers into a single storage layer or minimise an image size (as of 2017). Below is just a practical approach:&lt;br /&gt;
# Start container from an image&lt;br /&gt;
# Export the container to &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; with all its filesystems&lt;br /&gt;
# Import container with new image name&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once the process completes and the original image is deleted, &amp;lt;code&amp;gt;docker image history&amp;lt;/code&amp;gt; on the new image will show only one layer. Often the image will also be smaller.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# run a container from an image&lt;br /&gt;
docker run myweb:v3&lt;br /&gt;
# export container to .tar&lt;br /&gt;
docker export &amp;lt;contr_name&amp;gt; &amp;gt; myweb.v3.tar&lt;br /&gt;
docker import myweb.v3.tar   myweb:v4&lt;br /&gt;
# note: save/load keep all layers and history, so they do NOT flatten an image&lt;br /&gt;
docker save &amp;lt;image_id&amp;gt; &amp;gt; image.tar&lt;br /&gt;
docker load -i image.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Resources&lt;br /&gt;
*[https://github.com/jwilder/docker-squash docker-squash] GitHub&lt;br /&gt;
&lt;br /&gt;
= Gracefully stop / kill a container =&lt;br /&gt;
''all below are only notes''&lt;br /&gt;
&lt;br /&gt;
Trap ctrl_c then kill/rm container.&lt;br /&gt;
*--init&lt;br /&gt;
*--sig-proxy proxies signals to the container; it only works when --tty=false (--sig-proxy itself defaults to true)&lt;br /&gt;
&lt;br /&gt;
= Proxy =&lt;br /&gt;
If you are behind a corporate proxy, configure the Docker client via its &amp;lt;code&amp;gt;~/.docker/config.json&amp;lt;/code&amp;gt; config file. This requires Docker&lt;br /&gt;
17.07 or newer.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;proxies&amp;quot;:&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;default&amp;quot;:&lt;br /&gt;
   {&lt;br /&gt;
     &amp;quot;httpProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;httpsProxy&amp;quot;: &amp;quot;http://10.0.0.1:3128&amp;quot;,&lt;br /&gt;
     &amp;quot;noProxy&amp;quot;: &amp;quot;localhost,127.0.0.1,*.test.example.com,.example2.com&amp;quot;&lt;br /&gt;
   }&lt;br /&gt;
 }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
More details can be found [https://docs.docker.com/network/proxy/#configure-the-docker-client here]&lt;br /&gt;
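The docker CLI reads this file on every invocation, so it is worth checking that the JSON is well-formed before dropping it in place; a sketch using the values from above (scratch file path is arbitrary):

```shell
# write the proxy configuration from above to a scratch file
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
 "proxies": {
   "default": {
     "httpProxy": "http://10.0.0.1:3128",
     "httpsProxy": "http://10.0.0.1:3128",
     "noProxy": "localhost,127.0.0.1,*.test.example.com,.example2.com"
   }
 }
}
EOF
# validate it parses as JSON before copying it to ~/.docker/config.json
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
```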
&lt;br /&gt;
== Insecure registries ==&lt;br /&gt;
These settings can be added in different places; the order below follows the latest practice for each version.&lt;br /&gt;
;docker-ce 18.6: add to &amp;lt;code&amp;gt;/etc/docker/daemon.json&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
    &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;localhost:443&amp;quot;,&amp;quot;10.0.0.0/8&amp;quot;, &amp;quot;172.16.0.0/12&amp;quot;, &amp;quot;192.168.0.0/16&amp;quot; ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo systemctl daemon-reload&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
sudo systemctl show docker | grep Env&lt;br /&gt;
docker info #check Insecure Registries&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Using an environment file, prior to version 18&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo vi /etc/default/docker&lt;br /&gt;
DOCKER_HOME='--graph=/tank/docker'&lt;br /&gt;
DOCKER_GROUP='--group=docker'&lt;br /&gt;
DOCKER_LOG_DRIVER='--log-driver=json-file'&lt;br /&gt;
DOCKER_STORAGE_DRIVER='--storage-driver=btrfs'&lt;br /&gt;
DOCKER_ICC='--icc=false'&lt;br /&gt;
DOCKER_IPMASQ='--ip-masq=true'&lt;br /&gt;
DOCKER_IPTABLES='--iptables=true'&lt;br /&gt;
DOCKER_IPFORWARD='--ip-forward=true'&lt;br /&gt;
DOCKER_ADDRESSES='--host=unix:///var/run/docker.sock'&lt;br /&gt;
DOCKER_INSECURE_REGISTRIES='--insecure-registry 10.0.0.0/8 --insecure-registry 172.16.0.0/12 --insecure-registry 192.168.0.0/16'&lt;br /&gt;
DOCKER_OPTS=&amp;quot;${DOCKER_HOME} ${DOCKER_GROUP} ${DOCKER_LOG_DRIVER} ${DOCKER_STORAGE_DRIVER} ${DOCKER_ICC} ${DOCKER_IPMASQ} ${DOCKER_IPTABLES} ${DOCKER_IPFORWARD} ${DOCKER_ADDRESSES} ${DOCKER_INSECURE_REGISTRIES}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service.d/docker.conf&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd $DOCKER_HOME $DOCKER_GROUP $DOCKER_LOG_DRIVER $DOCKER_STORAGE_DRIVER $DOCKER_ICC $DOCKER_IPMASQ $DOCKER_IPTABLES $DOCKER_IPFORWARD $DOCKER_ADDRESSES $DOCKER_INSECURE_REGISTRIES&lt;br /&gt;
&lt;br /&gt;
$ sudo vi /etc/systemd/system/docker.service&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Docker Application Container Engine&lt;br /&gt;
Documentation=https://docs.docker.com&lt;br /&gt;
After=network-online.target docker.socket firewalld.service&lt;br /&gt;
Wants=network-online.target&lt;br /&gt;
Requires=docker.socket&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
EnvironmentFile=-/etc/default/docker&lt;br /&gt;
Type=notify&lt;br /&gt;
# the default is not to use systemd for cgroups because the delegate issues still&lt;br /&gt;
# exists and systemd currently does not support the cgroup feature set required&lt;br /&gt;
# for containers run by docker&lt;br /&gt;
ExecStart=/usr/bin/dockerd -H fd://&lt;br /&gt;
ExecReload=/bin/kill -s HUP $MAINPID&lt;br /&gt;
LimitNOFILE=1048576&lt;br /&gt;
# Having non-zero Limit*s causes performance problems due to accounting overhead&lt;br /&gt;
# in the kernel. We recommend using cgroups to do container-local accounting.&lt;br /&gt;
LimitNPROC=infinity&lt;br /&gt;
LimitCORE=infinity&lt;br /&gt;
# Uncomment TasksMax if your systemd version supports it.&lt;br /&gt;
# Only systemd 226 and above support this version.&lt;br /&gt;
TasksMax=infinity&lt;br /&gt;
TimeoutStartSec=0&lt;br /&gt;
# set delegate yes so that systemd does not reset the cgroups of docker containers&lt;br /&gt;
Delegate=yes&lt;br /&gt;
# kill only the docker process, not all processes in the cgroup&lt;br /&gt;
KillMode=process&lt;br /&gt;
# restart the docker process if it exits prematurely&lt;br /&gt;
Restart=on-failure&lt;br /&gt;
StartLimitBurst=3&lt;br /&gt;
StartLimitInterval=60s&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run docker without sudo ==&lt;br /&gt;
Adding a user to the &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt; group should be sufficient. However, on AppArmor, SELinux or a filesystem with ACLs enabled, additional permissions might be required to access the &amp;lt;tt&amp;gt;socket file&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
$ ll /var/run/docker.sock&lt;br /&gt;
srw-rw---- 1 root docker 0 Sep  6 12:31 /var/run/docker.sock=&lt;br /&gt;
# ACL&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&lt;br /&gt;
# Grant an ACL to the jenkins user&lt;br /&gt;
$ sudo setfacl -m user:jenkins:rw /var/run/docker.sock&lt;br /&gt;
&lt;br /&gt;
$ sudo getfacl /var/run/docker.sock&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: var/run/docker.sock&lt;br /&gt;
# owner: root&lt;br /&gt;
# group: docker&lt;br /&gt;
user::rw-&lt;br /&gt;
user:jenkins:rw-&lt;br /&gt;
group::rw-&lt;br /&gt;
mask::rw-&lt;br /&gt;
other::---&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
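A quick way to check whether the current user already has socket access, whichever route (group or ACL) granted it; a sketch assuming the default socket path:

```shell
# check whether the invoking user can write to the docker socket
sock=/var/run/docker.sock
if [ -S "$sock" ] && [ -w "$sock" ]; then
  echo "socket writable - docker works without sudo"
else
  echo "no socket access - add the user to the docker group or grant an ACL"
fi
```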
;References&lt;br /&gt;
* [https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo how-can-i-use-docker-without-sudo]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://www.weave.works/blog/my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies/ my-container-wont-stop-on-ctrl-c-and-other-minor-tragedies]&lt;br /&gt;
*[https://github.com/moby/moby/pull/12228 PID1 in container aka tinit]&lt;br /&gt;
*[https://container-solutions.com/understanding-volumes-docker/ understanding-volumes-docker]&lt;br /&gt;
&lt;br /&gt;
= Docker Enterprise Edition =&lt;br /&gt;
*[https://success.docker.com/article/compatibility-matrix Compatibility Matrix]&lt;br /&gt;
Components:&lt;br /&gt;
* Docker daemon (fka &amp;quot;Engine&amp;quot;)&lt;br /&gt;
* Docker Trusted Registry (DTR)&lt;br /&gt;
* Docker Universal Control Plane (UCP)&lt;br /&gt;
&lt;br /&gt;
= Docker Swarm =&lt;br /&gt;
== Swarm - sizing ==&lt;br /&gt;
;Universal Control Plane (UCP)&lt;br /&gt;
This is only for the Enterprise Edition&lt;br /&gt;
* ports managers, workers in/out&lt;br /&gt;
&lt;br /&gt;
Hardware requirements:&lt;br /&gt;
* 8 GB RAM for managers or DTR (Docker Trusted Registry)&lt;br /&gt;
* 4 GB RAM for workers&lt;br /&gt;
* 3 GB free disk space&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Performance Consideration (Timing)&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
Component                              Timeout(ms)  Configurable&lt;br /&gt;
Raft consensus between manager nodes   3000         no&lt;br /&gt;
Gossip protocol for overlay networking 5000         no&lt;br /&gt;
etcd                                   500          yes&lt;br /&gt;
RethinkDB                              10000        no&lt;br /&gt;
Stand-alone swarm                      90000        no&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Compatibility Docker EE&lt;br /&gt;
* Docker Engine 17.06+&lt;br /&gt;
* DTR 2.3+&lt;br /&gt;
* UCP 2.2+&lt;br /&gt;
== Swarm with single host manager ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Initialise Swarm&lt;br /&gt;
docker swarm init --advertise-addr 172.31.16.10 #you get a SWMTKN join token&lt;br /&gt;
To add a worker to this swarm, run the following command:&lt;br /&gt;
    docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.&lt;br /&gt;
&lt;br /&gt;
# Join tokens&lt;br /&gt;
docker swarm join-token manager #display manager join-token, run on manager&lt;br /&gt;
docker swarm join-token worker  #display worker  join-token, run on manager&lt;br /&gt;
&lt;br /&gt;
# Join worker, run new-worker-node&lt;br /&gt;
#                                 -&amp;gt;            swarm cluster id                    &amp;lt;-&amp;gt; this part is mgr/wkr &amp;lt;- -&amp;gt; mgr node &amp;lt;-&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
&lt;br /&gt;
# Join another manager, run on new-manager-node&lt;br /&gt;
docker swarm join-token manager #run on the primary manager if you wish to add another manager&lt;br /&gt;
# the output is a token; the part up to the last dash identifies the Swarm cluster, the final part encodes the role&lt;br /&gt;
&lt;br /&gt;
# join to swarm (cluster), token will identify a role in the cluster manager or worker&lt;br /&gt;
docker swarm join --token SWMTKN-xxxx&lt;br /&gt;
docker swarm join --token SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo 172.31.16.10:2377&lt;br /&gt;
This node joined a swarm as a worker.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
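As the comments above note, the token layout is SWMTKN-1-&lt;cluster id&gt;-&lt;role secret&gt;; plain shell string slicing makes the parts visible (using the sample token from the text, not a live one):

```shell
# sample token from the text; real ones come from `docker swarm join-token`
token="SWMTKN-1-1i2v91qbj0pg88dxld15vpx3e74qm5clk7xkcrg6j3xknedqui-dh60f4j09itiqjfhqa196ufvo"
cluster=$(echo "$token" | cut -d- -f3)   # identifies the swarm cluster
role=$(echo "$token" | cut -d- -f4)      # differs for manager vs worker joins
echo "cluster=$cluster"
echo "role=$role"
```

Manager and worker tokens for the same swarm share the cluster part and differ only in the final part.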
&lt;br /&gt;
&lt;br /&gt;
Check Swarm status&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
[cloud_user@ip-172-31-16-10 swarm-manager]$ docker node ls&lt;br /&gt;
ID                            HOSTNAME                          STATUS   AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   ip-172-31-16-10.mylabserver.com   Ready    Active       Leader         18.09.0&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     ip-172-31-16-94.mylabserver.com   Ready    Active                      18.09.0&lt;br /&gt;
&lt;br /&gt;
docker system info | grep -A 7 Swarm&lt;br /&gt;
Swarm: active&lt;br /&gt;
 NodeID: 641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
 Is Manager: true&lt;br /&gt;
 ClusterID: 4jqxdmfd0w5pc4if4fskgd5nq&lt;br /&gt;
 Managers: 1&lt;br /&gt;
 Nodes: 2&lt;br /&gt;
 Default Address Pool: 10.0.0.0/8  &lt;br /&gt;
 SubnetSize: 24&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo systemctl disable firewalld &amp;amp;&amp;amp; sudo systemctl stop firewalld # CentOS&lt;br /&gt;
sudo -i; printf &amp;quot;\n10.0.0.11 mgr01\n10.0.0.12 node01\n&amp;quot; &amp;gt;&amp;gt; /etc/hosts # Add nodes to hosts file; exit&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Swarm cluster ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --availability drain [node] #drain tasks off a node, e.g. for manager-only nodes&lt;br /&gt;
docker service update --force [service_name]  #force re-balance services across cluster&lt;br /&gt;
&lt;br /&gt;
docker swarm leave #node leaves a cluster&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Locking / unlocking swarm cluster ==&lt;br /&gt;
The Raft logs used by Swarm managers are encrypted on disk, but access to a node also gives access to the keys that encrypt them. Auto-lock further protects the cluster by requiring an unlock key whenever a manager's Docker daemon restarts.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm init   --auto-lock=true #initialise a new swarm with auto-lock enabled&lt;br /&gt;
docker swarm update --auto-lock=true #update current swarm&lt;br /&gt;
# both will produce unlock token STKxxx&lt;br /&gt;
docker swarm unlock #it'll ask for the unlock token&lt;br /&gt;
docker swarm update --auto-lock=false #disable key locking&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you have access to a manager you can always retrieve the unlock key using:&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Key management&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker swarm unlock-key --rotate #could be in a cron&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Backup and restore swarm cluster ==&lt;br /&gt;
This process describes how to back up the whole cluster configuration so it can be restored on a new set of servers.&lt;br /&gt;
&lt;br /&gt;
Create a docker service running across the swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name bkweb --publish 80:80 --replicas 2 httpd&lt;br /&gt;
$ docker service ls&lt;br /&gt;
ID           NAME      MODE          REPLICAS  IMAGE         PORTS&lt;br /&gt;
q9jki3n2hffm bkweb     replicated    2/2       httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
$ docker service ps bkweb #note containers run on 2 different nodes&lt;br /&gt;
ID           NAME      IMAGE         NODE                      DESIRED STATE CURRENT STATE          &lt;br /&gt;
j964jm1lq3q5 bkweb.1   httpd:latest  server2c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
jpjx3mk7hhm0 bkweb.2   httpd:latest  server1c.mylabserver.com  Running       Running about a minute ago&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Backup state files&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i&lt;br /&gt;
cd /var/lib/docker/swarm&lt;br /&gt;
cat docker-state.json #contains info about managers, workers, certificates, etc..&lt;br /&gt;
cat state.json&lt;br /&gt;
sudo systemctl stop docker.service&lt;br /&gt;
&lt;br /&gt;
# Backup the swarm cluster; this file can then be used to recover the whole swarm cluster on another set of servers&lt;br /&gt;
sudo tar -czvf swarm.tar.gz /var/lib/docker/swarm/&lt;br /&gt;
&lt;br /&gt;
#the running docker containers should be brought up as they were before stopping the service&lt;br /&gt;
systemctl start docker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Recover using swarm.tar backup&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# scp swarm.tar.gz to the recovery node, i.e. a node with a fresh Docker install&lt;br /&gt;
sudo rm -rf /var/lib/docker/swarm&lt;br /&gt;
sudo systemctl stop docker&lt;br /&gt;
&lt;br /&gt;
# Option1 untar directly; the archive stores paths relative to /, so extract at the root&lt;br /&gt;
sudo tar -xzvf swarm.tar.gz -C /&lt;br /&gt;
&lt;br /&gt;
# Option2 copy recursively; -f overwrites existing files&lt;br /&gt;
tar -xzvf swarm.tar.gz; cd /var/lib/docker&lt;br /&gt;
cp -rf swarm/ /var/lib/docker/&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start docker&lt;br /&gt;
docker swarm init --force-new-cluster # restores the cluster state with exactly the same tokens&lt;br /&gt;
# you should join all required nodes to new manager ip&lt;br /&gt;
# scale services down to 1, then scale up so get distributed to other nodes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
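Because tar strips the leading / and stores member paths relative to the root, the restore step has to extract at / (or copy the files in, as in option 2). The path behaviour can be rehearsed on dummy data, no Docker daemon needed (temp directories stand in for the two hosts' root filesystems):

```shell
# rehearse the backup/restore path handling with a dummy swarm directory
root=$(mktemp -d)                         # stands in for / on the old node
mkdir -p "$root/var/lib/docker/swarm"
echo '{"demo":true}' > "$root/var/lib/docker/swarm/state.json"
tar -czf swarm-demo.tar.gz -C "$root" var/lib/docker/swarm

new=$(mktemp -d)                          # stands in for / on the recovery node
tar -xzf swarm-demo.tar.gz -C "$new"      # recreates var/lib/docker/swarm under $new
cat "$new/var/lib/docker/swarm/state.json"
```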
&lt;br /&gt;
== Run containers as services ==&lt;br /&gt;
A standalone Docker container has a number of limitations, so running it as a service, where a cluster manager (Swarm or Kubernetes) handles networking, access, load balancing and so on, is the way to scale with ease. A service uses e.g. mesh routing to handle access to its containers.&lt;br /&gt;
&lt;br /&gt;
Swarm node setup: 1 manager and 2 workers&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
ID                            HOSTNAME                          STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready   Active Leader       18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready   Active              18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready   Active              18.09.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create and run a service&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull httpd&lt;br /&gt;
docker service create --name serviceweb --publish 80:80 httpd&lt;br /&gt;
# --publish|-p exposes the port on every node in the running cluster (routing mesh)&lt;br /&gt;
&lt;br /&gt;
docker service ls&lt;br /&gt;
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS&lt;br /&gt;
vt0ftkifbd84        serviceweb          replicated          1/1                 httpd:latest        *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker service ps serviceweb #show nodes that a container is running on, here on mgr-1 node&lt;br /&gt;
ID           NAME         IMAGE        NODE                    DESIRED STATE CURRENT STATE  ERROR  PORTS&lt;br /&gt;
e6rx3tzgp1e5 serviceweb.1 httpd:latest swarm-mgr-1.example.com Running       Running about                  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running as a service, even if a container runs on a single node (replicas=1), it can be accessed from any swarm node. This is because the service's published port is exposed on the mesh overlay network that spans all nodes.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-mgr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-1.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
[user@swarm-mgr-1 ~]$ curl -k http://swarm-wkr-2.example.com&lt;br /&gt;
  &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;It works!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A service update can change limits, volumes, environment variables and more...&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service scale serviceweb=3             #or&lt;br /&gt;
docker service update --replicas 3 serviceweb #--detach=false shows visual progress in older versions, default in v18.06&lt;br /&gt;
serviceweb&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
&lt;br /&gt;
# Limits (hard cap on usage) and reservations (guaranteed minimum); updating them causes new containers to be started&lt;br /&gt;
docker service update --limit-cpu=.5 --reserve-cpu=.75 --limit-memory=128m --reserve-memory=256m serviceweb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Templating service names ==&lt;br /&gt;
This allows controlling e.g. the hostname within a cluster. Useful in big clusters, where it makes it easier to tell from the hostname which node a service runs on.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name web --hostname=&amp;quot;{{.Node.ID}}-{{.Service.Name}}&amp;quot; httpd&lt;br /&gt;
docker service ps --no-trunc web&lt;br /&gt;
docker inspect --format=&amp;quot;{{.Config.Hostname}}&amp;quot; web.1.ab10_serviceID_cd&lt;br /&gt;
aa_nodeID_bb-web&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Node labels for task/service placement ==&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node ls&lt;br /&gt;
ID                            HOSTNAME                  STATUS AVAILABILITY        MANAGER STATUS      ENGINE VERSION&lt;br /&gt;
641bfndn49b1i1dj17s8cirgw *   swarm-mgr-1.example.com   Ready  Active              Leader              18.09.1&lt;br /&gt;
vlw7te728z7bvd7ulb3hn08am     swarm-wkr-1.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
r8h7xmevue9v2mgysmld59py2     swarm-wkr-2.example.com   Ready  Active                                  18.09.1&lt;br /&gt;
&lt;br /&gt;
docker node inspect 641bfndn49b1i1dj17s8cirgw --pretty&lt;br /&gt;
ID:                     641bfndn49b1i1dj17s8cirgw&lt;br /&gt;
Hostname:               swarm-mgr-1.example.com &lt;br /&gt;
Joined at:              2019-01-08 12:16:56.277717163 +0000 utc&lt;br /&gt;
Status:&lt;br /&gt;
 State:                 Ready&lt;br /&gt;
 Availability:          Active&lt;br /&gt;
 Address:               172.31.10.10&lt;br /&gt;
Manager Status:&lt;br /&gt;
 Address:               172.31.10.10:2377&lt;br /&gt;
 Raft Status:           Reachable&lt;br /&gt;
 Leader:                Yes&lt;br /&gt;
Platform:&lt;br /&gt;
 Operating System:      linux&lt;br /&gt;
 Architecture:          x86_64&lt;br /&gt;
Resources:&lt;br /&gt;
 CPUs:                  2&lt;br /&gt;
 Memory:                3.699GiB&lt;br /&gt;
Plugins:&lt;br /&gt;
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog&lt;br /&gt;
 Network:               bridge, host, macvlan, null, overlay&lt;br /&gt;
 Volume:                local&lt;br /&gt;
Engine Version:         18.09.1&lt;br /&gt;
TLS Info:&lt;br /&gt;
 TrustRoot:&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIBajCCARCgAwIBAgIUKXz3wtc8OA8uzTo1pO86ko+PB+EwCgYIKoZIzj0EAwIw&lt;br /&gt;
..&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YX.....h&lt;br /&gt;
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy......==&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Apply label to a node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker node update --label-add node-env=testnode r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
docker node inspect r8h7xmevue9v2mgysmld59py2 --pretty | grep -B1 -A2 Labels&lt;br /&gt;
ID:                     r8h7xmevue9v2mgysmld59py2&lt;br /&gt;
Labels:&lt;br /&gt;
 - node-env=testnode&lt;br /&gt;
Hostname:               swarm-wkr-1.example.com&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
How to use it: run a service with the &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; option, which pins services to nodes meeting the given criteria; in our case, nodes where &amp;lt;code&amp;gt;node.labels.node-env == testnode&amp;lt;/code&amp;gt;. Note that all replicas run on the same node, instead of being distributed across the cluster.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name constraints -p 80:80 --constraint 'node.labels.node-env == testnode' --replicas 3 httpd #node.role, node.id, node.hostname&lt;br /&gt;
zrk15vfdaitc1rvw9wqh2s0ot&lt;br /&gt;
overall progress: 3 out of 3 tasks &lt;br /&gt;
1/3: running   [==================================================&amp;gt;] &lt;br /&gt;
2/3: running   [==================================================&amp;gt;] &lt;br /&gt;
3/3: running   [==================================================&amp;gt;] &lt;br /&gt;
verify: Service converged &lt;br /&gt;
[cloud_user@mrpiotrpawlak1c ~]$ docker service ls&lt;br /&gt;
ID           NAME          MODE         REPLICAS IMAGE         PORTS&lt;br /&gt;
zrk15vfdaitc constraints   replicated   3/3      httpd:latest  *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
[user@swarm-wkr-2 ~]$ docker service ps constraints&lt;br /&gt;
ID           NAME          IMAGE        NODE                      DESIRED STATE       CURRENT STATE            ERROR               PORTS&lt;br /&gt;
y5z4mt99uzpo constraints.1 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
zqbn4ips969q constraints.2 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago                       &lt;br /&gt;
vnb10jcs2915 constraints.3 httpd:latest swarm-wkr-2.example.com   Running Running 41 seconds ago &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Scaling services ==&lt;br /&gt;
These commands must be issued on a manager node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull nginx&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
docker service ps web                  #there is only 1 replica&lt;br /&gt;
docker service update --replicas 3 web #update to 3 replicas&lt;br /&gt;
docker service create --name nginx --publish 5901:80 nginx&lt;br /&gt;
elinks http://swarm-mgr-1.example.com:5901 #the nginx welcome page will be presented&lt;br /&gt;
&lt;br /&gt;
# scale is equivalent to update --replicas command for a single or multiple services&lt;br /&gt;
docker service scale web=3 nginx=3&lt;br /&gt;
docker service ls&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Replicated services vs global services ==&lt;br /&gt;
;Global mode: runs at least one copy of a service on each swarm node; even if you join another node, the service will converge there as well. In global mode you cannot use the &amp;lt;code&amp;gt;update --replicas&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;scale&amp;lt;/code&amp;gt; commands. It is not possible to change the mode of an existing service.&lt;br /&gt;
;Replicated mode: allows for greater control and flexibility over the number of running replicas.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# creates a single service running across whole cluster in replicated mode&lt;br /&gt;
docker service create --name web --publish 80:80 httpd&lt;br /&gt;
&lt;br /&gt;
# run in global mode&lt;br /&gt;
docker service create --name web --publish 5901:80 --mode global httpd&lt;br /&gt;
docker service ls #note distinct mode names: global and replicated&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Docker compose and deploy to Swarm =&lt;br /&gt;
Install&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo yum install epel-release&lt;br /&gt;
sudo yum install python-pip&lt;br /&gt;
sudo pip install --upgrade pip&lt;br /&gt;
# install Docker CE or EE first to avoid Python library conflicts&lt;br /&gt;
sudo pip install docker-compose&lt;br /&gt;
&lt;br /&gt;
# Troubleshooting&lt;br /&gt;
## Err: Cannot uninstall 'requests'. It is a distutils installed project...&lt;br /&gt;
pip install --ignore-installed requests&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Dockerfile&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;Dockerfile &amp;lt;&amp;lt;EOF&lt;br /&gt;
FROM centos:latest&lt;br /&gt;
RUN yum install -y httpd&lt;br /&gt;
RUN echo &amp;quot;Website1&amp;quot; &amp;gt;&amp;gt; /var/www/html/index.html&lt;br /&gt;
EXPOSE 80&lt;br /&gt;
ENTRYPOINT apachectl -DFOREGROUND&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Docker compose file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cat &amp;gt;docker-compose.yml &amp;lt;&amp;lt;EOF&lt;br /&gt;
version: '3'&lt;br /&gt;
services:&lt;br /&gt;
  apiweb1:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    build: .&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;81:80&amp;quot;&lt;br /&gt;
  apiweb2:&lt;br /&gt;
    image: httpd_1:v1&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;82:80&amp;quot;&lt;br /&gt;
  load-balancer:&lt;br /&gt;
    image: nginx:latest&lt;br /&gt;
    ports:&lt;br /&gt;
      - &amp;quot;80:80&amp;quot;&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run docker-compose; it deploys on the current node only&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker-compose up -d&lt;br /&gt;
WARNING: The Docker Engine you're using is running in swarm mode.&lt;br /&gt;
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.&lt;br /&gt;
To deploy your application across the swarm, use `docker stack deploy`.&lt;br /&gt;
Creating compose_apiweb2_1       ... done&lt;br /&gt;
Creating compose_apiweb1_1       ... done&lt;br /&gt;
Creating compose_load-balancer_1 ... done&lt;br /&gt;
&lt;br /&gt;
docker ps&lt;br /&gt;
CONTAINER ID IMAGE        COMMAND                 CREATED  STATUS   PORTS              NAMES&lt;br /&gt;
14f8b6b10c2d nginx:latest &amp;quot;nginx -g 'daemon of…&amp;quot;  2 minutesUp 2 min 0.0.0.0:80-&amp;gt;80/tcp compose_load-balancer_1&lt;br /&gt;
e9b5b37fe4e5 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:81-&amp;gt;80/tcp compose_apiweb1_1&lt;br /&gt;
28ee22a8eae0 httpd_1:v1   &amp;quot;/bin/sh -c 'apachec…&amp;quot;  2 minutesUp 2 min 0.0.0.0:82-&amp;gt;80/tcp compose_apiweb2_1&lt;br /&gt;
&lt;br /&gt;
# Verify&lt;br /&gt;
curl http://localhost:81&lt;br /&gt;
curl http://localhost:82&lt;br /&gt;
curl http://localhost:80 #nginx&lt;br /&gt;
&lt;br /&gt;
# Prep before deploying the compose file to Swarm. Images also need to be built beforehand,&lt;br /&gt;
# because docker stack does not support building images&lt;br /&gt;
docker-compose down --volumes #stop and remove the containers, networks and volumes&lt;br /&gt;
Stopping compose_load-balancer_1 ... done&lt;br /&gt;
Stopping compose_apiweb1_1       ... done&lt;br /&gt;
Stopping compose_apiweb2_1       ... done&lt;br /&gt;
Removing compose_load-balancer_1 ... done&lt;br /&gt;
Removing compose_apiweb1_1       ... done&lt;br /&gt;
Removing compose_apiweb2_1       ... done&lt;br /&gt;
Removing network compose_default&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Deploy compose to Swarm&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker stack deploy --compose-file docker-compose.yml customcompose-stack #customcompose-stack is a prefix for service name&lt;br /&gt;
Ignoring unsupported options: build&lt;br /&gt;
Creating network customcompose-stack_default&lt;br /&gt;
Creating service customcompose-stack_apiweb1&lt;br /&gt;
Creating service customcompose-stack_apiweb2&lt;br /&gt;
Creating service customcompose-stack_load-balancer&lt;br /&gt;
&lt;br /&gt;
docker stack services customcompose-stack #or&lt;br /&gt;
docker service ls&lt;br /&gt;
ID           NAME                               MODE       REPLICAS IMAGE        PORTS&lt;br /&gt;
k7wwkncov49p customcompose-stack_apiweb1        replicated 0/1      httpd_1:v1   *:81-&amp;gt;80/tcp&lt;br /&gt;
nl0j5folpmha customcompose-stack_apiweb2        replicated 0/1      httpd_1:v1   *:82-&amp;gt;80/tcp&lt;br /&gt;
x6p14gmpjyra customcompose-stack_load-balancer  replicated 1/1      nginx:latest *:80-&amp;gt;80/tcp&lt;br /&gt;
&lt;br /&gt;
docker stack rm customcompose-stack #remove stack&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a Storage Driver = &lt;br /&gt;
Check the Docker compatibility matrix to verify which storage drivers are supported on your platform. Changing the storage driver is destructive: you lose all containers and volumes. Therefore you need to export/backup, then re-import after the storage driver change.&lt;br /&gt;
&lt;br /&gt;
;CentOS&lt;br /&gt;
Device mapper is officially supported on CentOS. It can run on top of a loopback device backed by a file on disk (the default), or directly on a block-storage device that Docker manages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info --format '{{json .Driver}}'&lt;br /&gt;
docker info -f '{{json .}}' | jq .Driver&lt;br /&gt;
docker info | grep Storage&lt;br /&gt;
&lt;br /&gt;
sudo touch /etc/docker/daemon.json&lt;br /&gt;
sudo vi    /etc/docker/daemon.json #additional options are available&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;storage-driver&amp;quot;:&amp;quot;devicemapper&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
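A typo in &amp;lt;code&amp;gt;daemon.json&amp;lt;/code&amp;gt; will prevent the Docker daemon from starting, so it is worth validating the file before restarting. A minimal sketch, assuming &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt; is available (&amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt; works equally well); it writes a local copy rather than touching &amp;lt;code&amp;gt;/etc/docker&amp;lt;/code&amp;gt;&lt;br /&gt;

```shell
# Write a local copy of the storage-driver config and validate it as JSON
# before it ever reaches /etc/docker/daemon.json.
cat >daemon.json <<'EOF'
{
  "storage-driver": "devicemapper"
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json: valid JSON"
```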
&lt;br /&gt;
&lt;br /&gt;
Preserving any current images requires an export/backup and re-import after the storage driver change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker images&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
ls -l /var/lib/docker/devicemapper #new location for storing images&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note, in &amp;lt;code&amp;gt;/var/lib/docker&amp;lt;/code&amp;gt; new directory &amp;lt;code&amp;gt;devicemapper&amp;lt;/code&amp;gt; has been created to store images from now on.&lt;br /&gt;
&lt;br /&gt;
;Update 2019 - Docker Engine 18.09.1&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.&lt;br /&gt;
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.&lt;br /&gt;
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Selecting a logging driver =&lt;br /&gt;
The available [https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers logging drivers] are listed in the Docker documentation. The most popular are:&lt;br /&gt;
*none - No logs are available for the container and docker logs does not return any output.&lt;br /&gt;
*json-file - (default) the logs are formatted as JSON; this is the default logging driver for Docker&lt;br /&gt;
*syslog - Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.&lt;br /&gt;
*journald - Writes log messages to journald. The journald daemon must be running on the host machine.&lt;br /&gt;
*fluentd - Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.&lt;br /&gt;
*awslogs - Writes log messages to Amazon CloudWatch Logs.&lt;br /&gt;
*splunk - Writes log messages to splunk using the HTTP Event Collector.&lt;br /&gt;
*etwlogs - (Windows) Writes log messages as Event Tracing for Windows (ETW) events&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep -i logging&lt;br /&gt;
docker container run -d --name &amp;lt;webjson&amp;gt; --log-driver json-file httpd #per-container setup&lt;br /&gt;
docker logs &amp;lt;webjson&amp;gt;&lt;br /&gt;
&lt;br /&gt;
docker container run -d --name &amp;lt;web&amp;gt; httpd #start new container&lt;br /&gt;
docker logs -f &amp;lt;web&amp;gt;         #display standard-out logs&lt;br /&gt;
docker service logs -f &amp;lt;web&amp;gt; #for swarm, logs from all container replicas&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable the syslog logging driver&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo vi /etc/rsyslog.conf&lt;br /&gt;
#uncomment below&lt;br /&gt;
$ModLoad imudp&lt;br /&gt;
$UDPServerRun 514&lt;br /&gt;
&lt;br /&gt;
sudo systemctl start rsyslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Change the logging driver in &amp;lt;code&amp;gt;/etc/docker/daemon.json&amp;lt;/code&amp;gt;. Note that &amp;lt;code&amp;gt;docker logs&amp;lt;/code&amp;gt; (standard output) won't be available after the change.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;log-driver&amp;quot;: &amp;quot;syslog&amp;quot;,&lt;br /&gt;
  &amp;quot;log-opts&amp;quot;: {&lt;br /&gt;
    &amp;quot;syslog-address&amp;quot;: &amp;quot;udp://172.31.10.1&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
sudo systemctl restart docker&lt;br /&gt;
docker info | grep -i logging&lt;br /&gt;
tail -f /var/log/messages #shows all logging, e.g. access logs for the httpd server&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Docker daemon logs ==&lt;br /&gt;
System level logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# CentOS&lt;br /&gt;
grep -i docker /var/log/messages&lt;br /&gt;
&lt;br /&gt;
# Ubuntu&lt;br /&gt;
sudo journalctl -u docker.service --no-hostname&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr '.MESSAGE'&lt;br /&gt;
sudo journalctl -u docker -o json | jq -cMr 'select(has(&amp;quot;CONTAINER_ID&amp;quot;) | not) | .MESSAGE'&lt;br /&gt;
grep -i docker /var/log/syslog&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Docker container or service logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container logs [OPTIONS] containerID  #single container logs&lt;br /&gt;
docker service   logs [OPTIONS] service|task #aggregate logs across all container replicas deployed in the cluster &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Container life-cycle policies - eg. autostart =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run -d --name web --restart &amp;lt;no(default)|on-failure|unless-stopped|always&amp;gt; httpd&lt;br /&gt;
# --restart -restart on crash, non-zero exit, daemon restart or system reboot, depending on the policy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Definitions:&lt;br /&gt;
* always - always restarts the container; even if stopped manually, restarting the docker daemon will start it again&lt;br /&gt;
* unless-stopped - it will restart container always unless stopped manually by &amp;lt;code&amp;gt;docker container stop&amp;lt;/code&amp;gt;&lt;br /&gt;
* on-failure - restart if container exits with non-zero exit code&lt;br /&gt;
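To confirm which policy a container actually got, &amp;lt;code&amp;gt;docker inspect&amp;lt;/code&amp;gt; exposes it under &amp;lt;code&amp;gt;HostConfig.RestartPolicy&amp;lt;/code&amp;gt;. A sketch parsing a trimmed, canned sample of that output so it runs without a daemon; the container name ''web'' is from the example above&lt;br /&gt;

```shell
# Trimmed sample of `docker inspect web` output (canned, no daemon needed):
cat >inspect.json <<'EOF'
[{"Name": "/web",
  "HostConfig": {"RestartPolicy": {"Name": "unless-stopped", "MaximumRetryCount": 0}}}]
EOF
# On a real host the equivalent one-liner is:
#   docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' web
python3 -c 'import json; print(json.load(open("inspect.json"))[0]["HostConfig"]["RestartPolicy"]["Name"])'
# prints: unless-stopped
```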
&lt;br /&gt;
= Universal Control Plane - UCP =&lt;br /&gt;
It's an application that gives visibility into all operational details of a Swarm cluster when using the Docker EE edition. A 30-day trial is available.&lt;br /&gt;
&lt;br /&gt;
;Communication between Docker Engine, UCP and DTR (Docker Trusted Registry)&lt;br /&gt;
* over TCP/UDP - depends on a port, and whether a response is required, or if a message is a notification&lt;br /&gt;
* IPC - interprocess communication (intra-host), services on the same node&lt;br /&gt;
* API - over TCP, uses the API directly to query or update components in a cluster&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
* [https://docs.docker.com/ee/ucp/ucp-architecture/ UCP architecture]&lt;br /&gt;
&lt;br /&gt;
== Install/uninstall UCP &amp;lt;code&amp;gt;image: docker/ucp&amp;lt;/code&amp;gt; ==&lt;br /&gt;
OS support: &lt;br /&gt;
* UCP 2.2.11 is supported running on RHEL 7.5 and Ubuntu 18.04&lt;br /&gt;
&lt;br /&gt;
For lab purposes we can use e.g. &amp;lt;code&amp;gt;ucp.example.com&amp;lt;/code&amp;gt;; the domain &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt; is included in the UCP and DTR wildcard self-signed certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Install on a manager node&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_MGR_NODE_IP=172.31.101.248&lt;br /&gt;
&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:2.2.15 \&lt;br /&gt;
  install --host-address=$UCP_MGR_NODE_IP --interactive --debug&lt;br /&gt;
&lt;br /&gt;
# --rm  :- remove on exit, because this is only a transitional container&lt;br /&gt;
# -it   :- because we want the installation to be interactive&lt;br /&gt;
# -v    :- link the container with a file on the host&lt;br /&gt;
# --san :- add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com)&lt;br /&gt;
# --host-address    :- IP address or network interface name to advertise to other nodes&lt;br /&gt;
# docker/ucp:2.2.11 :- image version&lt;br /&gt;
# --dns        :- custom DNS servers for the UCP containers&lt;br /&gt;
# --dns-search :- custom DNS search domains for the UCP containers&lt;br /&gt;
# --admin-username &amp;quot;$UCP_USERNAME&amp;quot; --admin-password &amp;quot;$UCP_PASSWORD&amp;quot; #these seem unsupported, although they are in the guide&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not provided you will be asked for: &lt;br /&gt;
* Admin password during the process&lt;br /&gt;
* You may enter additional aliases (SANs) now or press enter to proceed with the above list:&lt;br /&gt;
** Additional aliases: ucp ucp.example.com&lt;br /&gt;
 DEBU[0062] User entered: ucp ucp.ciscolinux.co.uk&lt;br /&gt;
 DEBU[0062] Hostnames: [host1c.mylabserver.com 127.0.0.1 172.17.0.1 172.31.101.248 ucp ucp.ciscolinux.co.uk] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You may want to add DNS entries in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; for&lt;br /&gt;
* ''ucp'' or ''ucp.example.com'' pointing to manager public ip&lt;br /&gt;
* ''dtr'' or ''dtr.example.com'' pointing at a worker node public IP. &lt;br /&gt;
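The entries could look like the following (IPs are the lab placeholders used elsewhere on this page). The sketch writes to a local file so it is safe to run anywhere; append the lines to &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; for real use&lt;br /&gt;

```shell
# Lab /etc/hosts entries; IPs are placeholders from this page's examples.
cat >hosts.example <<'EOF'
172.31.101.248 ucp ucp.example.com
172.31.107.250 dtr dtr.example.com
EOF
# For real use: sudo tee -a /etc/hosts < hosts.example
grep -c 'example.com' hosts.example   # prints 2
```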
&lt;br /&gt;
&lt;br /&gt;
;Verify&lt;br /&gt;
* connect to https://ucp.example.com:443. &lt;br /&gt;
* &amp;lt;code&amp;gt;docker ps&amp;lt;/code&amp;gt; should now show a number of containers running; they need to reach each other, which is why we added the &amp;lt;code&amp;gt;hosts&amp;lt;/code&amp;gt; entries.&lt;br /&gt;
&lt;br /&gt;
;Uninstall&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --rm -it --name ucp \&lt;br /&gt;
  -v /var/run/docker.sock:/var/run/docker.sock \&lt;br /&gt;
  docker/ucp uninstall-ucp --interactive&lt;br /&gt;
&lt;br /&gt;
INFO[0000] Your engine version 18.09.1, build 4c52b90 (4.15.0-1031-aws) is compatible with UCP 3.1.2 (b822777) &lt;br /&gt;
INFO[0000] We're about to uninstall from this swarm cluster. UCP ID: t0ltwwcw5tdbtjo2fxlzmj8p4 &lt;br /&gt;
Do you want to proceed with the uninstall? (y/n): y&lt;br /&gt;
INFO[0000] Uninstalling UCP on each node...             &lt;br /&gt;
INFO[0031] UCP has been removed from this cluster successfully. &lt;br /&gt;
INFO[0033] Removing UCP Services&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Install DTR Docker Trusted Registry &amp;lt;code&amp;gt;image: docker/dtr&amp;lt;/code&amp;gt; ==&lt;br /&gt;
On single-core systems it's recommended to wait 5 minutes after the UCP deployment to free up CPU cycles. You can watch the load, which may peak at around 1.0, using the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Connect to the UCP service at https://ucp.example.com and log in with the credentials created earlier. Upload a license.lic file.&lt;br /&gt;
Go to Admin Settings &amp;gt; Docker Trusted Registry &amp;gt; pick one of the UCP nodes [worker]&lt;br /&gt;
You may disable TLS verification when using a self-signed certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run the given command on the node where you want to install DTR. In a lab environment &amp;lt;code&amp;gt;UCP_NODE&amp;lt;/code&amp;gt; can cause a few issues. To avoid port conflicts on :80 and :443, use a different node than the one UCP is installed on, e.g. the DNS name ''user2c.mylabserver.com'' or a private IP. &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export UCP_NODE=wkr-172.31.107.250 #for convenience, to avoid port conflicts on :80,:443 use a worker IP&lt;br /&gt;
export UCP_USERNAME=ucp-admin&lt;br /&gt;
export UCP_PASSWORD=ucp-admin&lt;br /&gt;
export UCP_URL=https://ucp.example.com:443 #use a name covered by the UCP certificate to avoid SSL name-validation issues&lt;br /&gt;
docker pull docker/dtr&lt;br /&gt;
&lt;br /&gt;
# Optional. Download UCP public certificate&lt;br /&gt;
curl -k https://ucp.ciscolinux.co.uk/ca &amp;gt; ucp-ca.pem&lt;br /&gt;
&lt;br /&gt;
docker container run -it --rm docker/dtr install \&lt;br /&gt;
  --ucp-node $UCP_NODE --ucp-url $UCP_URL --debug \&lt;br /&gt;
  --ucp-username $UCP_USERNAME --ucp-password $UCP_PASSWORD \&lt;br /&gt;
  --ucp-insecure-tls  # --ucp-ca &amp;quot;$(cat ucp-ca.pem)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# --ucp-node :- hostname/IP of the UCP node (any node managed by UCP) to deploy DTR. Random by default&lt;br /&gt;
# --ucp-url  :- the UCP URL including domain and port.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If not specified, it will ask for: &lt;br /&gt;
* ucp-password: you know it from the UCP installation step&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Significant installation logs&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
..&lt;br /&gt;
INFO[0006] Only one available UCP node detected. Picking UCP node 'user2c.labserver.com' &lt;br /&gt;
..&lt;br /&gt;
INFO[0006] verifying [80 443] ports on user2c.labserver.com &lt;br /&gt;
..&lt;br /&gt;
INFO[0000] Using default overlay subnet: 10.1.0.0/24    &lt;br /&gt;
INFO[0000] Creating network: dtr-ol                     &lt;br /&gt;
INFO[0000] Connecting to network: dtr-ol                &lt;br /&gt;
..&lt;br /&gt;
INFO[0008] Generated TLS certificate. dnsNames=&amp;quot;[*.com *.*.com example.com *.dtr *.*.dtr]&amp;quot; domains=&amp;quot;[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]&amp;quot; ipAddresses=&amp;quot;[172.17.0.1]&amp;quot;&lt;br /&gt;
..&lt;br /&gt;
INFO[0073] You can use flag '--existing-replica-id 10e168476b49' when joining other replicas to your Docker Trusted Registry Cluster &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by logging in to https://dtr.example.com&lt;br /&gt;
The DTR installation process above also installed a number of containers named &amp;lt;code&amp;gt;ucp-agent&amp;lt;/code&amp;gt; on the manager/worker nodes, and a number of containers on the dedicated DTR node. &lt;br /&gt;
You can verify DTR by logging in to https://dtr.example.com with the UCP credentials &amp;lt;code&amp;gt;ucp-admin&amp;lt;/code&amp;gt; and the same password, if you haven't changed any of the commands above. You should then be presented with a registry.docker.io-like theme. Any images stored there will be trusted from the perspective of our organisation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Verify by going to UCP https://ucp.example.com, admin settings &amp;gt; Docker Trusted Registry&lt;br /&gt;
[[File:Ucp-dtr-in-admin.png|none|400px|left|Ucp-dtr-in-admin]]&lt;br /&gt;
&lt;br /&gt;
== Backup UCP and DTR  configuration ==&lt;br /&gt;
This is built into UCP. The process starts a special container that exports the UCP configuration to a tar file. This can be run as a &amp;lt;code&amp;gt;cron&amp;lt;/code&amp;gt; job.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup &amp;gt; backup.tar&lt;br /&gt;
# --rm it's a transitional container&lt;br /&gt;
# -i run interactively&lt;br /&gt;
&lt;br /&gt;
# On first run it will error with --id m79xxxxxxxxx, asking you to re-run the command with this id.&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm -i --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp restore --id m79xxx &amp;lt; backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
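A hypothetical crontab entry for the nightly backup (the schedule, target path and container name are assumptions; note that &amp;lt;code&amp;gt;%&amp;lt;/code&amp;gt; is special in crontab and must be escaped)&lt;br /&gt;

```shell
# 02:00 daily UCP backup; in cron, % must be written as \%.
cat >ucp-backup.cron <<'EOF'
0 2 * * * docker container run --log-driver none --rm -i --name ucp-backup -v /var/run/docker.sock:/var/run/docker.sock docker/ucp backup > /backups/ucp-$(date +\%F).tar
EOF
# Install with: crontab ucp-backup.cron
grep -c '^0 2 ' ucp-backup.cron   # prints 1
```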
&lt;br /&gt;
;DTR&lt;br /&gt;
During a backup DTR will not be available.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr backup  --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;gt; dtr-backup.tar&lt;br /&gt;
&lt;br /&gt;
# you will be asked for:&lt;br /&gt;
# Choose a replica to back up from: enter&lt;br /&gt;
&lt;br /&gt;
# Restore command&lt;br /&gt;
docker container run --log-driver none --rm docker/dtr restore --ucp-insecure-tls --ucp-url &amp;lt;ucp_server_dns:443&amp;gt; --ucp-username admin --ucp-password &amp;lt;password&amp;gt; &amp;lt; dtr-backup.tar&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== UCP RBAC ==&lt;br /&gt;
The main concept is:&lt;br /&gt;
* administrators can make changes to the UCP swarm/kubernetes, User Management, Organisations, Teams and Roles&lt;br /&gt;
* users - range of access from Full Control of resources to no access&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Ucp-rbac.png|500px|none|left|Ucp-rbac]]&lt;br /&gt;
&lt;br /&gt;
Note that only the Scheduler role allows access to view nodes, and of course to schedule workloads.&lt;br /&gt;
&lt;br /&gt;
= UCP Client bundle =&lt;br /&gt;
The UCP client bundle lets you export a certificate and environment settings that point the docker client at UCP, in order to use the cluster and create images and services.&lt;br /&gt;
&lt;br /&gt;
;Download bundle&lt;br /&gt;
# Create a user with the privileges that you wish the docker client to run as&lt;br /&gt;
# Download a client bundle from User Profile &amp;gt; Client bundle &amp;gt; + New Client Bundle&lt;br /&gt;
# The file &amp;lt;code&amp;gt;ucp-bundle-[username].zip&amp;lt;/code&amp;gt; will get downloaded &amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
unzip ucp-bundle-bob.zip &lt;br /&gt;
Archive:  ucp-bundle-bob.zip&lt;br /&gt;
 extracting: ca.pem                  &lt;br /&gt;
 extracting: cert.pem                &lt;br /&gt;
 extracting: key.pem                 &lt;br /&gt;
 extracting: cert.pub                &lt;br /&gt;
 extracting: env.sh                  &lt;br /&gt;
 extracting: env.ps1                 &lt;br /&gt;
 extracting: env.cmd     &lt;br /&gt;
&lt;br /&gt;
cat env.sh &lt;br /&gt;
export COMPOSE_TLS_VERSION=TLSv1_2&lt;br /&gt;
export DOCKER_TLS_VERIFY=1&lt;br /&gt;
export DOCKER_CERT_PATH=&amp;quot;$PWD&amp;quot;&lt;br /&gt;
export DOCKER_HOST=tcp://3.16.143.49:443&lt;br /&gt;
#&lt;br /&gt;
# Bundle for user bob&lt;br /&gt;
# UCP Instance ID t0ltwwcw5tdbtjo2fxlzmj8p4&lt;br /&gt;
#&lt;br /&gt;
# This admin cert will also work directly against Swarm and the individual&lt;br /&gt;
# engine proxies for troubleshooting.  After sourcing this env file, use&lt;br /&gt;
# &amp;quot;docker info&amp;quot; to discover the location of Swarm managers and engines.&lt;br /&gt;
# and use the --host option to override $DOCKER_HOST&lt;br /&gt;
#&lt;br /&gt;
# Run this command from within this directory to configure your shell:&lt;br /&gt;
# eval $(&amp;lt;env.sh)&lt;br /&gt;
&lt;br /&gt;
eval $(&amp;lt;env.sh) # apply ucp-bundle&lt;br /&gt;
&lt;br /&gt;
docker images # to list UCP managed images&lt;br /&gt;
&amp;lt;/source&amp;gt;&amp;lt;/p&amp;gt;&lt;br /&gt;
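The &amp;lt;code&amp;gt;eval $(&amp;lt;env.sh)&amp;lt;/code&amp;gt; idiom simply sources those exports into the current shell. A minimal stand-in, with the IP being this lab's placeholder (&amp;lt;code&amp;gt;$(&amp;lt;file)&amp;lt;/code&amp;gt; is bash-specific, so the POSIX-safe &amp;lt;code&amp;gt;$(cat ...)&amp;lt;/code&amp;gt; form is used here)&lt;br /&gt;

```shell
# Minimal stand-in for the bundle's env.sh:
cat >env.sh <<'EOF'
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://172.31.101.248:443
EOF
eval "$(cat env.sh)"   # equivalent to eval $(<env.sh) in bash
echo "$DOCKER_HOST"    # prints: tcp://172.31.101.248:443
```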
# &amp;lt;li value=&amp;quot;4&amp;quot;&amp;gt; In my lab I had to update DOCKER_HOST from public IP to private IP &amp;lt;/li&amp;gt;&lt;br /&gt;
Err: error during connect: Get https://3.16.143.49:443/v1.39/images/json: x509: certificate is valid for 127.0.0.1, 172.31.101.248, 172.17.0.1, not 3.16.143.49&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_HOST=tcp://172.31.101.248:443&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;5&amp;quot;&amp;gt; Verify if you have permissions to create a service&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker service create --name test111 httpd&lt;br /&gt;
Error response from daemon: access denied:&lt;br /&gt;
no access to Service Create, on collection swarm&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# &amp;lt;li value=&amp;quot;6&amp;quot;&amp;gt; Add Grants to the user&amp;lt;/li&amp;gt;&lt;br /&gt;
## Go to User Management &amp;gt; Grants &amp;gt; Create Grant&lt;br /&gt;
## Under Roles, select Full Control&lt;br /&gt;
## Select Subjects, All Users, select the user&lt;br /&gt;
## Click Create&lt;br /&gt;
# Re-run the service create command, which should now succeed. The service can now also be managed within the UCP console.&lt;br /&gt;
&lt;br /&gt;
= Docker Secure Registry | image: registry =&lt;br /&gt;
Docker provides a special image that can be used to host docker images, both internally and externally; the steps below therefore include securing access with an SSL certificate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create certificate&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir ~/{auth,certs}&lt;br /&gt;
# create a self-signed certificate for the Docker registry&lt;br /&gt;
cd ~/certs&lt;br /&gt;
openssl req -newkey rsa:4096 -nodes -sha256 -keyout repo-key.pem -x509 -days 365 -out repo-cer.pem -subj /CN=myrepo.com&lt;br /&gt;
# trusted-certs docker client directory; the docker client looks for trusted certs when connecting to a remote registry&lt;br /&gt;
sudo mkdir -p /etc/docker/certs.d/myrepo.com:5000 #port 5000 is the default port&lt;br /&gt;
sudo cp repo-cer.pem /etc/docker/certs.d/myrepo.com:5000/ca.crt &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;ca.crt&amp;lt;/code&amp;gt; is the default/required CA-root trust-cert file name that the docker client (docker login API) uses when connecting to a remote registry. In our case we trust any cert signed by CA=ca.crt when connecting to myrepo.com:5000, as the same (self-signed) certs got installed in the &amp;lt;code&amp;gt;registry:2&amp;lt;/code&amp;gt; container via the &amp;lt;code&amp;gt;-v /certs/&amp;lt;/code&amp;gt; option.&lt;br /&gt;
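To double-check what the client will trust, the generated certificate can be re-created in a scratch directory and inspected with openssl (2048-bit here just to keep the sketch fast; the CN matches the registry host name)&lt;br /&gt;

```shell
# Re-create the self-signed registry cert in a scratch dir and inspect it.
mkdir -p scratch
openssl req -newkey rsa:2048 -nodes -sha256 -keyout scratch/repo-key.pem \
        -x509 -days 365 -out scratch/repo-cer.pem -subj /CN=myrepo.com 2>/dev/null
openssl x509 -noout -subject -enddate -in scratch/repo-cer.pem
# subject contains CN = myrepo.com; notAfter is ~365 days out
```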
&lt;br /&gt;
Optionally, for development purposes, add the domain ''myrepo.com'' to the hosts file, bound to a local interface IP address.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo -i; echo &amp;quot;172.16.10.10 myrepo.com&amp;quot; &amp;gt;&amp;gt; /etc/hosts; exit&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Optionally add an insecure-registries entry&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
sudo vi /etc/docker/daemon.json&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;insecure-registries&amp;quot; : [ &amp;quot;myrepo.com:5000&amp;quot;]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pull special Docker Registry image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
mkdir -p ~/auth #authentication directory, used when deploying local repository&lt;br /&gt;
docker pull registry:2&lt;br /&gt;
docker run --entrypoint htpasswd registry:2 -Bbn reg-admin Passw0rd123 &amp;gt; ~/auth/htpasswd&lt;br /&gt;
# -Bbn        -parameters&lt;br /&gt;
# reg-admin   -user&lt;br /&gt;
# Passw0rd123 -password string for basic htpasswd authentication method, the hashed password will be displayed to STDOUT&lt;br /&gt;
&lt;br /&gt;
$ cat ~/auth/htpasswd&lt;br /&gt;
reg-admin:$2y$05$DnTWDHp7uTwaDrw4sXpUbuDDIlLwu3c8MEMsHPjK/AcUMdK/TD6fO&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run Registry container&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
cd ~&lt;br /&gt;
docker run -d -p 5000:5000 --name myrepo \&lt;br /&gt;
       -v $(pwd)/certs:/certs \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/repo-cer.pem \&lt;br /&gt;
       -e REGISTRY_HTTP_TLS_KEY=/certs/repo-key.pem \&lt;br /&gt;
       -v $(pwd)/auth:/auth \&lt;br /&gt;
       -e REGISTRY_AUTH=htpasswd \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_REALM=&amp;quot;Registry Realm&amp;quot; \&lt;br /&gt;
       -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \&lt;br /&gt;
       registry:2&lt;br /&gt;
# -v                               -indicate where our certificates will be mounted within a container&lt;br /&gt;
# -e REGISTRY_HTTP_TLS_CERTIFICATE -path to cert inside the container&lt;br /&gt;
# -v $(pwd)/auth:/auth             -mounting authentication directory where a file with password is&lt;br /&gt;
# -e REGISTRY_AUTH htpasswd        -setting up to use 'htpasswd' authentication method&lt;br /&gt;
# registry:2                       -image name, positional parameter  &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Verify&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker pull  alpine&lt;br /&gt;
docker tag   alpine     myrepo.com:5000/aa-alpine #create a tagged image (copy) on the local filesystem; &lt;br /&gt;
     # it must be prefixed with the private registry name, '/', then the image name you want to upload as&lt;br /&gt;
&lt;br /&gt;
docker logout  # if logged in to another registry&lt;br /&gt;
docker login myrepo.com:5000 #login to the registry that runs as a container; stays logged in until logout/reboot&lt;br /&gt;
docker login myrepo.com:5000 --username=reg-admin --password Passw0rd123&lt;br /&gt;
docker push  myrepo.com:5000/aa-alpine        &lt;br /&gt;
&lt;br /&gt;
docker image rmi alpine myrepo.com:5000/aa-alpine #delete image stored locally&lt;br /&gt;
docker pull             myrepo.com:5000/aa-alpine #pull image from a container repository&lt;br /&gt;
&lt;br /&gt;
# List private-repository images&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;aa-alpine&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
wget --no-check-certificate --http-user=reg-admin --http-password=password https://myrepo.com:5000/v2/_catalog&lt;br /&gt;
cat _catalog                                                                                                                                                                       &lt;br /&gt;
{&amp;quot;repositories&amp;quot;:[&amp;quot;my-alpine&amp;quot;,&amp;quot;myalpine&amp;quot;,&amp;quot;new-aa-busybox&amp;quot;]}&lt;br /&gt;
&lt;br /&gt;
# List tags&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/tags/list&lt;br /&gt;
{&amp;quot;name&amp;quot;:&amp;quot;myalpine&amp;quot;,&amp;quot;tags&amp;quot;:[&amp;quot;latest&amp;quot;]}&lt;br /&gt;
curl --insecure -u &amp;quot;reg-admin:password&amp;quot; https://myrepo.com:5000/v2/aa-alpine/manifests/latest #entire image metadata&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
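The registry API answers with plain JSON, so the catalog can be post-processed. A sketch using a canned response (pipe the real &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt; output instead; &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt; assumed, &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt; works too)&lt;br /&gt;

```shell
# Canned /v2/_catalog response; on a live registry use the curl call above.
catalog='{"repositories":["aa-alpine","my-alpine"]}'
echo "$catalog" | python3 -c 'import json,sys; [print(r) for r in json.load(sys.stdin)["repositories"]]'
# prints each repository name on its own line
```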
&lt;br /&gt;
&lt;br /&gt;
Note. There is no easy way to delete images from the registry:2 container.&lt;br /&gt;
&lt;br /&gt;
= Docker push =&lt;br /&gt;
;Login to a docker repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
docker info | grep -B1 Registry #check if you are logged in to the Docker Hub registry&lt;br /&gt;
WARNING: No swap limit support&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&lt;br /&gt;
docker login&lt;br /&gt;
&lt;br /&gt;
docker info | grep -B1 Registry&lt;br /&gt;
Username: pio2pio&lt;br /&gt;
Registry: https://index.docker.io/v1/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Tag and push an image&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# docker tag local-image:tagname new-repo:tagname  #create a local copy of an image&lt;br /&gt;
# docker push new-repo:tagname                     &lt;br /&gt;
&lt;br /&gt;
docker pull busybox&lt;br /&gt;
docker tag busybox:latest pio2pio/testrepo&lt;br /&gt;
docker push pio2pio/testrepo&lt;br /&gt;
The push refers to repository [docker.io/pio2pio/testrepo]&lt;br /&gt;
683f499823be: Mounted from library/busybox &lt;br /&gt;
latest: digest: sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510 &lt;br /&gt;
size: 527&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; Docker Content Trust&lt;br /&gt;
By default, all images are implicitly trusted by your Docker daemon, but you can configure your systems to trust ONLY image tags that have been signed.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
export DOCKER_CONTENT_TRUST=1 #enable the system to sign an image during the push process&lt;br /&gt;
docker build -t myrepo.com:5000/untrusted.latest .&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest&lt;br /&gt;
...&lt;br /&gt;
No tag specified, skipping trust metadata push&lt;br /&gt;
# 2nd attempt, with a tag specified now&lt;br /&gt;
docker push myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&lt;br /&gt;
docker pull myrepo.com:5000/untrusted.latest:latest&lt;br /&gt;
Error: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Errors explained:&lt;br /&gt;
Err: No tag specified, skipping trust metadata push&amp;lt;br /&amp;gt;&lt;br /&gt;
* Explanation: an image is signed by tag, therefore if you skip the tag it won't get signed and the trust metadata push is skipped.&lt;br /&gt;
Err: error contacting notary server: x509: certificate signed by unknown authority&lt;br /&gt;
* when uploading, the image gets uploaded but is not trusted because it is signed with a self-signed CA&lt;br /&gt;
* when downloading, with &amp;lt;code&amp;gt;DOCKER_CONTENT_TRUST=1&amp;lt;/code&amp;gt; enabled, the image cannot be downloaded because it is untrusted&lt;br /&gt;
&lt;br /&gt;
= Theory =&lt;br /&gt;
== What is a docker ==&lt;br /&gt;
Docker is a container runtime platform, whereas Swarm is a container orchestration platform.&lt;br /&gt;
&lt;br /&gt;
== Security ==&lt;br /&gt;
=== Mutually Authenticated TLS ===&lt;br /&gt;
Docker Swarm is ''secure by default'': all communication is encrypted. ''Mutually Authenticated TLS'' is the implementation chosen to secure that communication. Whenever a swarm is initialised, a self-signed CA is generated; it issues certificates to every node (manager or worker) to facilitate registration (joining as a manager or worker) and to secure the subsequent communications. A transient container is brought up every time a certificate is needed. MTLS communication runs between managers and workers.&lt;br /&gt;
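What the built-in CA does can be mimicked with openssl: a self-signed CA issues a node certificate, and any peer holding the CA cert can verify it. Purely an illustrative sketch; the names are assumptions, not swarm's internal layout&lt;br /&gt;

```shell
# Self-signed CA (analogous to what `docker swarm init` generates internally):
mkdir -p mtls
openssl req -x509 -newkey rsa:2048 -nodes -keyout mtls/ca-key.pem \
        -out mtls/ca.pem -days 30 -subj /CN=swarm-ca 2>/dev/null
# Node key + CSR (what a joining node would present):
openssl req -newkey rsa:2048 -nodes -keyout mtls/node-key.pem \
        -out mtls/node.csr -subj /CN=worker-1 2>/dev/null
# The CA signs the node certificate:
openssl x509 -req -in mtls/node.csr -CA mtls/ca.pem -CAkey mtls/ca-key.pem \
        -CAcreateserial -out mtls/node.pem -days 30 2>/dev/null
# Any peer with ca.pem can now authenticate the node:
openssl verify -CAfile mtls/ca.pem mtls/node.pem   # prints: mtls/node.pem: OK
```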
&lt;br /&gt;
== [[Linux Namespaces and Control Groups]] ==&lt;br /&gt;
&lt;br /&gt;
== Difference between docker attach and docker exec ==&lt;br /&gt;
;Attach&lt;br /&gt;
The docker attach command allows you to attach to a running container using the container's ID or name, either to view its ongoing output or to control it interactively. You can attach to the same contained process multiple times simultaneously, screen-sharing style, or quickly view the progress of your detached process.&lt;br /&gt;
&lt;br /&gt;
The command docker attach is for attaching to the existing process. So when you exit, you exit the existing process.&lt;br /&gt;
&lt;br /&gt;
If we use docker attach, we can use only one instance of the shell. So if we want to open a new terminal with a new instance of the container's shell, we just need to run docker exec.&lt;br /&gt;
&lt;br /&gt;
If the docker container was started using the /bin/bash command, you can access it using attach; if not, then you need to create a bash instance inside the container using exec. Attach isn't for running an extra thing in a container, it's for attaching to the running process.&lt;br /&gt;
&lt;br /&gt;
To stop a container, use CTRL-c. This key sequence sends SIGKILL to the container. If --sig-proxy is true (the default), CTRL-c sends a SIGINT to the container instead. You can detach from a container and leave it running using the CTRL-p CTRL-q key sequence.&lt;br /&gt;
&lt;br /&gt;
;exec&lt;br /&gt;
&lt;br /&gt;
&amp;quot;docker exec&amp;quot; is specifically for running new things in an already started container, be it a shell or some other process. The docker exec command runs a new command in a running container.&lt;br /&gt;
&lt;br /&gt;
The command started using docker exec only runs while the container's primary process (PID 1) is running, and it is not restarted if the container is restarted.&lt;br /&gt;
&lt;br /&gt;
The exec command works only on an already running container. If the container is currently stopped, you need to start it first. You can then run any command in the running container just by knowing its ID (or name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
docker exec &amp;lt;container_id_or_name&amp;gt; echo &amp;quot;Hello from container!&amp;quot;&lt;br /&gt;
docker run -it -d shykes/pybuilder /bin/bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most important thing here is the -d option, which stands for detached. It means that the command you initially provided to the container (/bin/bash) will run in the background and the container will not stop immediately.&lt;br /&gt;
&lt;br /&gt;
= Dockerfile - python =&lt;br /&gt;
* [https://luis-sena.medium.com/creating-the-perfect-python-dockerfile-51bdec41f1c8 perfect python dockerfile] Medium&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://docs.docker.com/v1.8/installation/ubuntulinux/ Ubuntu installation] official website&lt;br /&gt;
*[https://docs.docker.com/engine/admin/systemd/ PROXY settings for systemd]&lt;br /&gt;
*[http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/ Docker RUN vs CMD vs ENTRYPOINT]&lt;br /&gt;
*[https://vsupalov.com/docker-arg-vs-env/ docker ARG vs ENV]&lt;br /&gt;
*[https://www.fromlatest.io/#/ Docker online linter]&lt;br /&gt;
*[https://hub.docker.com/r/portainer/portainer/ portainer] Monitor your containers via Web GUI&lt;br /&gt;
*[https://treescale.com/ treescale.com] Free private Docker registry&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
	<entry>
		<id>http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7023</id>
		<title>HashiCorp/Vagrant</title>
		<link rel="alternate" type="text/html" href="http://wiki.ciscolinux.co.uk/index.php?title=HashiCorp/Vagrant&amp;diff=7023"/>
		<updated>2024-05-29T07:54:25Z</updated>

		<summary type="html">&lt;p&gt;Pio2pio: /* Install | Changelog */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
Vagrant is configured on a per-project basis. Each of these projects has its own Vagrantfile. The Vagrantfile is a text file that Vagrant reads to set up our environment: it describes what OS, how much RAM, what software to install, etc. You can version control this file.&lt;br /&gt;
&lt;br /&gt;
= Install | [https://github.com/hashicorp/vagrant/blob/v2.2.10/CHANGELOG.md Changelog] =&lt;br /&gt;
Download or upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Install using Ubuntu package manager (2024)&lt;br /&gt;
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg&lt;br /&gt;
echo &amp;quot;deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&amp;quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list&lt;br /&gt;
apt-cache policy vagrant&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt install vagrant&lt;br /&gt;
&lt;br /&gt;
# Install downloading a package from sources (2022)&lt;br /&gt;
LATEST=$(curl -s https://api.github.com/repos/hashicorp/vagrant/tags | jq -r '.[].name' | head -n1 | tr -d v); echo $LATEST&lt;br /&gt;
VERSION=${LATEST:=2.2.18}; &lt;br /&gt;
wget https://releases.hashicorp.com/vagrant/${VERSION}/vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
unzip vagrant_${VERSION}_linux_amd64.zip&lt;br /&gt;
sudo install vagrant /usr/bin/vagrant&lt;br /&gt;
#sudo dpkg -i vagrant_${VERSION}_x86_64.deb&lt;br /&gt;
#sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -f   # resolve missing dependencies&lt;br /&gt;
&lt;br /&gt;
# Fix plugins if needed&lt;br /&gt;
vagrant plugin update&lt;br /&gt;
vagrant plugin repair&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Installing Ruby is recommended, as the configuration within the '''Vagrantfile''' is written in the Ruby language.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install ruby&lt;br /&gt;
sudo gem install bundler&lt;br /&gt;
sudo gem update  bundler    # if update needed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Repair plugins after the upgrade&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant plugin repair    # use first&lt;br /&gt;
vagrant plugin expunge --reinstall&lt;br /&gt;
vagrant plugin update    # then update broken plugin&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Images aka &amp;lt;code&amp;gt;box&amp;lt;/code&amp;gt; management =&lt;br /&gt;
Vagrant comes with preconfigured image repositories.&lt;br /&gt;
;Manage boxes&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box [list | add | remove] [--help]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Add a box (image) into local repository&lt;br /&gt;
These are standard VMs from providers in VirtualBox, VMware or Hyper-V format, taken from a given repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box add hashicorp/precise64      #user: hashicorp boximage: precise64, this is preconfigured repository&lt;br /&gt;
vagrant box add ubuntu/xenial64&lt;br /&gt;
vagrant box add ubuntu/xenial64    --box-version 20170618.0.0 --provider virtualbox&lt;br /&gt;
vagrant box add bento/ubuntu-18.04 --box-version 201812.27.0  --provider hyperv&lt;br /&gt;
&lt;br /&gt;
# Box can be specified via URLs or local file paths, Virtualbox can only nest 32bit machines&lt;br /&gt;
vagrant box add --force ubuntu/14.04      https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box&lt;br /&gt;
vagrant box add --force ubuntu/14.04-i386 https://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-i386-vagrant-disk1.box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Windows images&lt;br /&gt;
* devopsgroup-io/windows_server-2012r2-standard-amd64-nocm&lt;br /&gt;
* peru/windows-server-2016-standard-x64-eval&lt;br /&gt;
* scotch/box&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Update a box to the latest version&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box update --box ubuntu/bionic64&lt;br /&gt;
Checking for updates to 'ubuntu/bionic64'&lt;br /&gt;
Latest installed version: 20190718.0.0&lt;br /&gt;
Version constraints: &amp;gt; 20190718.0.0&lt;br /&gt;
Provider: virtualbox&lt;br /&gt;
Updating 'ubuntu/bionic64' with provider 'virtualbox' from version&lt;br /&gt;
'20190718.0.0' to '20200124.0.0'...&lt;br /&gt;
Loading metadata for box 'https://vagrantcloud.com/ubuntu/bionic64'&lt;br /&gt;
Adding box 'ubuntu/bionic64' (v20200124.0.0) for provider: virtualbox&lt;br /&gt;
Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200124.0.0/providers/virtualbox.box&lt;br /&gt;
Download redirected to host: cloud-images.ubuntu.com&lt;br /&gt;
&lt;br /&gt;
$&amp;gt; vagrant box list&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190411.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20190718.0.0)&lt;br /&gt;
ubuntu/bionic64                                          (virtualbox, 20200124.0.0) # &amp;lt;- new downloaded&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Delete all images (aka boxes)&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant box prune&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= vagrant init - your first project =&lt;br /&gt;
;Configure Vagrantfile to use the box as your base system&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot;&lt;br /&gt;
 config.vm.hostname = &amp;quot;ubuntu&amp;quot; #hostname, requires reload&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Create Vagrant project, by creating ''Vagrantfile'' in your current directory&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant init                    # initialises a project&lt;br /&gt;
vagrant init ubuntu/xenial64    # initialises official Ubuntu 16.04 LTS (Xenial Xerus) Daily Build&lt;br /&gt;
vagrant init ubuntu/bionic64    #supports only VirtualBox provider&lt;br /&gt;
vagrant init bento/ubuntu-18.04 #supports variety of providers&lt;br /&gt;
&lt;br /&gt;
#Windows&lt;br /&gt;
vagrant init devopsgroup-io/windows_server-2012r2-standard-amd64-nocm #Windows 2012r2, VirtualBox only; cannot ssh&lt;br /&gt;
vagrant init peru/windows-server-2016-standard-x64-eval               #Windows 2016, halt works&lt;br /&gt;
vagrant init gusztavvargadr/windows-server                            #Windows 2019, full integration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Power up your Vagrant box&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;SSH to the box. Below is an example of nested virtualisation: a 64-bit VM (host) runs a 32-bit guest VM&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
piotr@vm-ubuntu64:~/git/vagrant$ vagrant ssh    #default password is &amp;quot;vagrant&amp;quot;&lt;br /&gt;
vagrant@vagrant-ubuntu-precise-32:~$ w&lt;br /&gt;
13:08:35 up 15 min,  1 user,  load average: 0.06, 0.31, 0.54&lt;br /&gt;
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT&lt;br /&gt;
vagrant  pts/0    10.0.2.2         13:02    1.00s  4.63s  0.09s w&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Shared directory between the Vagrant VM and a hypervisor provider&lt;br /&gt;
The Vagrant VM shares a directory, mounted at &amp;lt;tt&amp;gt;/vagrant&amp;lt;/tt&amp;gt;, with the directory on the host containing your Vagrantfile. This can be manually mounted from within the VM as long as the shared directory is set up in the GUI.&lt;br /&gt;
&lt;br /&gt;
Eg. vm_name &amp;gt; Settings &amp;gt; Shared Folders &amp;gt; Name: vagrant | Path: /home/piotr/vm_name&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
 sudo mount -t vboxsf -o uid=1000 vagrant /vagrant # first arg 'vagrant' refers to the GUI setting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant --debug up&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Nesting VMs ==&lt;br /&gt;
The error below occurs because VirtualBox cannot run a nested 64-bit VirtualBox VM. Spinning up a 64-bit VM stops with an error that no 64-bit CPU could be found. Update: [https://forums.virtualbox.org/viewtopic.php?f=1&amp;amp;t=90831 VirtualBox 6.x Nested virtualization, VT-x/AMD-V in the guest].&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error:&lt;br /&gt;
 Timed out while waiting for the machine to boot. This means that&lt;br /&gt;
 Vagrant was unable to communicate with the guest machine within&lt;br /&gt;
 the configured (&amp;quot;config.vm.boot_timeout&amp;quot; value) time period.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Manage power states =&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant suspend&amp;lt;/code&amp;gt; - saves the current running state of the machine and stop it&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant halt&amp;lt;/code&amp;gt; - gracefully shuts down the guest operating system and power down the guest machine&lt;br /&gt;
*&amp;lt;code&amp;gt;vagrant destroy&amp;lt;/code&amp;gt; - removes all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks&lt;br /&gt;
&lt;br /&gt;
= Snapshots =&lt;br /&gt;
You can easily save snapshots.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Get status&lt;br /&gt;
$ vagrant status&lt;br /&gt;
Current machine states:&lt;br /&gt;
default                   poweroff (virtualbox) # &amp;lt;- 'default' it's machine name&lt;br /&gt;
                                                # in multi-vm Vagrant config file&lt;br /&gt;
The VM is powered off. To restart the VM, simply run `vagrant up`&lt;br /&gt;
&lt;br /&gt;
# List&lt;br /&gt;
vagrant snapshot list&lt;br /&gt;
==&amp;gt; default: &lt;br /&gt;
11_b4-upgradeVbox-stopped&lt;br /&gt;
12_Dec01_stopped&lt;br /&gt;
&lt;br /&gt;
# Save&lt;br /&gt;
                        &amp;lt;nameOfvm&amp;gt; &amp;lt;snapshot-name&amp;gt; &lt;br /&gt;
vagrant snapshot save    default    13_Dec30_external-eks_stopped&lt;br /&gt;
&lt;br /&gt;
# Restore&lt;br /&gt;
vagrant snapshot restore default    13_Dec30_external-eks_stopped&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Lookup path precedence for Vagrant project file =&lt;br /&gt;
When you run any vagrant command, Vagrant climbs your directory tree looking for a Vagrantfile, starting in the current directory. Example:&lt;br /&gt;
 /home/peter/projects/la/Vagrant&lt;br /&gt;
 /home/peter/projects/Vagrant&lt;br /&gt;
 /home/peter/Vagrant&lt;br /&gt;
 /home/Vagrant&lt;br /&gt;
 /Vagrant&lt;br /&gt;
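The climb can be sketched as a small shell loop; the function name and paths are illustrative, not Vagrant's actual code.&lt;br /&gt;

```shell
# Walk up from the current directory until a Vagrantfile is found.
find_vagrantfile() {
  dir=$(pwd)
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/Vagrantfile" ]; then
      echo "$dir/Vagrantfile"
      return 0
    fi
    dir=$(dirname "$dir")
  done
}

# demo: a Vagrantfile two levels up is still found
mkdir -p /tmp/proj/la/sub
touch /tmp/proj/Vagrantfile
cd /tmp/proj/la/sub
find_vagrantfile   # prints /tmp/proj/Vagrantfile
```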
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Networking ==&lt;br /&gt;
A '''private''' network is a network that is not accessible from the Internet. The networking stanza is part of the main &amp;lt;tt&amp;gt;|config|&amp;lt;/tt&amp;gt; loop.&lt;br /&gt;
&lt;br /&gt;
DHCP IP address assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;private_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
 auto_config: false     #optional to disable auto-configure&lt;br /&gt;
&lt;br /&gt;
'''Public network'''&lt;br /&gt;
These networks are accessible from outside the host machine, including from the Internet, and are usually '''bridged networks'''.&lt;br /&gt;
&lt;br /&gt;
Examples of dhcp and static IP assignment&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, ip: &amp;quot;192.168.80.5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Default interface. The name needs to match an interface name on your system, otherwise Vagrant will prompt you to choose from the available interfaces during the ''vagrant up'' process.&lt;br /&gt;
 config.vm.network &amp;quot;public_network&amp;quot;, bridge: 'eth1'&lt;br /&gt;
&lt;br /&gt;
== Port forwarding ==&lt;br /&gt;
Vagrant can forward any host (hypervisor) TCP port to the guest VM by specifying it in the ~/git/vagrant/Vagrantfile&lt;br /&gt;
 config.vm.network :forwarded_port, guest: 80, host: 4567&lt;br /&gt;
Reload virtual machine &amp;lt;code&amp;gt;vagrant reload&amp;lt;/code&amp;gt; and run from hypervisor web browser http://127.0.0.1:4567 to test it.&lt;br /&gt;
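Several forwards can be declared together; the auto_correct: true option lets Vagrant pick another host port on collision. A minimal sketch, with arbitrary example ports:&lt;br /&gt;

```ruby
Vagrant.configure("2") do |config|
  # forward guest HTTP and HTTPS to arbitrary free host ports
  config.vm.network :forwarded_port, guest: 80,  host: 4567
  config.vm.network :forwarded_port, guest: 443, host: 4568, auto_correct: true
end
```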
&lt;br /&gt;
== Sync folders ==&lt;br /&gt;
Vagrant v2 renamed ''Shared folders'' to '''Sync folders'''. This feature mounts a host-OS directory into the guest OS, allowing a workflow of editing files with an IDE installed on the host machine while accessing them within the guest OS. The files sync in both directions (it is a mount on the guest OS). Remember, taking &amp;lt;code&amp;gt;vagrant snapshot save ubuntu-snap1&amp;lt;/code&amp;gt; '''will NOT save''' the '''Sync folder''' content, as it is just a mounted directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When configuring, the 1st argument is a path existing on the '''host machine'''. If relative, it is relative to the root project folder (where the Vagrantfile exists). The 2nd argument is the full path to the mounted dir on the guest OS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Enabling Sync folders and Symlinks&lt;br /&gt;
This can be done at any time, it's applied during &amp;lt;code&amp;gt;vagrant up | reload&amp;lt;/code&amp;gt;. In general symlinks are disabled by VirtualBox as insecure.&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
#                      path on the host       mount on the guestOS&lt;br /&gt;
#                               \             /&lt;br /&gt;
     config.vm.synced_folder &amp;quot;git-host/&amp;quot;, &amp;quot;/git&amp;quot;, disabled: false&lt;br /&gt;
     config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.name   = File.basename(Dir.pwd) + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
       ...&lt;br /&gt;
       vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//git&amp;quot;,     &amp;quot;1&amp;quot;]&lt;br /&gt;
#      vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate//vagrant&amp;quot;, &amp;quot;1&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
       # symlinks should be active in the root of the vm by default&lt;br /&gt;
#      vb.customize [&amp;quot;setextradata&amp;quot;, :id, &amp;quot;VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root&amp;quot;,   &amp;quot;1&amp;quot;]&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disabling&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;, disabled: true&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Modifying the Owner/Group&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.synced_folder &amp;quot;../data/&amp;quot;, &amp;quot;/vagrant-data&amp;quot;,&lt;br /&gt;
    owner: &amp;quot;root&amp;quot;, group: &amp;quot;root&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
References&lt;br /&gt;
* [https://www.vagrantup.com/docs/synced-folders/basic_usage.html#id synced-folders] Hashicorp docs&lt;br /&gt;
&lt;br /&gt;
= Vagrant providers =&lt;br /&gt;
Vagrant can work with a wide variety of backend providers, such as VMware, AWS, and more, without changing the Vagrantfile. It's enough to specify the provider and Vagrant will do the rest:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider=vmware_fusion&lt;br /&gt;
vagrant up --provider=aws&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Hyper-V ==&lt;br /&gt;
*Enable Hyper-V&lt;br /&gt;
*if you are running Docker for Windows, make sure it is disabled, as only one application can bind to the internal NAT vswitch&lt;br /&gt;
*WSL and Windows Vagrant versions must match&lt;br /&gt;
*the terminal you run WSL or PowerShell in must run with elevated privileges&lt;br /&gt;
*when running in WSL, make sure you have &amp;lt;code&amp;gt;export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=&amp;quot;1&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
*you are in native Bash.exe, not e.g. a ConEmu terminal, as the latter was proven not to work at the time. You can change the default provider with &amp;lt;code&amp;gt;export VAGRANT_DEFAULT_PROVIDER=hyperv&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Optional: Set the user-level environment variable in PowerShell: &lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
[Environment]::SetEnvironmentVariable(&amp;quot;VAGRANT_DEFAULT_PROVIDER&amp;quot;, &amp;quot;hyperv&amp;quot;, &amp;quot;User&amp;quot;) &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;Workarounds&lt;br /&gt;
Copy the insecure private key from &amp;lt;code&amp;gt;https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant&amp;lt;/code&amp;gt; to WSL &amp;lt;code&amp;gt;~/.vagrant_key/private_key&amp;lt;/code&amp;gt;, because the Microsoft filesystem does not support Unix-style file permissions until WSL2 is released.&lt;br /&gt;
&amp;lt;source&amp;gt; &lt;br /&gt;
$ wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant -O ~/.vagrant_key/private_key&lt;br /&gt;
# then set in Vagrantfile&lt;br /&gt;
config.ssh.private_key_path = &amp;quot;~/.vagrant_key/private_key&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When running on Hyper-V you need to choose a vswitch to use. Vagrant will prompt you; select &amp;quot;Default Switch&amp;quot;, which is the equivalent of a NAT network. You need to create your own vswitch if you want access to the Internet.&lt;br /&gt;
&lt;br /&gt;
Go to Hyper-V Manager, open Virtual Switch Manager..., create an External switch, name: vagrant-external, press OK. Then run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
vagrant up --provider hyperv&lt;br /&gt;
&lt;br /&gt;
    default: Please choose a switch to attach to your Hyper-V instance.&lt;br /&gt;
    default: If none of these are appropriate, please open the Hyper-V manager&lt;br /&gt;
    default: to create a new virtual switch.&lt;br /&gt;
    default:&lt;br /&gt;
    default: 1) DockerNAT&lt;br /&gt;
    default: 2) Default Switch&lt;br /&gt;
    default: 3) vagrant-external&lt;br /&gt;
    default:&lt;br /&gt;
    default: What switch would you like to use?3    #&amp;lt;-- select 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Read more https://www.vagrantup.com/docs/hyperv/limitations.html&lt;br /&gt;
&lt;br /&gt;
Run Vagrant file&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up --provider=hyperv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
*[https://gist.github.com/savishy/8ed40cd8692e295d64f45e299c2b83c9 Create vSwitch in Hyper-V to run Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Copying-Files-into-a-Hyper-V-VM-with-Vagrant/ba-p/382376 Copying Files into a Hyper-V VM with Vagrant]&lt;br /&gt;
*[https://techcommunity.microsoft.com/t5/Virtualization/Vagrant-and-Hyper-V-Tips-and-Tricks/ba-p/382373 Vagrant and Hyper-V -- Tips and Tricks] techcommunity.microsoft.com&lt;br /&gt;
&lt;br /&gt;
= Provisioners =&lt;br /&gt;
==Shell provisioner==&lt;br /&gt;
Vagrant can run a provisioning script from a shared location, or inline shell provisioning commands from the Vagrantfile.&lt;br /&gt;
&lt;br /&gt;
Create provisioning script&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/bootstrap.sh     &lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
export http_proxy=&amp;lt;nowiki&amp;gt;http://username:password@proxyserver.local:8080&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
export https_proxy=$http_proxy &lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get install -y apache2&lt;br /&gt;
if ! [ -L /var/www ]; then &lt;br /&gt;
  rm -rf /var/www&lt;br /&gt;
  ln -sf /vagrant /var/www  # sets Vagrant shared dir to Apache DocumentRoot&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure Vagrant to run this shell script above when setting up our machine&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vi ~/git/vagrant/Vagrantfile   &lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
   config.vm.box = &amp;quot;ubuntu/14.04-i386&amp;quot;&lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, path: &amp;quot;bootstrap.sh&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another example of using the shell provisioner, separating the script out&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$script = &amp;lt;&amp;lt;SCRIPT&lt;br /&gt;
echo    &amp;quot; touch /home/vagrant/test_\\`date +%s\\`.txt &amp;quot; &amp;gt; /home/vagrant/newfile&lt;br /&gt;
chmod +x        /home/vagrant/newfile&lt;br /&gt;
echo &amp;quot;* * * * * /home/vagrant/newfile&amp;quot; &amp;gt; mycron&lt;br /&gt;
crontab mycron&lt;br /&gt;
SCRIPT&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&lt;br /&gt;
  config.vm.provision &amp;quot;shell&amp;quot;, inline: $script , privileged: false&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bring the environment up  &lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant up                   #runs provisioning only once&lt;br /&gt;
vagrant reload --provision   #reloads VM skipping import and runs provisioning&lt;br /&gt;
vagrant ssh                  #ssh to VM&lt;br /&gt;
wget -qO- 127.0.0.1          #test Apache is running on VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Provisioners - shell, ansible, ansible_local and more&lt;br /&gt;
&lt;br /&gt;
This section is about using Ansible with Vagrant:&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant host'''&lt;br /&gt;
 *&amp;lt;code&amp;gt;ansible_local&amp;lt;/code&amp;gt;, where Ansible is executed on the '''Vagrant guest'''&lt;br /&gt;
&lt;br /&gt;
==Ansible provisioner==&lt;br /&gt;
&lt;br /&gt;
Specify Ansible as a provisioner in Vagrant file&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 # Run Ansible from the Vagrant Host&lt;br /&gt;
 config.vm.provision &amp;quot;ansible&amp;quot; do |ansible|&lt;br /&gt;
    ansible.playbook = &amp;quot;playbook.yml&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Chef_solo provisioner ==&lt;br /&gt;
Create a recipe; the following directory structure is required, e.g. recipe name: vagrant_la&lt;br /&gt;
 ├── cookbooks&lt;br /&gt;
 │   └── vagrant_la&lt;br /&gt;
 │       └── recipes&lt;br /&gt;
 │           └── default.rb&lt;br /&gt;
 Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Recipe&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
vi cookbooks/vagrant_la/recipes/default.rb&lt;br /&gt;
execute &amp;quot;apt-get update&amp;quot;&lt;br /&gt;
package &amp;quot;apache2&amp;quot;&lt;br /&gt;
execute &amp;quot;rm -rf /var/www&amp;quot;&lt;br /&gt;
link &amp;quot;/var/www&amp;quot; do&lt;br /&gt;
        to &amp;quot;/vagrant&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the Vagrantfile add the following&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;chef_solo&amp;quot; do |chef|&lt;br /&gt;
        chef.add_recipe &amp;quot;vagrant_la&amp;quot;&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Run &amp;lt;code&amp;gt;vagrant up&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Puppet manifest ==&lt;br /&gt;
Create Vagrant provisioning stanza&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
 config.vm.define &amp;quot;web&amp;quot; do |web|&lt;br /&gt;
         web.vm.hostname = &amp;quot;web&amp;quot;&lt;br /&gt;
         web.vm.box = &amp;quot;apache&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;private_network&amp;quot;, type: &amp;quot;dhcp&amp;quot;&lt;br /&gt;
         web.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
         web.vm.provision &amp;quot;puppet&amp;quot; do |puppet|&lt;br /&gt;
                 puppet.manifests_path = &amp;quot;manifests&amp;quot;&lt;br /&gt;
                 puppet.manifest_file = &amp;quot;default.pp&amp;quot;&lt;br /&gt;
         end&lt;br /&gt;
 end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a required folder structure for puppet manifests&lt;br /&gt;
 ├── manifests&lt;br /&gt;
 │   └── default.pp&lt;br /&gt;
 └── Vagrantfile&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Puppet manifest file&lt;br /&gt;
 vi manifests/default.pp&lt;br /&gt;
 exec { &amp;quot;apt-get update&amp;quot;:&lt;br /&gt;
        command =&amp;gt; &amp;quot;/usr/bin/apt-get update&amp;quot;,&lt;br /&gt;
 }&lt;br /&gt;
 package { &amp;quot;apache2&amp;quot;:&lt;br /&gt;
        require =&amp;gt; Exec[&amp;quot;apt-get update&amp;quot;],&lt;br /&gt;
 }&lt;br /&gt;
 file { &amp;quot;/var/www&amp;quot;:&lt;br /&gt;
        ensure =&amp;gt; link,&lt;br /&gt;
        target =&amp;gt; &amp;quot;/vagrant&amp;quot;,&lt;br /&gt;
        force =&amp;gt; true,&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
= Box images advanced =&lt;br /&gt;
 vagrant box list   #list all downloaded boxes&lt;br /&gt;
&lt;br /&gt;
Default path of box images; it can be changed via the environment variable &amp;lt;tt&amp;gt;VAGRANT_HOME&amp;lt;/tt&amp;gt;&lt;br /&gt;
 C:\Users\%username%\.vagrant.d\boxes  #Windows&lt;br /&gt;
 ~/.vagrant.d/boxes                    #Linux&lt;br /&gt;
&lt;br /&gt;
Change default path via environment variable&lt;br /&gt;
 export VAGRANT_HOME=my/new/path/goes/here/&lt;br /&gt;
&lt;br /&gt;
==Box format==&lt;br /&gt;
When you un-tar the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file it contains 4 files:&lt;br /&gt;
 |--Vagrantfile&lt;br /&gt;
 |--box-disk1.vmdk  #compressed virtual disk&lt;br /&gt;
 |--box.ovf         #description of virtual hardware&lt;br /&gt;
 |--metadata.json&lt;br /&gt;
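Since a &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file is just a gzipped tarball, the layout above can be reproduced and inspected with tar alone (the files here are empty placeholders, paths are illustrative):&lt;br /&gt;

```shell
# Build a stand-in .box with the same layout, then list its contents.
mkdir -p /tmp/demo-box
cd /tmp/demo-box
touch Vagrantfile box-disk1.vmdk box.ovf metadata.json
tar -czf /tmp/package.box .
tar -tzf /tmp/package.box
```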
&lt;br /&gt;
== [https://www.vagrantup.com/docs/virtualbox/boxes.html Create box] from current project (package a box) ==&lt;br /&gt;
This allows you to create a reusable box that contains all the changes to the software we made; only VirtualBox and Hyper-V are supported.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[https://www.vagrantup.com/docs/cli/package.html Command basics]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant package [options] [name|id]&lt;br /&gt;
# --base NAME - instead of packaging a VirtualBox machine that Vagrant manages, &lt;br /&gt;
#               this will package a VirtualBox machine that VirtualBox manages&lt;br /&gt;
# --output NAME - default is package.box&lt;br /&gt;
# --include x,y,z -  additional files will be packaged with the box&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Package&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
$ vagrant version # -&amp;gt; Installed Version: 2.2.9&lt;br /&gt;
&lt;br /&gt;
# Optional '--vagrantfile NAME' can be included, that automatically restores '--include' files &lt;br /&gt;
# learn more at https://www.vagrantup.com/docs/vagrantfile#load-order&lt;br /&gt;
$ time vagrant package --output u18cli-1.box --include data,git-host,git-host3rd,sync.sh,cleanup.sh&lt;br /&gt;
==&amp;gt; default: Clearing any previously set forwarded ports...&lt;br /&gt;
==&amp;gt; default: Exporting VM...&lt;br /&gt;
==&amp;gt; default: Compressing package to: /home/piotr/vms-vagrant/u18cli-1/2020-05-23-u18cli-1.box&lt;br /&gt;
==&amp;gt; default: Packaging additional file: data               # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host           # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: git-host3rd        # &amp;lt;- dir&lt;br /&gt;
==&amp;gt; default: Packaging additional file: cleanup.sh         # &amp;lt;- file&lt;br /&gt;
real	15m27.324s user	8m23.550s sys	0m16.827s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Re-distribute the &amp;lt;tt&amp;gt;.box&amp;lt;/tt&amp;gt; file, then restore it.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# Add the packaged box to local system box repository&lt;br /&gt;
#                        _____box-name________ __box-file_____&lt;br /&gt;
$ vagrant box add --name box-packages/u18cli-1 u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Box file was not detected as metadata. Adding it directly...&lt;br /&gt;
==&amp;gt; box: Adding box 'u18cli-1-v1.box' (v0) for provider: &lt;br /&gt;
    box: Unpacking necessary files from: file:///home/piotr/vms-vagrant/test-box-restore/u18cli-1-v1.box&lt;br /&gt;
==&amp;gt; box: Successfully added box 'box-packages/u18cli-1' (v0) for 'virtualbox'!&lt;br /&gt;
&lt;br /&gt;
# List boxes&lt;br /&gt;
$ vagrant box list&lt;br /&gt;
box-packages/u18cli-1 (virtualbox, 0)&lt;br /&gt;
&lt;br /&gt;
$ ls -l ~/.vagrant.d/boxes&lt;br /&gt;
total 16&lt;br /&gt;
drwxrwxr-x 3 piotr piotr 4096 Jul 16 17:44 box-packages-VAGRANTSLASH-u18cli-1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restore: create or re-use a Vagrantfile that references the box you added to your local box repository&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
# vi Vagrantfile&lt;br /&gt;
config.vm.box = &amp;quot;box-packages/u18cli-1&amp;quot; # reference the box name, not the .box file&lt;br /&gt;
&lt;br /&gt;
vagrant up&lt;br /&gt;
# restore '--include' files by copying them from&lt;br /&gt;
# 'ls -l ~/.vagrant.d/boxes/box-packages-VAGRANTSLASH-u18cli-1/0/virtualbox/include/*'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [https://tuhrig.de/resizing-vagrant-box-disk-space/ Resizing Vagrant box disk] =&lt;br /&gt;
* [https://www.vagrantup.com/docs/disks/usage Resizing primary disk] native way&lt;br /&gt;
&lt;br /&gt;
= Enable Vagrant to use proxy server for VMs =&lt;br /&gt;
Install the &amp;lt;code&amp;gt;vagrant-proxyconf&amp;lt;/code&amp;gt; plugin, or run &amp;lt;code&amp;gt;vagrant plugin list&amp;lt;/code&amp;gt; to verify it is already installed&lt;br /&gt;
 vagrant plugin install vagrant-proxyconf&lt;br /&gt;
&lt;br /&gt;
Configure your Vagrantfile; in this example the host 10.0.0.1:3128 runs a CNTLM proxy&lt;br /&gt;
 Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
     &amp;lt;nowiki&amp;gt;config.proxy.http = &amp;quot;http://10.0.0.1:3128&amp;quot;&lt;br /&gt;
     config.proxy.https = &amp;quot;http://10.0.0.1:3128&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
     config.proxy.no_proxy = &amp;quot;localhost,127.0.0.1&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
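To the best of my knowledge, vagrant-proxyconf can also pick up its settings from host environment variables, avoiding Vagrantfile changes; verify against the plugin documentation:

```shell
# vagrant-proxyconf honours these host-side environment variables
export VAGRANT_HTTP_PROXY=http://10.0.0.1:3128
export VAGRANT_HTTPS_PROXY=http://10.0.0.1:3128
export VAGRANT_NO_PROXY=localhost,127.0.0.1
```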
&lt;br /&gt;
= Virtualbox Guest Additions =&lt;br /&gt;
== Sync using vagrant-vbguest plugin ==&lt;br /&gt;
Install plugin&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant gem install vagrant-vbguest    #for vagrant &amp;lt; 1.1.5&lt;br /&gt;
vagrant plugin install vagrant-vbguest #for vagrant 1.1.5+&lt;br /&gt;
&lt;br /&gt;
#Verify current version, running on a host(hypervisor)&lt;br /&gt;
vagrant vbguest --status&lt;br /&gt;
&lt;br /&gt;
#Add to your Vagrant file (Vagrant 1.1.5+)&lt;br /&gt;
if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
  config.vbguest.auto_update = true&lt;br /&gt;
  config.vbguest.no_remote   = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Then install the correct version matching your host installation&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant vbguest --do install &lt;br /&gt;
&lt;br /&gt;
#Full command options&lt;br /&gt;
vagrant vbguest [vm-name] [--do start|rebuild|install] [--status] [-f|--force] \&lt;br /&gt;
                 [-b|--auto-reboot] [-R|--no-remote] [--iso VBoxGuestAdditions.iso] [--no-cleanup]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will find more at the [https://github.com/dotless-de/vagrant-vbguest vagrant-vbguest] plugin project.&lt;br /&gt;
&lt;br /&gt;
== Manual upgrade ==&lt;br /&gt;
Find out which version you are running; execute on a guest VM&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vagrant@ubuntu:~$ modinfo vboxguest | grep ^version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@ubuntu:~$ lsmod | grep -io vboxguest | xargs modinfo | grep -iw version&lt;br /&gt;
version:        6.0.10 r132072&lt;br /&gt;
&lt;br /&gt;
vagrant@u18cli-3:~$ sudo /usr/sbin/VBoxService --version&lt;br /&gt;
6.0.10r132072&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Download the Guest Additions ISO; you can browse available versions [http://download.virtualbox.org/virtualbox here]&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
wget http://download.virtualbox.org/virtualbox/5.0.32/VBoxGuestAdditions_5.0.32.iso&lt;br /&gt;
# mount the ISO or extract its contents, then run the installer inside the VM&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[https://github.com/chilcano/box-vagrant-wso2-dev-srv/blob/master/_downloads/vagrant-vboxguestadditions-workaroud.md Upgrade Vbox extension additions within Vagrant box]&lt;br /&gt;
&lt;br /&gt;
= List all Virtualbox SSH redirections =&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 2  &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms | cut -d ' ' -f 1 | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do echo $vm; vboxmanage showvminfo &amp;quot;$vm&amp;quot; | grep ssh; done&lt;br /&gt;
vboxmanage list vms \&lt;br /&gt;
  | cut -d ' ' -f 1 \&lt;br /&gt;
  | sed 's/&amp;quot;//g' &amp;gt; /tmp/vms.out \&lt;br /&gt;
  &amp;amp;&amp;amp; for vm in $(cat /tmp/vms.out); do vboxmanage showvminfo &amp;quot;$vm&amp;quot; \&lt;br /&gt;
                                      | grep ssh \&lt;br /&gt;
                                      | tr --delete '\n'; echo &amp;quot; $vm&amp;quot;; done&lt;br /&gt;
&lt;br /&gt;
sed 's/&amp;quot;//g'      #removes double quotes from whole string&lt;br /&gt;
tr --delete '\n'  #deletes EOL, so the next command output is appended to the previous line&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
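The quote-stripping step can be tried in isolation with sample input shaped like `vboxmanage list vms` output (the VM names here are made up):

```shell
# sample lines shaped like 'vboxmanage list vms' output: "name" {uuid}
printf '"web1" {uuid-1}\n"web2" {uuid-2}\n' \
  | cut -d ' ' -f 1 \
  | sed 's/"//g'    # prints web1 and web2, double quotes removed
```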
&lt;br /&gt;
= Vagrant file =&lt;br /&gt;
;Ruby gotchas&lt;br /&gt;
The Vagrant configuration file is written in Ruby, so you need to remember:&lt;br /&gt;
*don't use dashes in object names, '''don't''': &amp;lt;tt&amp;gt;jenkins-minion_config.vm.box = &amp;quot;ubuntu/xenial64&amp;quot;&amp;lt;/tt&amp;gt;; use underscores instead&lt;br /&gt;
*underscores in variable names are valid Ruby, '''do''': &amp;lt;tt&amp;gt;(1..2).each do |minion_number|&amp;lt;/tt&amp;gt;&lt;br /&gt;
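The naming rule can be checked with plain Ruby (a sketch; the names are illustrative):

```ruby
# underscores in block parameter names are valid Ruby; dashes would be a syntax error
names = (1..2).map { |minion_number| "web#{minion_number}" }
puts names.inspect   # -> ["web1", "web2"]
```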
&lt;br /&gt;
&lt;br /&gt;
== HAProxy cluster, multi-node Vagrant config  ==&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
git clone https://github.com/jweissig/episode-45&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This creates an ''Ansible'' mgmt server, a load balancer and web nodes [1..2]. HAProxy will be configured via Ansible code.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
 # create mgmt node&lt;br /&gt;
 config.vm.define :mgmt do |mgmt_config|&lt;br /&gt;
     mgmt_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     mgmt_config.vm.hostname = &amp;quot;mgmt&amp;quot;&lt;br /&gt;
     mgmt_config.vm.network :private_network, ip: &amp;quot;10.0.15.10&amp;quot;&lt;br /&gt;
     mgmt_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
     mgmt_config.vm.provision :shell, path: &amp;quot;bootstrap-mgmt.sh&amp;quot;&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create load balancer&lt;br /&gt;
 config.vm.define :lb do |lb_config|&lt;br /&gt;
     lb_config.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
     lb_config.vm.hostname = &amp;quot;lb&amp;quot;&lt;br /&gt;
     lb_config.vm.network :private_network, ip: &amp;quot;10.0.15.11&amp;quot;&lt;br /&gt;
     lb_config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080&lt;br /&gt;
     lb_config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
       vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
 # create some web servers&lt;br /&gt;
 # https://docs.vagrantup.com/v2/vagrantfile/tips.html&lt;br /&gt;
  (1..2).each do |i|&lt;br /&gt;
    config.vm.define &amp;quot;web#{i}&amp;quot; do |node|&lt;br /&gt;
        node.vm.box = &amp;quot;ubuntu/trusty64&amp;quot;&lt;br /&gt;
        node.vm.hostname = &amp;quot;web#{i}&amp;quot;&lt;br /&gt;
        node.vm.network :private_network, ip: &amp;quot;10.0.15.2#{i}&amp;quot;&lt;br /&gt;
        node.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: &amp;quot;808#{i}&amp;quot;&lt;br /&gt;
        node.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
          vb.memory = &amp;quot;256&amp;quot;&lt;br /&gt;
        end&lt;br /&gt;
    end&lt;br /&gt;
  end&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Bootstrap script &amp;lt;tt&amp;gt;bootstrap-mgmt.sh&amp;lt;/tt&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
#!/usr/bin/env bash &lt;br /&gt;
# install ansible (http://docs.ansible.com/intro_installation.html)&lt;br /&gt;
apt-get -y install software-properties-common&lt;br /&gt;
apt-add-repository -y ppa:ansible/ansible&lt;br /&gt;
apt-get update&lt;br /&gt;
apt-get -y install ansible&lt;br /&gt;
&lt;br /&gt;
# copy examples into /home/vagrant (from inside the mgmt node)&lt;br /&gt;
cp -a /vagrant/examples/* /home/vagrant&lt;br /&gt;
chown -R vagrant:vagrant /home/vagrant&lt;br /&gt;
&lt;br /&gt;
# configure hosts file for our internal network defined by Vagrantfile&lt;br /&gt;
cat &amp;gt;&amp;gt; /etc/hosts &amp;lt;&amp;lt;EOL&lt;br /&gt;
# vagrant environment nodes&lt;br /&gt;
10.0.15.10  mgmt&lt;br /&gt;
10.0.15.11  lb&lt;br /&gt;
10.0.15.21  web1&lt;br /&gt;
10.0.15.22  web2&lt;br /&gt;
10.0.15.23  web3&lt;br /&gt;
10.0.15.24  web4&lt;br /&gt;
10.0.15.25  web5&lt;br /&gt;
10.0.15.26  web6&lt;br /&gt;
10.0.15.27  web7&lt;br /&gt;
10.0.15.28  web8&lt;br /&gt;
10.0.15.29  web9&lt;br /&gt;
EOL&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
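The heredoc append used in the script can be exercised safely against a scratch file instead of /etc/hosts (the `/tmp` path is a placeholder):

```shell
HOSTS=/tmp/hosts-demo              # stand-in for /etc/hosts
echo '127.0.0.1 localhost' > "$HOSTS"
cat >> "$HOSTS" <<EOL
10.0.15.10  mgmt
10.0.15.11  lb
EOL
grep -c '^10\.0\.15\.' "$HOSTS"    # -> 2 appended entries
```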
&lt;br /&gt;
&lt;br /&gt;
Gitbash path -  &amp;lt;code&amp;gt;/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Bring up the environment, then run the Ansible provisioning from the mgmt node&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant status&lt;br /&gt;
vagrant up&lt;br /&gt;
vagrant ssh mgmt&lt;br /&gt;
ansible all --list-hosts&lt;br /&gt;
ssh-keyscan web1 web2 lb &amp;gt; ~/.ssh/known_hosts&lt;br /&gt;
ansible-playbook ssh-addkey.yml -u vagrant --ask-pass&lt;br /&gt;
ansible-playbook site.yml&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Once it is set up, you can navigate on your laptop to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
http://localhost:8080/              #Website test&lt;br /&gt;
http://localhost:8080/haproxy?stats #HAProxy stats&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use this to verify which backend server responded&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -I http://localhost:8080&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:X-Backend-Server.png|none|left|Curl -i X-Backend-Server]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Generate web traffic&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant ssh lb&lt;br /&gt;
sudo apt-get install apache2-utils&lt;br /&gt;
ansible localhost -m apt -a &amp;quot;pkg=apache2-utils state=present&amp;quot; --become&lt;br /&gt;
ab -n 1000 -c 1 http://10.0.2.15:80/ # 1000 requests, concurrency 1; 'hey' is a modern alternative to 'ab'&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Vagrant DNS =&lt;br /&gt;
== Multi-machine mDNS discovery ==&lt;br /&gt;
A multi-machine setup requires 3 ingredients:&lt;br /&gt;
* each machine has a different hostname&lt;br /&gt;
* a way of getting the IP address for a hostname (eg. mDNS)&lt;br /&gt;
* the VMs are connected through a private network&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In a multi-machine configuration we need a way of getting the IP address for a hostname; we use &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; for this. By default &amp;lt;code&amp;gt;mDNS&amp;lt;/code&amp;gt; only resolves host names ending with the &amp;lt;code&amp;gt;.local&amp;lt;/code&amp;gt; top-level domain (TLD). This can cause problems if that domain includes hosts which do not implement mDNS but which can be found via a conventional unicast DNS server; resolving such conflicts requires network-configuration changes that violate the zero-configuration goal. Install &amp;lt;code&amp;gt;avahi&amp;lt;/code&amp;gt; on all machines to facilitate service discovery on the local network via the &amp;lt;code&amp;gt;mDNS/DNS-SD&amp;lt;/code&amp;gt; protocol suite.&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SCRIPT&lt;br /&gt;
  apt-get install -y avahi-daemon libnss-mdns&lt;br /&gt;
SCRIPT&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
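Putting the three ingredients together, a minimal multi-machine Vagrantfile might look like this (a sketch; the box name and IP range are assumptions):

```ruby
Vagrant.configure("2") do |config|
  (1..2).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.box = "ubuntu/bionic64"
      node.vm.hostname = "node#{i}"                          # ingredient 1: unique hostname
      node.vm.network :private_network, ip: "10.0.15.2#{i}"  # ingredient 3: private network
      # ingredient 2: mDNS resolution of <hostname>.local via avahi
      node.vm.provision "shell", inline: "apt-get install -y avahi-daemon libnss-mdns"
    end
  end
end
```

After `vagrant up`, node1 should be able to `ping node2.local`.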
&lt;br /&gt;
&lt;br /&gt;
;References&lt;br /&gt;
*[https://github.com/lathiat/nss-mdns nss-mdns] system which allows hostname lookup of *.local hostnames via mDNS in all system programs using nsswitch&lt;br /&gt;
*[https://www.avahi.org/ avahi.org]&lt;br /&gt;
&lt;br /&gt;
== Set host system DNS server resolver ==&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
    vb.customize [&amp;quot;modifyvm&amp;quot;, :id, &amp;quot;--natdnshostresolver1&amp;quot;, &amp;quot;on&amp;quot;]&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Ubuntu with GUI =&lt;br /&gt;
This section describes how to set up a Vagrant VirtualBox VM with a GUI, installing an X server with xfce4 as the desktop environment.&lt;br /&gt;
== Locales ==&lt;br /&gt;
This does not work reliably&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
     locale-gen en_GB.utf8 #en_GB.UTF-8&lt;br /&gt;
     update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive locales&lt;br /&gt;
     dpkg-reconfigure --frontend=noninteractive keyboard-configuration&lt;br /&gt;
     localedef -i en_GB -c -f UTF-8 en_GB.utf8&lt;br /&gt;
     sudo update-locale LANG=en_GB.UTF-8&lt;br /&gt;
     locale-gen --purge &amp;quot;en_GB.UTF-8&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Troubleshooting&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
locale -a #shows which locales are available on your system&lt;br /&gt;
sudo less /usr/share/i18n/SUPPORTED&lt;br /&gt;
cat /etc/default/locale&lt;br /&gt;
&lt;br /&gt;
#Set system wide locales (does not work for users)&lt;br /&gt;
localectl set-locale LANG=en_GB.UTF-8 LANGUAGE=en_GB:en&lt;br /&gt;
localectl set-keymap gb&lt;br /&gt;
localectl set-x11-keymap gb&lt;br /&gt;
&lt;br /&gt;
#Quick kb change&lt;br /&gt;
apt-get install -yq x11-xkb-utils; setxkbmap gb&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gnome3 ==&lt;br /&gt;
This setup installs the Ubuntu desktop and may require a restart to apply changes such as the taskbar with shortcuts.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;ruby&amp;quot;&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box      = &amp;quot;ubuntu/bionic64&amp;quot; #bento/ubuntu-18.04, ubuntu/xenial64&lt;br /&gt;
&lt;br /&gt;
  machineName = File.basename(Dir.pwd) #name as a current working dir&lt;br /&gt;
# machineName = 'u18gui-1'&lt;br /&gt;
  config.vm.hostname = machineName&lt;br /&gt;
&lt;br /&gt;
  # Manually check for updates `vagrant box outdated`&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
&lt;br /&gt;
  # Vbguest plugin&lt;br /&gt;
  if Vagrant.has_plugin?(&amp;quot;vagrant-vbguest&amp;quot;)&lt;br /&gt;
    config.vbguest.auto_update = false&lt;br /&gt;
    config.vbguest.no_remote   = true&lt;br /&gt;
  end&lt;br /&gt;
&lt;br /&gt;
  # config.vm.network &amp;quot;forwarded_port&amp;quot;, guest: 80, host: 8080, host_ip: &amp;quot;127.0.0.1&amp;quot;&lt;br /&gt;
  # Public network, which generally matched to bridged network.&lt;br /&gt;
  # config.vm.network &amp;quot;public_network&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  # config.vm.synced_folder &amp;quot;hostDir&amp;quot;, &amp;quot;/InVagrantMount/path&amp;quot; &lt;br /&gt;
  # config.vm.synced_folder &amp;quot;../data&amp;quot;, &amp;quot;/vagrant_data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui    = true&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;&lt;br /&gt;
     vb.name   = machineName + &amp;quot;_vagrant&amp;quot;&lt;br /&gt;
   end&lt;br /&gt;
  &lt;br /&gt;
   config.vm.provision &amp;quot;shell&amp;quot;, inline: &amp;lt;&amp;lt;-SHELL&lt;br /&gt;
     export DEBIAN_FRONTEND=noninteractive&lt;br /&gt;
     setxkbmap gb&lt;br /&gt;
     apt-get update &amp;amp;&amp;amp; apt-get upgrade -yq&lt;br /&gt;
     apt-get install -yq ubuntu-desktop --no-install-recommends&lt;br /&gt;
     apt-get install -yq terminator tmux&lt;br /&gt;
     #only U16 xenial to fix Unity&lt;br /&gt;
     #apt-get install -yq unity-lens-files unity-lens-applications indicator-session --no-install-recommends &lt;br /&gt;
   SHELL&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Running up&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vagrant plugin install vagrant-vbguest&lt;br /&gt;
vagrant up &amp;amp;&amp;amp; vagrant vbguest --do install &amp;amp;&amp;amp; vagrant reload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Xfce ==&lt;br /&gt;
Get a basic Ubuntu image working, boot it up and vagrant ssh.&lt;br /&gt;
Next, enable the VirtualBox display, which is off by default. Halt the VM and uncomment these lines in Vagrantfile:&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
config.vm.provider :virtualbox do |vb|&lt;br /&gt;
  vb.gui = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Boot the VM and observe the new display window. Now you just need to install and start xfce4. Use vagrant ssh and:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo apt-get install -y xfce4 virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11&lt;br /&gt;
#guest additions are already installed on most of the Vagrant boxes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Don't start the GUI as root; you want to stay the vagrant user. To do this you need to permit anyone to start the GUI:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo vim /etc/X11/Xwrapper.config # set allowed_users=anybody&lt;br /&gt;
sudo startxfce4&amp;amp;&lt;br /&gt;
sudo VBoxClient-all #optional&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should land in an xfce4 session.&lt;br /&gt;
&lt;br /&gt;
(Optional) If the VBoxClient-all script isn't installed or anything is missing, you can run the equivalent commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo VBoxClient --clipboard&lt;br /&gt;
sudo VBoxClient --draganddrop&lt;br /&gt;
sudo VBoxClient --display&lt;br /&gt;
sudo VBoxClient --checkhostversion&lt;br /&gt;
sudo VBoxClient --seamless&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://stackoverflow.com/questions/18878117/using-vagrant-to-run-virtual-machines-with-desktop-environment Vagrant GUI vms] stackoverflow&lt;br /&gt;
&lt;br /&gt;
= Windows=&lt;br /&gt;
&amp;lt;source lang=ruby&amp;gt;&lt;br /&gt;
# -*- mode: ruby -*-&lt;br /&gt;
# vi: set ft=ruby :&lt;br /&gt;
&lt;br /&gt;
Vagrant.configure(&amp;quot;2&amp;quot;) do |config|&lt;br /&gt;
  config.vm.box = &amp;quot;gusztavvargadr/windows-server&amp;quot;&lt;br /&gt;
  config.vm.box_check_update = false&lt;br /&gt;
  config.vm.provider &amp;quot;virtualbox&amp;quot; do |vb|&lt;br /&gt;
     vb.gui = true       # Display the VirtualBox GUI when booting the machine&lt;br /&gt;
     vb.memory = &amp;quot;3072&amp;quot;  # Customize the amount of memory on the VM:&lt;br /&gt;
  end&lt;br /&gt;
  # Plugins&lt;br /&gt;
  config.vbguest.auto_update = false&lt;br /&gt;
  config.vbguest.no_remote = true&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shared location&lt;br /&gt;
* enable Network Sharing&lt;br /&gt;
* Vagrant path is mapped to &amp;lt;code&amp;gt;\\VBOXSVR\vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= WIP DevOps workstation =&lt;br /&gt;
This is to contain:&lt;br /&gt;
*bashrc with git branch in ps1&lt;br /&gt;
*bash autocomplete (...samename)&lt;br /&gt;
*bash colored symlinks&lt;br /&gt;
*bash_logout and .profile to eval ssh-agent and kill on exit&lt;br /&gt;
*git install&lt;br /&gt;
*ansible 1.9.4&lt;br /&gt;
*java Oracle&lt;br /&gt;
*clone tfenv and install terraform&lt;br /&gt;
*vim install&lt;br /&gt;
*vundle install&lt;br /&gt;
*[done] python 2.7 OOB in 16.04&lt;br /&gt;
*[done] python pip: awscli, boto, boto3, etc.&lt;br /&gt;
&lt;br /&gt;
Challenges:&lt;br /&gt;
*Ubuntu 16.04 official box does not come with a default ''vagrant'' user but instead comes with ''ubuntu'' user. This causes a number of incompatibilities.&lt;br /&gt;
**Read more at launchpad [https://bugs.launchpad.net/cloud-images/+bug/1569237 vagrant xenial box is not provided with vagrant/vagrant username and password ]&lt;br /&gt;
* Solutions&lt;br /&gt;
** on a W10 host both users, ''ubuntu'' &amp;amp; ''vagrant'', exist; only vagrant has the insecure public key installed OOB. I am copying the vagrant user's public key into the ubuntu user's authorized_keys&lt;br /&gt;
** on a U16.04 host the official image does not seem to come with a vagrant user, but the ubuntu user works OOB&lt;br /&gt;
** Read more at SO&lt;br /&gt;
***[https://stackoverflow.com/questions/41337802/vagrants-ubuntu-16-04-vagrantfile-default-password Vagrant's Ubuntu 16.04 vagrantfile default password]&lt;br /&gt;
***[https://stackoverflow.com/questions/30075461/how-do-i-add-my-own-public-key-to-vagrant-vm How do I add my own public key to Vagrant VM?]&lt;br /&gt;
*** [https://blog.ouseful.info/2015/07/27/running-a-shell-script-once-only-in-vagrant/ Running a Shell Script Once Only in vagrant]&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
*[https://www.vagrantup.com/docs/getting-started/ Vagrant Start up documentation]&lt;br /&gt;
*[https://atlas.hashicorp.com/boxes/search Vagrant Hashicorp VMs repository] Virtualbox&lt;br /&gt;
*[https://cloud-images.ubuntu.com/vagrant/ Vagrant Ubuntu VMs images] Virtualbox&lt;br /&gt;
*[https://www.vagrantup.com/docs/provisioning/ansible_intro.html Vagrant and Ansible provisioner] Vagrant docs&lt;br /&gt;
*[https://manski.net/2016/09/vagrant-multi-machine-tutorial/#multi-machine.3A-the-naive-way Vagrant Tutorial – From Nothing To Multi-Machine] Tutorial&lt;/div&gt;</summary>
		<author><name>Pio2pio</name></author>
	</entry>
</feed>