WordPress on Google Container Engine
In this post, I'm going to describe how to deploy the stateless docker container image from my previous post onto Google Container Engine. Container Engine is a hosted Kubernetes service, which means this setup is based on Kubernetes-specific deployment configurations. Building our configurations on Kubernetes decouples us from the cloud provider, so we can later switch to another Kubernetes-based service or even host our own cluster. Google Container Engine provides a fully managed solution with great flexibility for scaling nodes, provisioning resources, and deploying docker containers with all of our applications' dependencies onto the nodes. We are also going to use the hosted MySQL service Cloud SQL for the WordPress database. This post builds on the stateless WordPress image from my previous post linked here.
Google Cloud Setup
If you haven't already, register for Google Cloud and install the gcloud command line tools. We are going to use the command line for certain tasks; it is a powerful tool for managing almost all of your cloud resources. You can download and install it from the Google docs here.
We also install another command line tool called kubectl. It is specific to Kubernetes and is used to manage all resources and configurations of the Kubernetes cluster.
gcloud components install kubectl
After you have installed the command line tools, connect your command line to your cloud account via the following command.
gcloud auth application-default login
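If you work with multiple projects, it also helps to set a default project for all subsequent commands. The project id below is a placeholder; replace it with your own.
# set the default project for the following gcloud commands
gcloud config set project my-project-id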
Container Cluster Setup
First we have to set up a new container cluster. This can be done via the UI in the Google Cloud Console or via the following commands using the gcloud command line.
Creating a new container cluster with gcloud
- List the available zones
For easier command line use, we set our desired compute zone. This determines in which data center the cluster will be started.
gcloud compute zones list
- Select a zone from the list
gcloud config set compute/zone $zone
- Create new cluster
To save costs, we provide some additional settings like machine type and node count.
gcloud container clusters create example-cluster --machine-type g1-small --num-nodes 2 --no-enable-cloud-endpoints --no-enable-cloud-monitoring
- Fetch credentials so we can control the cluster via the kubectl command line tool
gcloud container clusters get-credentials example-cluster
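As a quick check that kubectl is now talking to the new cluster, you can list the nodes; the two g1-small nodes we created should show up.
# verify the connection to the cluster
kubectl get nodes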
Cleaning up with gcloud
- To temporarily stop the cluster, resize it to zero nodes
gcloud container clusters resize example-cluster --size=0
- To delete the cluster
gcloud container clusters delete example-cluster
CloudSQL Setup
In our setup, we are going to use the 2nd generation MySQL instances. I would advise you to use the guided UI to create an instance. Visit the docs here. Make sure to save your newly set root password. Some settings that I chose for simplicity and cost savings:
- instance type: db-f1-micro
- storage: 10 GB
- disk type: HDD
- authorized networks: none
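If you prefer the command line over the UI, a rough equivalent could look like the following sketch. The instance name and region are placeholders, and the flag names may differ between gcloud versions.
# create a 2nd generation Cloud SQL instance (names and flags are illustrative)
gcloud sql instances create my-wordpress-db \
  --tier=db-f1-micro --storage-size=10GB --storage-type=HDD \
  --region=us-central1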
Now we have to create a service account to allow our Kubernetes Cluster to talk to the CloudSQL instance.
Setup Kubernetes to connect to the Cloud SQL instance
- Enable Cloud SQL API
- Create Service Account
As Role, select Cloud SQL > Cloud SQL Client. Check "Furnish a new private key" with JSON format and download your private key.
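If you prefer the command line, the same steps could roughly look like this sketch; the service account name, project id, and key file name are placeholders:
# create the service account, grant it the Cloud SQL Client role
# and download a JSON private key (names are illustrative)
gcloud iam service-accounts create cloudsql-client
gcloud projects add-iam-policy-binding my-project-id \
  --member=serviceAccount:cloudsql-client@my-project-id.iam.gserviceaccount.com \
  --role=roles/cloudsql.client
gcloud iam service-accounts keys create downloaded-privatekey.json \
  --iam-account=cloudsql-client@my-project-id.iam.gserviceaccount.com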
- Create a custom user for access via the Container Engine Cluster
In the Cloud Console, select your database -> Access Control -> Users. Create a user "kubernetes" and choose a password.
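Alternatively, the user can be created via gcloud; the exact syntax differs between gcloud versions, and the username/password must match the secrets we register below.
# create a database user for the cluster (choose a real password)
gcloud sql users create kubernetes --instance=my-wordpress-db --password=kubernetes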
- Register the Private Key as a secret in the Container Engine Cluster
kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=downloaded-privatekey.json
- Register user/password as secrets in the Container Engine Cluster
kubectl create secret generic cloudsql --from-literal=username=kubernetes --from-literal=password=kubernetes
- List the saved secrets to verify
kubectl get secrets
Importing Data into Cloud SQL
There are multiple ways to import your exported .sql database dumps. There is now even an "Import" option on your instance page in the Cloud Console to import data via the UI. I prefer to use PHPMyAdmin to manage the database. With docker, we can run PHPMyAdmin locally and connect it to the CloudSQL instance. For this to work, we first have to run the Cloud SQL Proxy application, which creates a tunnel from a local port on our machine to the database in the cloud.
Starting local Cloud SQL Proxy Container
Google provides a pre-built docker container image which includes the proxy application. To start a container on our machine, we can execute the following command.
docker run --name cloud_sql_proxy -d \
-v /etc/ssl/certs:/etc/ssl/certs \
-v $PRIVATE_KEY:/credential.json \
-p 127.0.0.1:3306:3306 \
b.gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy \
-instances=$INSTANCE=tcp:0.0.0.0:3306 -credential_file=/credential.json
Replace $PRIVATE_KEY with the full path to your previously downloaded service account private key file. Replace $INSTANCE with the string found on your CloudSQL instance details page under Properties > "Instance connection name". If you cannot find it there, the string consists of "project-id:region:instance-name".
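If you do not want to look up the connection name in the UI, you can also query the instance details with gcloud; the instance name is a placeholder.
# print the instance details including the "connectionName" property
gcloud sql instances describe my-wordpress-db | grep connectionName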
Starting local PHPMyAdmin Container
Now that the Cloud SQL Proxy is running, we start another local docker container with the publicly available phpmyadmin image and link it to the proxy. We map container port 80 to port 8090 on our local interfaces.
docker run --name phpmyadmin -d --link cloud_sql_proxy:db -p 8090:80 phpmyadmin/phpmyadmin
Open your browser at 127.0.0.1:8090. You should see the PHPMyAdmin login page. Log in with your root account credentials and import your WordPress database onto the Cloud SQL instance.
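If you would rather skip PHPMyAdmin entirely, the plain mysql client works over the same proxy tunnel; the database and dump file names below are placeholders.
# create the database and import the dump through the local proxy
mysql -h 127.0.0.1 -P 3306 -u root -p -e "CREATE DATABASE wordpress"
mysql -h 127.0.0.1 -P 3306 -u root -p wordpress < wordpress-dump.sql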
Uploading your WordPress Docker image to Google Container Registry
Before we are able to run our WordPress image on our cluster, we first have to upload it to Google Container Registry. The Registry is a hosted docker image repository service which allows us to upload custom-built images and access them from our Container Engine cluster. To enable image uploads, we have to activate the Registry API first: in your Google Cloud Console project, go to API Manager -> search for "Registry" -> select and activate Google Container Registry.
The sample folder in the wordpress-stateless repo contains the Dockerfile that I'm using for this example.
In the sample folder, execute the following commands.
# build and tag the image locally with the Dockerfile in the current dir "."
docker build -t my-wordpress .
# tag the image with the full url where the image will be hosted
docker tag my-wordpress us.gcr.io/$PROJECT_ID/my-wordpress:v1
# upload the image to Google Container Registry
gcloud docker -- push us.gcr.io/$PROJECT_ID/my-wordpress:v1
Replace $PROJECT_ID with your project id.
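If you are unsure about your project id, you can print the currently configured one with gcloud.
# print the configured project id
gcloud config list project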
Kubernetes
Deployment
Now we want to create a deployment configuration for our WordPress container image. For this, we create a new wordpress-deployment.yml file in the sample folder with the following contents.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-wordpress
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: my-wordpress
    spec:
      containers:
        - image: $IMAGE_URL
          name: my-web
          env:
            - name: WORDPRESS_DEV
              # Show error logs
              value: "true"
            - name: WORDPRESS_DB_HOST
              # Connect to the SQL proxy over the local network on a fixed port.
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: $WORDPRESS_DB_NAME
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            # [END cloudsql_secrets]
          ports:
            - containerPort: 80
              name: wordpress-nginx
        # Change $INSTANCE here to include your GCP project,
        # the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # -instances=$PROJECT:$REGION:$INSTANCE=tcp:3306.
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=$INSTANCE=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
      # [END volumes]
Replace $IMAGE_URL with the previously tagged full image URL. Replace $WORDPRESS_DB_NAME with the name of your imported database. Replace $INSTANCE with the same "Instance connection name" string used above for the local Cloud SQL proxy.
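To avoid editing the file by hand every time, you could keep the placeholders in a template file and substitute them with sed; the template file name and all values below are illustrative.
# fill in the placeholders (values are examples, adjust them to your setup)
sed -e "s|\$IMAGE_URL|us.gcr.io/my-project-id/my-wordpress:v1|" \
    -e "s|\$WORDPRESS_DB_NAME|wordpress|" \
    -e "s|\$INSTANCE|my-project-id:us-central1:my-wordpress-db|" \
    wordpress-deployment.tpl.yml > wordpress-deployment.yml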
The deployment configuration looks a bit like a docker-compose.yml config file. There are some similarities, like the ability to define multiple containers and to define volumes that are mapped into the containers. Kubernetes has a concept called Pods, which allows us to deploy multiple containers together as a single entity onto a node. Since we need the cloudsql-proxy container to connect to the Cloud SQL database instance, this is a perfect use case for a deployment configuration with a pod that consists of two containers: our wordpress image and the cloudsql-proxy image.
ENV Variables
We use the env config setting to pass data into our wordpress image. The image contains startup scripts that read those env variables and reconfigure the wp-config.php file. To avoid storing sensitive password information in the deployment configuration file, we can use another Kubernetes feature that allows us to reference a secret defined in the cluster. In the CloudSQL Setup section we registered the database user and password in our cluster; now we can reference them in our configuration.
Volumes
In the volumes config setting we define 3 different types of data volumes for use with the volumeMounts setting in the container definition. The first volume "cloudsql-instance-credentials" references the service account private key secret in the cluster. The volume "ssl-certs" maps a host directory from the cluster node into the container; this is necessary to give the cloudsql-proxy access to the certificate authority files. The volume "cloudsql" is an empty directory where the cloudsql-proxy could save a file socket, which could then be referenced in multiple containers. In this config, however, we bind the cloudsql-proxy to port 3306 and connect from the wordpress container via localhost.
As you can see, only the cloudsql-proxy container uses volumeMounts to reference the defined volumes. The wordpress container does not need any outside volumes because it contains all the wordpress files inside the image we built beforehand. For the wp-content directory we are using a plugin to store uploaded files in Google Cloud Storage. The setup is intentionally done this way to enable the container to be scaled onto multiple nodes without having to set up a replicated file system.
Deploying the configuration onto the Cluster
Now we want to tell our cluster to download our image and deploy it. For this we are using the Kubernetes command line tool kubectl.
# Apply the deployment configuration
kubectl apply -f wordpress-deployment.yml
# View the deployment status
kubectl get deployments
# View the pods status
kubectl get pods
# Wait until both containers ( wordpress container / cloudsql proxy ) in the pod are running.
NAME READY STATUS RESTARTS AGE
my-wordpress-1422364771-s1vfv 2/2 Running 0 2m
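If the pod gets stuck in a non-running state, the container logs are the first place to look. The -c flag selects one of the two containers in the pod; use your own pod name from kubectl get pods.
# show the logs of the wordpress container inside the pod
kubectl logs my-wordpress-1422364771-s1vfv -c my-web
# show detailed pod state and recent events
kubectl describe pod my-wordpress-1422364771-s1vfv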
Now let's try to access the WordPress site to see if it is up and running. At the moment, the WordPress container is not yet accessible via a public IP. To check whether the deployment is running successfully, we can use the kubectl port-forward command to map a local port to a port on the container running in the cluster.
To listen on port 8080 locally, forwarding to port 80 in the container on the cluster, enter the following command.
# use the pod name listed by the "kubectl get pods" command
kubectl port-forward my-wordpress-1422364771-s1vfv 8080:80
Now when you open 127.0.0.1:8080 in your browser, you will notice that you get redirected to your pre-configured WordPress domain with "/wp-signup.php?new=127.0.0.1:8080" appended to the URL. This is actually a sign that the site is running and that the database connection is working. WordPress stores your site URLs in the database; when it receives a request with no matching URL, it redirects to the configured main site domain. To prevent this, we would have to replace the URLs in the database with a tool like WP-CLI (a sketch follows below). For now, we skip this and move forward by putting a Service layer in front to access WordPress via a public IP.
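For reference, such a URL replacement with WP-CLI could look like this sketch. The domain values are placeholders, and WP-CLI must be able to reach the database, for example locally through the Cloud SQL proxy.
# preview the URL replacement first, then run it again without --dry-run
wp search-replace 'http://www.mydomain.com' 'http://127.0.0.1:8080' --dry-run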
Service
Since our container pods can be started or scaled onto any node inside our Kubernetes cluster, we have to create a Service configuration to provide a unified way to access our WordPress installation. A Kubernetes Service is an abstraction layer on top of the deployments which automatically resolves the current location of our containers inside the cluster. The service can also act as a load balancer when you scale your containers onto multiple nodes, and it can create a public IP address so that anyone can access the site over the Internet.
We create a new file wordpress-service.yml with the following contents.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  labels:
    app: public-webservice
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-wordpress
Type
There are different types of services. We are using the "LoadBalancer" type. On Google Container Engine this creates a Google Network TCP Loadbalancer with a public IP address.
Deploying the service onto the Cluster
# Apply the service configuration
kubectl apply -f wordpress-service.yml
# View the service status
kubectl get services
NAME                CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes          10.23.240.1    <none>            443/TCP        2d
wordpress-service   10.23.246.77   104.111.111.111   80:32744/TCP   1m
Note the newly created EXTERNAL-IP address. This address is allocated as a static IP in the Google Cloud Console networking backend. You will be billed for the static IP, so do not forget to release it when you don't need it anymore.
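When the time comes, deleting the service releases the load balancer and its static IP again.
# remove the service and its public IP when no longer needed
kubectl delete service wordpress-service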
Now open the EXTERNAL-IP in your browser. If everything worked correctly, you should again get redirected to your WordPress domain name with the "/wp-signup.php..". To be able to browse the site, we can temporarily add a new entry to the /etc/hosts file with our configured WordPress domain name and the EXTERNAL-IP.
# Add Domain to the local name resolution
sudo sh -c "echo '104.111.111.111 www.mydomain.com' >> /etc/hosts"
On Windows, this file is in another location. You can google it, or perhaps rethink your OS selection.
Now when you refresh your browser, you should see your WordPress site. If not, try to clear your browser's DNS cache or check the contents of your /etc/hosts file.
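If you would rather not touch /etc/hosts at all, curl can pin the domain to the EXTERNAL-IP for a single request; the domain and IP below are the placeholders from above.
# send a request to the EXTERNAL-IP with the correct Host header
curl -I --resolve www.mydomain.com:80:104.111.111.111 http://www.mydomain.com/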
Next, you want to create a public DNS record for your domain pointing at the EXTERNAL-IP so anyone can visit your site. There are many DNS services that let you manage the DNS records for your domain.
Closing
The setup may at first seem quite complex, but if you want to leverage the flexibility and scalability of a Kubernetes cluster in conjunction with the power of the Google Cloud, creating only 3 configuration files ( Dockerfile, deployment.yml, service.yml ) is not that much effort compared to building a similar setup yourself. Using Container Engine, you can also deploy any other types of applications next to your WordPress site to build any kind of service.
You can improve and tweak this setup to fit your needs. I personally added another load-balancer layer based on nginx to manage my sites and do SSL termination. I'm also going to add a Caching container to store session information and other shared data to improve the scaling of containers.
If you have any questions about the setup, feel free to leave a comment below.