Oracle Linux Cloud Native Environment has gained some notable additions. Specifically, three core components for unified management: the Oracle Linux Cloud Native Environment Platform API Server, Platform Agent and Platform Command-Line Interface (CLI). These new open source management tools simplify the installation and day-to-day management of the cloud native environment, and provide extensibility to support new functionality.
If you would like to know more about these core concepts, read this article.
Last week, Oracle announced the general availability of Oracle Linux Cloud Native Environment Release 1.1. This release includes several new features for cluster management, updates to the existing Kubernetes module, and introduces new Helm and Istio modules.
Developers often want a quick and simple setup while developing a solution. In this article, I will show you how a developer can stand up a single-node Kubernetes environment using Oracle Linux Cloud Native Environment.
Please note: while this method provides a reusable and extensible way to provision Oracle Linux Cloud Native Environment (OLCNE) on Oracle Cloud Infrastructure (OCI), it is meant for learning and experimenting with OLCNE. It is neither ready for production use nor a replacement for Oracle Container Engine for Kubernetes (OKE). If you're deploying your Kubernetes cluster on OCI, you should strongly consider using OKE.
I strongly suggest using OCI Cloud Shell to deploy this whole solution, as it doesn't require any binaries to be installed locally or any other configuration.
Deploy the OLCNE Environment
Log in to your OCI environment and prepare the environment variables that you will need for the oci cli commands that create the VM and configure OLCNE. First, generate an SSH key pair:
$ ssh-keygen -t rsa
Accept the default to save the ssh keys to the $HOME/.ssh/ directory. This is required for you to access the instance later on and perform further operations.
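If you would rather skip the prompts, the key pair can also be generated non-interactively; a one-liner sketch (-N "" sets an empty passphrase, -f sets the output path):
$ ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa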
You need to get the OCID of the compartment where you want to deploy your instance. Run this command to get it, changing compartment-name to the name of your compartment:
$ export compid=`oci iam compartment list | jq -r '.data[] | select(.name | contains("compartment-name")) | .id'`
You also need the Availability Domain name for your environment. I will assume that you will deploy the VM into your first AD. Run this command to get it:
$ export ad=`oci iam availability-domain list | jq -r .data[0].name`
The next thing you need is the image OCID of the OS image that you want to use. In this case, use Oracle Linux; its latest version on OCI is Oracle Linux 7.8. To get the OCID, run this command:
$ export imageid=`oci compute image list -c $compid --display-name "Oracle-Linux-7.8-2020.04.17-0" | jq -r .data[0].id`
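If that exact image name has been rotated out by the time you try this, you can list the currently available Oracle Linux images and substitute the display name; a quick lookup:
$ oci compute image list -c $compid --operating-system "Oracle Linux" | jq -r '.data[]."display-name"'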
You need to know the subnet OCID, and for that you need the VCN OCID as well. Run this to get the VCN OCID first, changing name-of-vcn to the name of the VCN that you have created in your tenancy:
$ export vcnid=`oci network vcn list -c $compid | jq -r '.data[] | select(."display-name" | contains("name-of-vcn")) | .id'`
Now, to get the subnet OCID (this picks the subnet whose display name contains "Public"), run this command:
$ export subnetid=`oci network subnet list --compartment-id $compid --vcn-id $vcnid | jq -r '.data[] | select(."display-name" | contains("Public")) | .id'`
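Before launching anything, it is worth a quick sanity check that each variable resolved to a value; empty output means the corresponding name filter (compartment, image, VCN, or subnet) needs adjusting:
$ echo "$compid"; echo "$ad"; echo "$imageid"; echo "$vcnid"; echo "$subnetid"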
You are almost there. Now you need to create the cloud-init script within the Cloud Shell and run the oci cli command to bootstrap the environment. This is the Bash shell script that you need to use. Create a file named olcne-deploy.sh and paste this:
#!/bin/sh
# Disable the OS Management Service agent and configure the yum repositories for OLCNE 1.1
sudo systemctl stop osms-agent
sudo osms unregister
sudo sed -i 's/enabled = 1/enabled = 0/' /etc/yum/pluginconf.d/ulninfo.conf
sudo yum-config-manager --disable ol7_developer_EPEL
sudo yum install -y oracle-olcne-release-el7
sudo yum-config-manager --enable ol7_olcne11 ol7_kvm_utils ol7_addons ol7_latest ol7_UEKR5
sudo yum-config-manager --disable ol7_olcne
# Disable swap and set SELinux to permissive, as Kubernetes requires
sudo swapoff -a
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
sudo /usr/sbin/setenforce 0
# Open the firewall ports used by the OLCNE platform and Kubernetes
sudo firewall-cmd --add-port=8091/tcp --permanent
sudo firewall-cmd --zone=trusted --add-interface=cni0 --permanent
sudo firewall-cmd --add-port=8090/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --add-port=10255/tcp --permanent
sudo firewall-cmd --add-port=8472/udp --permanent
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=10251/tcp --permanent
sudo firewall-cmd --zone=public --add-port=10252/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2379/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2380/tcp --permanent
sudo systemctl restart firewalld
# Load the br_netfilter kernel module required by Kubernetes networking
sudo modprobe br_netfilter
sudo sh -c 'echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf'
# Install and enable the Platform CLI, Platform API Server and Platform Agent
sudo yum install -y olcnectl olcne-api-server olcne-utils olcne-agent
sudo systemctl enable olcne-api-server.service
sudo systemctl enable olcne-agent.service
# Gather the node's hostname, DNS search domain and primary IP address
cd /etc/olcne
export HOST=`hostname -f`
export DNS=`cat /etc/resolv.conf | grep -i search | awk '{print $3}'`
export IPADDR=`ifconfig | grep -w inet | grep -v 127. | grep -v 10.244 | awk '{print $2}'`
# Generate certificates for the node, then bootstrap the Platform API Server and Platform Agent
sudo ./gen-certs-helper.sh --cert-request-organization-unit "My Company Unit" --cert-request-organization "My Company" --cert-request-locality "My Town" --cert-request-state "My State" --cert-request-country US --cert-request-common-name $DNS --nodes $HOST
sudo /etc/olcne/bootstrap-olcne.sh --secret-manager-type file --olcne-node-cert-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.cert --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert --olcne-node-key-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.key --olcne-component api-server
sudo /etc/olcne/bootstrap-olcne.sh --secret-manager-type file --olcne-node-cert-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.cert --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert --olcne-node-key-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.key --olcne-component agent
# Create the OLCNE environment and the Kubernetes module, then install it
olcnectl --api-server 127.0.0.1:8091 environment create --environment-name myenvironment --update-config --secret-manager-type file --olcne-node-cert-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.cert --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert --olcne-node-key-path /etc/olcne/configs/certificates/tmp-olcne/$HOST/node.key
olcnectl --api-server 127.0.0.1:8091 module create --environment-name myenvironment --module kubernetes --name mycluster --container-registry container-registry.oracle.com/olcne --apiserver-advertise-address $IPADDR --master-nodes $HOST:8090
olcnectl --api-server 127.0.0.1:8091 module install --environment-name myenvironment --name mycluster
# Configure kubectl and allow workloads on the master node (single-node cluster)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
kubectl taint nodes --all node-role.kubernetes.io/master-
# Create the Istio module together with its Helm module, then install both
olcnectl --api-server 127.0.0.1:8091 module create --environment-name myenvironment --module istio --name myistio --helm-kubernetes-module mycluster --istio-helm-module myhelm
olcnectl --api-server 127.0.0.1:8091 module install --environment-name myenvironment --name myhelm
olcnectl --api-server 127.0.0.1:8091 module install --environment-name myenvironment --name myistio
kubectl label namespace default istio-injection=enabled
# Fetch the MuShop sample retail application and deploy it with its Helm chart
sudo yum install -y git
git clone https://github.com/oracle-quickstart/oci-cloudnative
cd oci-cloudnative/deploy/complete/helm-chart
helm install mymushop mushop --namespace default --set global.mock.service=all
# Expose the application through the Istio ingress gateway
cat <<EOF > gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mushop-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mushop
spec:
  hosts:
  - "*"
  gateways:
  - mushop-gateway
  http:
  - route:
    - destination:
        host: edge.default.svc.cluster.local
        port:
          number: 80
EOF
kubectl apply -f gateway.yaml
Make this script executable by running:
$ sudo chmod +x olcne-deploy.sh
You are now all set to fire up the oci cli command. Run this now:
$ oci compute instance launch --availability-domain $ad --compartment-id $compid --shape VM.Standard2.4 --assign-public-ip true --display-name test --agent-config '{"isManagementDisabled":"true","isMonitoringDisabled":"false"}' --ssh-authorized-keys-file .ssh/id_rsa.pub --user-data-file olcne-deploy.sh --wait-for-state RUNNING --subnet-id $subnetid --image-id $imageid
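The launch command waits until the instance is RUNNING and prints its details. If you need the public IP later, you can also look it up with the CLI; a sketch, assuming the display name test used above:
$ export instanceid=`oci compute instance list -c $compid --display-name test --lifecycle-state RUNNING | jq -r .data[0].id`
$ oci compute instance list-vnics --instance-id $instanceid | jq -r '.data[0]."public-ip"'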
This will create an instance of shape VM.Standard2.4, deploy the OLCNE API Server and Platform Agent services, deploy the Kubernetes 1.17.4 module and the Istio 1.4.6 module, deploy a sample retail application using its Helm chart, and expose the application endpoint using the Istio ingress gateway.
As OLCNE doesn't have any direct integration with the OCI Load Balancer service the way OKE does, it won't be able to spin up an OCI LB and get a public IP for the Istio ingress gateway. You need to do that manually.
The process of deploying all of this takes about 8 minutes:
Startup finished in 2.006s (kernel) + 4.209s (initrd) + 7min 31.091s (userspace) = 7min 37.307s.
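If you want to watch the bootstrap while it runs, you can log in and tail the cloud-init output, and reproduce the timing line above once the instance settles (the log path is the standard cloud-init one; adjust if your image logs elsewhere):
$ sudo tail -f /var/log/cloud-init-output.log
$ systemd-analyze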
Verify the Deployment
Log in to your instance to verify the deployment (replace publicip with your instance's public IP address). Once you are logged in, run these commands to check and verify it.
$ ssh -i .ssh/id_rsa opc@publicip
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ export KUBECONFIG=$HOME/.kube/config
$ echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
$ kubectl get nodes
$ kubectl get po
$ kubectl get svc
$ kubectl get po -n istio-system
$ kubectl get svc -n istio-system
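To confirm the NodePorts mentioned next, you can also query the ingress gateway service directly; a minimal check using kubectl's jsonpath output (port 80's NodePort should come back as 31380):
$ kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'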
As you can see, the Istio ingress gateway redirects port 80 to NodePort 31380 and port 443 to NodePort 31390, so you can create a load balancer that forwards port 80 to the instance's port 31380 to access the retail application.
Create a Load Balancer & Access the Retail Application
- From the OCI Console, go to Networking -> Load Balancers.
- Click on Create Load Balancer.
- Specify a name and choose the visibility type as Public.
- Choose the shape; in this case, I choose 100Mbps.
- Choose the VCN and the subnet where this load balancer is going to be connected.
- Click on Next.
- Specify a Load Balancing Policy; in this case, I choose Weighted Round Robin.
- Change the Health check policy protocol to TCP and set the port to 31380.
- Click on Next.
- On the Configure Listener page, specify a name for the listener.
- Specify the type as HTTP.
- Specify the port as 80.
- Click on Submit.
Once the Load Balancer is created, you need to create a backend set and a backend.
- Click on the Load Balancer details and click on Backend Sets.
- Click on Create Backend Set.
- Specify a name, change the health check protocol to TCP and set the port to 31380.
- Once the backend set is created, click on the backend set and click on Backends.
- Select the OLCNE compute instance and click on Add.
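If you prefer to stay in the CLI for this step too, the backend can also be attached with oci lb; a minimal sketch, assuming $lbid holds your load balancer's OCID, $privateip the instance's private IP, and mybackendset the backend set name you chose above (all three are placeholders for your values):
$ oci lb backend create --load-balancer-id $lbid --backend-set-name mybackendset --ip-address $privateip --port 31380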
That's it. Your load balancer is now pointing to the node's port 31380, which redirects to port 80 of the Istio Ingress Gateway.
Open up another tab in the browser and type http://<loadbalancer-ip> and you should be able to access the retail application.
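You can also verify the endpoint from the command line first; a quick check (replace <loadbalancer-ip> with your load balancer's public IP):
$ curl -I http://<loadbalancer-ip>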