Friday, November 17, 2017

Nginx Ingress Controller on Bare Metal

After many hours of reading, trial-and-error, and general frustration… I have collected a few helpful bits regarding configuring the nginx ingress controller for a bare metal configuration.

The Challenge:
Deploy a k8s environment to host a small collection of apps behind a single routable IP address. Each app must:
  • manage its own SSL certificate from Lets Encrypt
  • be addressed with name-based routing and also support rewrites correctly (e.g., if www.example.com redirects to /login, the request must not be sent to the wrong back-end service)
  • whenever possible, leverage a fully automated end-to-end deployment pipeline, all in-house, within the cluster (Jenkins, private repos, etc. - will be in a different post)

The Journey:
I initially struggled with the nginx ingress controller because of some of its default parameters and a general bias the controller has toward running in a cloud provider's IaaS, such as AWS or GCE. This also led to a general lack of examples and documentation for the scenario I was trying to solve. Specifically:
  • The controller defaults to redirecting http to https. I had to turn this off to be able to test http-only services. I did this by adding 'ssl-redirect: "false"' to the data section of the ingress controller's configmap.
  • It also implements a strict HSTS configuration. This caused the temp certs created during setup to become "stuck" in my browsers and led me down troubleshooting rabbit holes which were not relevant or fruitful. To correct it, I had to set these values in the ingress controller configmap:
      data:  
        hsts: "true"  
        hsts-include-subdomains: "true"
        hsts-max-age: "0"  
        hsts-preload: "false"
  • Using the helm chart for the ingress controller installation did not work as desired, so I installed manually from yml files which I massaged from the nginx ingress controller repo and examples. All told, I wound up with a series of 6 scripts which I applied sequentially (I'll publish these later - time permitting):
    • 01.default-backend.yml
    • 02.default-backend-svc.yml
    • 03.ingress-controller-configmap.yml
    • 04.ingress-controller-svc.yml
    • 05.ingress-controller-rbac-roles.yml 
    • 06.ingress-controller-deploy-rbac.yml
  • Configuring the kube-lego package was also a challenge, as getting the cert validation step to work required the site to be routable before it was secured. It also exposed the temp self-signed cert which led me to the issues above with HSTS. Ultimately, I learned I needed to use these parameters when installing the lego chart:

    helm install stable/kube-lego --namespace <my_namespace> --name <my_deploy_name> --set config.LEGO_EMAIL=<my_email>,config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory,config.LEGO_SUPPORTED_INGRESS_CLASS=nginx,config.LEGO_SUPPORTED_INGRESS_PROVIDER=nginx,config.LEGO_DEFAULT_INGRESS_CLASS=nginx,image.pullPolicy=Always,rbac.create=true,rbac.serviceAccountName=<my_sa_name>

    I hope it goes without saying that the <> parts are variables for which you should substitute your own values.
  • I needed something other than the default backend running to confirm when I had the correct settings for the ingress. The easiest thing to use wound up being the http-svc described as a prerequisite in the nginx ingress controller repo: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md
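Putting the configmap tweaks above together, a minimal sketch of the controller's configmap might look like the following. The name and namespace here are assumptions on my part - they must match whatever your controller deployment's --configmap argument points at:

```yaml
# Sketch: nginx ingress controller ConfigMap combining the settings above.
# name/namespace are assumed values; align them with your controller deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: kube-system
data:
  # disable the default http -> https redirect so http-only services are testable
  ssl-redirect: "false"
  # neutralize HSTS so temporary self-signed certs don't get pinned in browsers
  hsts: "true"
  hsts-include-subdomains: "true"
  hsts-max-age: "0"
  hsts-preload: "false"
```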
As a side note - besides the documentation for each of the projects involved, and the k8s docs, I found this site to be VERY helpful:  http://stytex.de/blog/2017/01/25/deploy-kubernetes-to-bare-metal-with-nginx/
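For reference, kube-lego watches for ingresses carrying the kubernetes.io/tls-acme annotation. A sketch of such an ingress follows - the host, secret, and service names are placeholders I made up, not values from my setup:

```yaml
# Sketch: an Ingress that kube-lego will request a certificate for.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app
  annotations:
    # tells kube-lego to request/renew a cert for the hosts listed under tls
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-app-tls   # kube-lego stores the issued cert here
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-app
              servicePort: 80
```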

The moral of the story is this - routing in k8s is complex enough, but the examples readily available for those of us trying to apply this to an on-premise and/or bare metal deployment have a lot of gaps. That leaves us to hunt and search to find the materials we need. Keep good notes and share with all, as the troubleshooting will be critical to everyone getting better with Kubernetes. I am just trying to do my part.

Tuesday, October 17, 2017

Just When You’ve Figured Things Out… Life Gets Cloudy

I have been on a personal journey of late, trying to define this next chapter in my professional life. For the first twenty years of my professional career, I was explicitly focused on email and collaboration. At a young age, I realized there is genuine need for businesses to help people communicate faster, to break down barriers of communication, and help people share knowledge despite time and space. I am an email & collaboration expert!

As time ticked by, email turned to instant messaging, which turned to collaboration, then to social collab, until finally these technologies became commoditized. A differentiator no more: everyone has multiple email addresses and is now capable of self-organizing into collaborative groups with digital presences and mature tools for sharing and communicating. I found myself asking: Where does that leave me?

Much like the aging technology stack I was aligned with, the people involved also need to repurpose. As a technical and organizational leader in my past few roles, I have found joy in working with younger technologists and watching them grow and thrive in the technology world. I thought I had it figured out… I am a coach!

Then I joined a start-up...

If you have ever worked with a start-up company, then you appreciate the need to wear any and every hat not being worn by anyone else at that moment. The ever-shifting, extremely agile and exciting world of tech start-ups is anything but boring! It also leads to a greatly reduced need for mentors and coaches, so… what does this IT dinosaur, with commoditized technical skill and no one to lead, do? I EVOLVED!

There are many emotions attached to my journey… concern, worry, nervousness - but also excitement, exhilaration, wonder. I found a parallel which I could understand. Businesses need to change, and they turn to cloud computing to do that… I need to change, and I also can turn to cloud computing to do that. Wait, what?!? Indeed! The fundamental realization that hit me was this:

Businesses need nimble IT tools AND nimble technologists to succeed.

A movement which would allow me to leverage the technical knowledge I have acquired from building cross-platform, cross-function social-collaboration portals - and building teams from around the globe to deliver them - is the world of DevOps! I recognized not only an opportunity to be completely relevant, but that I am uniquely qualified in this growing discipline. So I made a checklist:

  • Experience with SDLC - CHECK
  • Knowledge of architecture patterns - CHECK
  • Functional knowledge of modern development frameworks, languages and conventions - CHECK
  • Enterprise deployment and support experience - CHECK
  • Knowledge and experience with… (you get the picture)

As the list grew, I recognized that I have an extremely broad set of skills which are not just interesting, but critical to becoming a responsible participant and leader in a DevOps culture. For the last two years, our modest company has been building toward what is now an extremely talented and skilled staff in end-to-end cloud technologies. The discipline which seems to span and touch the entire gamut of these concepts is DevOps. I am a DevOps Architect!

I have trained in and studied XP, Agile, and many of the wonderful contributions brought to the mature software development practice by the likes of Martin Fowler and other leaders in application development methodologies. Then I sought out and consumed copious hours of training and study on Docker, Kubernetes, and modern containerized computing. Much like my career has been building toward this for the last 20 years, so have these disciplines… even longer! (admittedly closer to 30 years) Thanks to pioneers and visionaries, I have a place to grow into again. A discipline which not only encourages cross-discipline knowledge, but requires one to act as both developer and sys-admin. To write code as if you will be supporting it… because you do! To build orchestrations which are elegant and reduce barriers to delivery, not create them… because you will use them too!

The moral of the story is this… change is inevitable. Yes, sometimes scary, but always necessary in some form or fashion for anyone to grow. For some this is more obvious than others. To avoid becoming obsolete in a world that communicates and moves faster than ever, we need to be agents of change, and it starts with us as individuals. I embraced this change, and found that building DevOps practices and systems to help enterprises be more nimble and poised to respond to market changes is every bit as exciting and rewarding as it was to give them email, or a knowledge base… or more rewarding!


Coda does cloud. Cloud enables rapid transformation and growth.

Coda does cloud transformation. Cloud transformation is leveraging cloud native technologies, architectures, development patterns and tools to transform your business or enterprise.

True transformation takes every type of discipline… creative, development, architecture, engineering, operations. They all must change and grow together to be transformational… and that is DevOps: converging all technical practices into a single team which is an agent of change, innovation and growth.

So who am I? I am Coda… agent of change, bringer of cloud transformations, and disruptor of technology… and we are here to help!

Thursday, October 5, 2017

Ingress Routing for ICP

I have spent time this week trying to play through a few more customer scenarios with IBM Cloud Private.  The issue under test this week was related to a set of services which all want to use the same port, but need to be mapped to either different URLs OR be mapped to different paths on the same URL.  We know Kubernetes is capable of this, but is there anything different in doing this with ICP, which already has some opinions about things like which ingress controller to use?

I started with the basics and deployed a simple, containerized application into my ICP environment.  This blog post outlines some of the basics for exposing an application running in ICP, and I used it to help with deploying a container and exposing the app.  (For a more complete experience, I also ran through this with a node.js app which my firm wrote, built it with Jenkins, and placed the image into the ICP repo.  When deploying from the ICP Docker repo, you must specify the image with the repo address like this: mycluster.icp:8500/default/helloworld:latest where the values are <internal_cluster_url>:8500/<namespace>/<image>:<tag>)
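As a sketch, the image reference from the parenthetical above is just those four pieces joined together. The values here mirror the example - they are not reusable as-is, so substitute your own cluster address, namespace, image and tag:

```shell
# Compose an ICP registry image reference from its parts.
# These values mirror the example above; substitute your own.
CLUSTER_URL="mycluster.icp"   # <internal_cluster_url>
REGISTRY_PORT="8500"
NAMESPACE="default"
IMAGE="helloworld"
TAG="latest"

IMAGE_REF="${CLUSTER_URL}:${REGISTRY_PORT}/${NAMESPACE}/${IMAGE}:${TAG}"
echo "${IMAGE_REF}"
# then, roughly: docker tag helloworld:latest "${IMAGE_REF}" && docker push "${IMAGE_REF}"
```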

With an application deployed, and a couple of instances running, I set out to work through the ingress configuration to make the site available in a fan-out pattern to start with.  For this exercise, I was using the apache web server, so I called the app static.  The path on which the app was to be exposed was <proxy-address>/static/.

I created an Ingress by navigating to the Applications link in the menu, and selecting the settings icon on the right side of the screen corresponding to the application I deployed.

The form to configure the ingress looked like this:


I realized there was no easy way to define path-based routing.  I attempted to add the path to the URL and adjusted settings in numerous fields, until ultimately I concluded this cannot be done through the ICP form interface... NO PROBLEM!  I switched on JSON mode and we were off to the races.

I navigated to my ingress record, selected Edit, and clicked the toggle at the top to manipulate my entry.

{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "default-static-demo",
    "namespace": "default",
    "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/default-static-demo",
    "uid": "ac450ec0-aa0d-11e7-9443-0800279d0b70",
    "resourceVersion": "359332",
    "generation": 4,
    "creationTimestamp": "2017-10-05T20:42:15Z",
    "annotations": {
      "ingress.kubernetes.io/rewrite-target": "/"
    }
  },
  "spec": {
    "rules": [
      {
        "http": {
          "paths": [
            {
              "path": "/static/",
              "backend": {
                "serviceName": "default-static-demo",
                "servicePort": 80
              }
            }
          ]
        }
      }
    ]
  }
}

By appending the necessary bits and clicking submit, ICP stored my new configuration, incremented my generation number, and triggered the steps to update the configuration of my cluster to serve up my application on <host>/static/.  The rewrite directive handled relative URL rewriting and everything.

Next steps for me will be to convert this into a yaml template so I can bundle this into my next helm chart!
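As a head start on that conversion, the JSON above translates to YAML roughly as follows. I have dropped the read-only fields ICP generated (selfLink, uid, resourceVersion, generation, creationTimestamp), since a template should not carry them:

```yaml
# YAML equivalent of the ingress above, minus the server-generated metadata.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-static-demo
  namespace: default
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /static/
            backend:
              serviceName: default-static-demo
              servicePort: 80
```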

To make the application work for a different host (static.<host>, for example), I was able to use the form method and place the desired URL in the hostname field as depicted above.


Tuesday, September 19, 2017

Jenkins CI / CD in ICP

As we continue our exploration of IBM Cloud Private (ICP), we wanted to be able to grab some of our existing CI patterns and port them into ICP as opposed to our own Docker Swarm based technology stack.  Someone shared this link with me as a starting point:

https://medium.com/ibm-cloud/running-jenkins-ci-cd-deployments-in-an-ibm-cloud-private-environment-525d5eff33dc

It was a good place to begin and describes the steps to get started, but there were some gaps in this set of instructions because they were created with an older version of what is now ICP.

Tips from my experiences:
  1. Do NOT configure Jenkins executors.  The ICP Jenkins chart will automatically spawn disposable slaves to build apps and images.  Be patient while the jobs sit in a pending state until the slave is built and communicates with the master... well, until all the magic stuff that makes it work happens.  If you configured your build step correctly, it will eventually get there.
  2. If building a Docker container, the image repository on a default ICP installation has a different address than the one in the instructions.  The correct default repo address is:
    https://mycluster:8500
  3. When building your registry credential entry in Jenkins, the name of the variable is flexible.  You can make it something that is meaningful to you, but remember that the username and password are the ICP admin user information you specified at installation time, unless you created your own set of credentials in the namespace.
  4. Building and replacing the image in the repo does not update the running container on its own.  Writing a chart and leveraging the Kubernetes native way to manage release lifecycle would be the best native way to handle releases without needing to write it yourself.
This is just the tip of the iceberg, as we explore patterns of end-to-end CI/CD with the ICP platform and the tooling included and available within it.  For Jenkins itself, there are many apparent opportunities for optimizing the pattern, such as triggering deployments from SCM instead of needlessly polling the source management system, but those are not in the scope of this post, as it is merely an addendum to the starter pattern already available.

Cheers!

Tuesday, August 29, 2017

ICp CLI Access

I was fortunate to have some time off and be able to spend it with my family last week.  As a result, I needed to turn off my ICp instance for that time, to then resume my investigation and learning this week.  I decided to pick it back up by configuring the kubectl CLI to work against my instance.

Immediately upon starting, two issues were encountered:
  1. My Certificates Expired
    Apparently this was the result of having installed before last week; the generated certificates had since expired.  There is an easy script which updates your certs for you:
    https://www.ibm.com/developerworks/community/blogs/fe25b4ef-ea6a-4d86-a629-6f87ccf4649e/entry/Certificate_update?lang=en
  2. Unable to Log Into CLI on Day 2
    This second issue is due to the way security is implemented, and requires one to set up a service account to avoid this being a task every 12 hours.  Instructions can be found here:
    https://www.ibm.com/developerworks/community/blogs/fe25b4ef-ea6a-4d86-a629-6f87ccf4649e/entry/Configuring_the_Kubernetes_CLI_by_using_service_account_tokens1?lang=en
Hopefully these tips will help more than just future-me, and will get others back up and running quickly.

Thursday, August 17, 2017

IBM Cloud private - Initial Impressions and Notes

This week I was introduced to IBM Cloud private.  It is an offering from IBM which brings together a host of open-source PaaS and cloud orchestration tools, some home-grown IBM IP, and many of the things I have been surrounded by for the last two years or so.  While the move to cloud is not really a new one, there are still a large number of enterprises which are just starting to ramp up their efforts in this arena.  Many cite the need for security or compliance, while others just have tech that works for running their business and have not seen enough compelling information to make cloud transformation a priority yet.  Regardless of the reason, we have now seen a series of opportunities to work with many of these customers as they embark on their journey of embracing cloud technologies.  It is the intention of IBM to satisfy some of the needs of customers who have found reasons to avoid the cloud, and to help them make their own cloud in which they can enforce their needs and policies while enabling developers to leverage the virtues of cloud, continuous deployment and micro-service architectures.

IBM is very upfront about the fact that ICp is powered by Kubernetes, Docker and ELK.  It brings an installer built on Ansible to the party, which makes setting up a test environment relatively simple.  Included in the stack you will also find familiar names like Calico, which manages networking.  All told, the suite of technology included is very modern and current.  The whole thing is wrapped in a management console which is elegant and modern looking.

To get started, I simply needed an Ubuntu LTS system with Docker 1.12 or newer installed.  I grabbed my trusty Ubuntu image and built a vm, updated with latest patches, and installed Docker CE following these instructions.

With my newly minted Ubuntu box, I then followed the instructions found here to download and install the ICp community edition.  Before installing, be certain to follow the steps in the "Configuring your cluster" document, even if you are setting up a single system; there are specific Docker settings to pay attention to.  The other configuration I decided to do was to set up key-based authentication for SSH and make sure the key file was put in the prescribed locations.  When I reached the step of configuring my /cluster/hosts, I specified a single IP address for all three roles.  This was intentional, as I was setting up a test system, but it would be more correct to at least separate out the worker into its own VM.  (Clustering is a topic to address in a different post.)

After a little patience, I was presented with the URL to log in and manage the ICp deployment I just setup.  The first thing I did was install Kubectl on the host so I had a CLI in addition to the GUI for managing the system.  Up next for me will be adding workers to the installation, adding additional application repositories and working out custom tool chains for continuous delivery.

Happy clouding!
