Sunday, December 17, 2017

SpringOne Platform 2017

The Recap

It was a great time at SpringOne.  The conference was sold out, and it was impressive to see how many people turned out for the event.  Below is a quick shot from before the keynote started.


There were a lot of announcements and a lot of great sessions.  If you were unable to attend or missed one of the sessions, they are all available on YouTube and SpringOne's site.

Some of the more buzzworthy topics during the conference were PCF 2.0, Kotlin, and Concourse.  My session on Automating PCF Upgrades using Concourse was well attended, and the audience asked a lot of great questions.  You can see a shot of me below talking about the challenge we were presented: automate our upgrades and stay current within 30 days.  If you want to see the full session, you can watch it here.


~RRRII

Sunday, October 29, 2017

SpringOne Platform 2017 Preview

Almost Here

In a little over a month, the SpringOne Platform conference will be taking place in San Francisco.  In my last post, I mentioned that I hoped to go to this event this year.  Well, not only am I going, but I will be speaking there.  I am scheduled to talk about automated upgrades for Pivotal Cloud Foundry and how you can use Concourse to accomplish this feat.

In the post Pivotal Operations Manager CLI, I briefly hinted at using the PCF Pipelines GitHub repo to help you start automating your upgrades.  In this talk, I will go into further detail on how you can use the pipelines, offer hints on customizing them for your environment, and show how you can expand them for multi-foundation support.  Hopefully, I can fit all of this into my half-hour time slot.  So if you are going, stop by my talk, Automated PCF Upgrades with Concourse, and say hello.

~RRRII

Sunday, June 25, 2017

Cloud Foundry Summit 2017

The Recap 

Another Cloud Foundry Summit has come and gone.  This year was just as good as, if not better than, last year.  The sessions were excellent, and it was good to meet up with everybody again.  If you were not there, or if you couldn't get to all of the sessions, here is the YouTube link to the ones that have been published.

One great announcement was that the Kubo project has been donated to the Cloud Foundry Foundation as an open source project.  This project was originally a joint venture between Pivotal and Google.  If you don't know what Kubo is, it is a BOSH release of Kubernetes.  The official GitHub page will give you more detailed information on it.

Another announcement during the keynote was that Microsoft is joining the Cloud Foundry Foundation as a Gold Member.  Microsoft offers Pivotal Cloud Foundry through their Azure service.  They have also integrated the Cloud Foundry CLI into their Azure Cloud Shell.

My Experience

Enough of what you probably heard or already read on the Internet about the conference.  The biggest benefit of being there was sharing ideas and stories with the rest of the community.  I met and talked with a lot of people during this conference.  Some of you were just starting to evaluate Cloud Foundry and wanted to hear how it has helped.  Others had been using it for a while and found that our experiences (good and bad) sounded very familiar.

As I said in the last post, my company (Express Scripts) was a sponsor at this year's summit.  I couldn't say enough good things about representing our company and showing support for this open source community.  Thank you to those of you reading this article who stopped by the booth to chat.  Not only did you get earbuds, but you also got to learn about who our company is and why what we do is so important.  We also got to show off the cool things we are doing with Cloud Foundry and hopefully gave you ideas for your own environment.  Below is a picture of all the smiling people working at this year's booth.


Summary

I would say this year's trip was a success.  I am also looking to go to the SpringOne conference this year in December.  Hopefully, there will be as good a turnout there as there was at Cloud Foundry Summit.  If you are going, I hope to see you there.

~RRRII

Sunday, June 4, 2017

Cloud Foundry Summit 2017 Preview

That Time Of Year

I have the opportunity again this year to go to the Cloud Foundry Summit Silicon Valley on June 13th - 15th.  This summit will be the second one I have attended.  I am looking forward to meeting up with this community and sharing ideas about what we have accomplished over the past year.  In that spirit of sharing, some of my coworkers and I will be staffing our company's booth.  Express Scripts is a bronze member sponsor for Cloud Foundry Summit 2017, so we will be there to answer questions and share our Cloud Foundry stories.  You can go to Cloud Foundry's site for more information and the list of events happening those days.

Cloud Foundry Certified Developer

One of the exciting things to come out of this event is the Cloud Foundry Certified Developer (CFCD) program.  More information can be found here.  Earning this certification shows your expertise in the fundamentals of Cloud Foundry and developing cloud-native applications.  The best part is that these skills transfer whether you are developing on an in-house Cloud Foundry instance or one hosted by a cloud provider.  I would highly recommend looking into this certification because companies will be looking for employees with these skills.  You can read some testimonials about what people look for when hiring new candidates.

More To Follow

Once I get back from this event, I will have more to share with what I saw and heard at the conference.  If you are going out to this event, stop by and say hello at our booth.

~RRRII

Sunday, April 2, 2017

Pivotal Operations Manager CLI

Introduction

This post is about a GitHub project from Pivotal Cloud Foundry (PCF): a CLI called om for interacting with Operations Manager (OpsMan).  This CLI has been a lifesaver when trying to automate any actions around upgrading components of PCF.

The Problem

When I first tried to interact with the OpsMan appliance, there was not much of a choice other than manually clicking around in the interface.  The OpsMan API was available, but interacting with it directly was cumbersome, so automating PCF upgrades was tricky.

The Solution

While looking for better ways to solve the PCF automated upgrade problem, I was directed to this CLI.  I was blown away by how easy it is now to interact with OpsMan.  You can see the full list of available commands on the README.md page.  But here is a sample workflow to get you started on your journey of automated product tile upgrades.
  1. Upload Product to OpsMan
  2. Stage Product to OpsMan
  3. Upload Stemcell to OpsMan
  4. Set Errand State to Enabled (Step not needed if running PCF 1.10)
  5. Apply Changes
  6. Set Errand State to Disabled (Again, not needed if running PCF 1.10)
  7. Apply Changes
All of these commands are available to use in the CLI and can be easily run inside of a Concourse pipeline.  But there are other problems to be solved now like the following:
  • How do I get the Pivotal product tile?
  • How do I know what stemcell to upload with the new product tile?
  • When I know what stemcell to download, how do I download it?
  • Is an Apply Changes already running in OpsMan, blocking new changes from being applied?
These questions are now real issues when trying to upgrade in an automated way.  These are the manual steps you will need to automate so that the upgrade requires little or no interaction.  It is out of the scope of this article to answer all of them, but Pivotal has provided some sample pipelines that do.
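As a sketch of how the workflow above might be wired into a Concourse task, here is a hedged example; the target URL, credentials, image, product name, version, and file paths are all placeholders, and the exact command flags should be checked against the om README:

```yaml
# Concourse task sketch: upload and stage a product tile with om.
# OPSMAN_TARGET, OPSMAN_USER, OPSMAN_PASSWORD, the image, and the
# product details below are placeholders for your environment.
platform: linux
image_resource:
  type: docker-image
  source: {repository: example-registry/om-image}  # placeholder: any image with om installed
inputs:
- name: pivotal-product   # the downloaded .pivotal file
- name: stemcell          # the matching stemcell
run:
  path: sh
  args:
  - -ec
  - |
    OM="om --target $OPSMAN_TARGET --username $OPSMAN_USER --password $OPSMAN_PASSWORD --skip-ssl-validation"
    $OM upload-product --product pivotal-product/*.pivotal
    $OM stage-product --product-name example-product --product-version 1.2.3
    $OM upload-stemcell --stemcell stemcell/*.tgz
    $OM apply-changes
```

Each step maps directly onto the numbered workflow, which is what makes this CLI such a natural fit for a pipeline.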

Summary

Hopefully, you find this om tool as beneficial as I have.  It will save you a lot of time staying up to date with upgrades so you can work on other cool stuff.

~RRRII

Saturday, January 28, 2017

Adding SSH Users To BOSH Deployments

Introduction

It has been a while since I have posted anything so let's start off 2017 with some work using BOSH.  What is BOSH?  A question you didn't even know you wanted to ask.  Some documentation can be found on their site at bosh.io.  As they state on their site, BOSH was developed to deploy Cloud Foundry, but it can deploy other software as well.

The Problem

With the newer versions of the BOSH director (the virtual machine that controls the deployments), the password for the vcap SSH user is now randomized.  This change is great because it is no longer the default password (c1oudc0w).  But the bad news is that you don't know what this random password is, so an operator can no longer SSH into the deployment VMs without going through the BOSH CLI.  And what if a service account is needed on the deployment VMs?  Any manually added account will be blown away after a redeploy or upgrade.

The Answer

Good news!!! There is a BOSH release to save the day.  The os-conf BOSH release solves the above problem and a few more.  You can view the project on GitHub.  The job that we are going to focus on is the user_add job.  With this job, you can add either a public key or an encrypted password for the user being added.

The os-conf release uses a newer feature for the BOSH director called the runtime config.  This feature allows you to apply configuration outside of the deployment manifest to all deployments the director manages.  You can find more information about this feature on their site.  I have successfully tested the os-conf release 10 with the BOSH director release 260.

Upload Release

Before you can use the release, you have to upload it to your BOSH director.  You can do that with the following BOSH CLI commands.  Make sure you have the BOSH CLI installed before running these commands.

[user@linux_prompt]$ bosh login
[user@linux_prompt]$ bosh upload release https://bosh.io/d/github.com/cloudfoundry/os-conf-release?v=10

If you don't have access to the Internet on the machine where the BOSH CLI is installed, you can move the release local to the server and run the following command.

[user@linux_prompt]$ bosh upload release os-conf-release-10.tgz

Job Configuration

Below I have a sample addon deployment manifest so you can see the structure.
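Here is a minimal sketch of the structure; the release version, addon name, user name, and credential values are placeholders, and the exact property names should be checked against the user_add job spec on GitHub:

```yaml
# Sketch of a runtime-config addon manifest (addon_example.yml).
# The user name, public key, and crypted password are placeholders.
releases:
- name: os-conf
  version: 10

addons:
- name: ssh-users
  jobs:
  - name: user_add
    release: os-conf
    properties:
      users:
      - name: svc_example
        public_key: "ssh-rsa AAAA... user@example"
        crypted_password: "$6$examplesalt$..."
```

You can supply either public_key or crypted_password (or both) for each user.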

The easiest way to create the encrypted password is on an Ubuntu box.  Install the whois package to get the mkpasswd command: sudo apt-get install whois.  Once it is installed, you can create the password by running: mkpasswd -m sha-512 <PASSWORD> <SALT>.  Replace <PASSWORD> with your password and <SALT> with a string of at least eight characters.  Once you have the encrypted string, you can add it to your manifest.
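If you don't have an Ubuntu box handy, openssl (version 1.1.1 or newer) can produce the same SHA-512 crypt string; this is a sketch with placeholder values:

```shell
# Generate a SHA-512 crypt hash, equivalent to mkpasswd -m sha-512.
# "s3cretPassw0rd" and "examplesalt" are placeholder values.
openssl passwd -6 -salt examplesalt 's3cretPassw0rd'
# The output starts with $6$examplesalt$ followed by the hash.
```

The resulting string is what goes into the manifest as the encrypted password.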

Apply Configuration

Once the addon deployment manifest is complete, here are the steps to run it.

[user@linux_prompt]$ bosh login
[user@linux_prompt]$ bosh update runtime-config addon_example.yml
[user@linux_prompt]$ bosh runtime-config
[user@linux_prompt]$ bosh download manifest deployment1 deployment1.yml
[user@linux_prompt]$ bosh deployment deployment1.yml
[user@linux_prompt]$ bosh deploy

So these commands do the following things:
  1. Logs into your BOSH director
  2. Updates the runtime-config with your addon deployment manifest
  3. View the updated runtime-config
  4. Downloads the deployment manifest for the deployment called deployment1
  5. Sets your current deployment to deployment1
  6. Redeploys the VMs in the deployment, which in turn adds the new user configured in the addon deployment manifest
You have to run the bosh deploy command on each deployment you're updating.  This redeploy is the only way the new configuration gets applied after the runtime config has been updated.

Summary

So now you have a new SSH user in your deployment, and it will always come back, even after a redeploy.  Again, user_add is only one of the jobs the os-conf BOSH release provides.  You can also update the SSH login banner, DNS search domains, etc.  It is a great way to customize your VMs without having to manually modify the BOSH stemcell.

Hopefully, you learned something new about a BOSH director feature and a supplemental BOSH release.

~RRRII

Sunday, October 23, 2016

Getting Started With Concourse CI - Part 2

Introduction

Now that you have a Concourse CI test instance running from the last post, let's use it.  In this post, we will go over uploading a "Hello World" pipeline.  Also, since this product is always improving, we will go over the steps to upgrade this test instance.

Getting Familiar with Concourse

There is a great example of setting up a Hello World pipeline on the Concourse documentation site.  I am going to go through some of the common commands they reference and explain how to do some of these steps if you are using a private Docker repository.  You can review my blog series on Pivotal Cloud Foundry and Docker to see the references of using a private Docker repository.

Fly CLI

You interact with Concourse through the Fly CLI.  The easiest way to download the version you need is to go to your Concourse test instance, http://concourse-hostname:8080.  It will have a link to download the correct version for your operating system.  After you download the executable, make sure to place the file somewhere in your operating system's PATH or update your PATH to include the location of the executable.

Once you have the CLI in your PATH, you need to set up the Concourse target and log in.  Referencing the test instance that you stood up, enter the following.

[user@linux_prompt]$ fly -t concourse-test login -c http://concourse-hostname:8080

The command will return a prompt asking for a username and password.  Enter the username and password you set up in your Concourse web service startup script.  After you have successfully logged in, you will have created a Concourse target called concourse-test.  You can name the target whatever you want; just remember the target name, because that name is how you interact with your instance.  If you forget the target name, you can run the following command to list all of the targets you have set up.

[user@linux_prompt]$ fly targets

Creating Your First Pipeline

Now that you have successfully logged into your instance of Concourse, let's create a pipeline. I have modified the Concourse sample pipeline below to include a private Docker registry.
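Here is a sketch of that modified pipeline; the registry hostname is a placeholder for your private registry:

```yaml
# hello.yml: the Concourse "Hello, world!" pipeline, pointed at a
# private Docker registry. registry.example.com:5000 is a placeholder.
jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: registry.example.com:5000/ubuntu
          insecure_registries: ["registry.example.com:5000"]
      run:
        path: echo
        args: ["Hello, world!"]
```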

I have added the insecure_registries entry to the pipeline.  This entry tells the resource to ignore certificate errors when the registry's certificates are signed by a private Certificate Authority.  If you are not using a private Docker registry, you can use the example on the Concourse documentation site.

Once you have placed the text of that script into a file called hello.yml, you can upload the pipeline to Concourse with the following command.

[user@linux_prompt]$ fly -t concourse-test set-pipeline -p hello-world -c hello.yml
[user@linux_prompt]$ fly -t concourse-test unpause-pipeline -p hello-world

The pipeline will now be loaded into Concourse and started.  It will download the Ubuntu image from the Docker registry you specified and print out "Hello, world!".  Congratulations, you have now tested your Concourse instance with your first pipeline.

Upgrading Concourse

To keep up to date with the latest enhancements, you will want to upgrade Concourse to the latest version.  If you followed my guide in the last post, it is really easy.  You can download the latest version from the Concourse downloads page.  After you have downloaded the latest binary, you will need to SCP the file to the server.  When it is there, just run through these steps.
  1. Stop the Concourse service: concourse-service stop
  2. Rename the old binary: mv /opt/concourse/bin/concourse /opt/concourse/bin/concourse.old
  3. Move the upgraded binary to the /opt/concourse/bin folder
  4. Start the Concourse service: concourse-service start
It is as easy as that.  You will want to keep a copy of the old binary in case the upgraded one does not work.

Now that Concourse is updated, you will see that there is a new directory inside of your Concourse worker directory.  In this example, the worker directory is located at /opt/concourse/worker.  This new directory will be named after the version of Concourse you upgraded to.

Another task you have to do now that Concourse is upgraded is to update your Fly CLI.  To update the CLI, run the following command.

[user@linux_prompt]$ fly -t concourse-test sync

This command downloads the version of the CLI that matches the upgraded version of Concourse.

Summary

This post just scratched the surface of what Concourse can do.  Hopefully, it has pointed you in the right direction for using this tool.  The biggest benefit I have seen in using this tool has been writing pipelines as code.  No longer do I have to click through multiple screens and option boxes to configure my pipelines.  You can also keep your pipelines in source control to maintain a version history of them.

~RRRII