Master's Project Defense

On 5 December 2014, I will present and defend my master's project, Demand-Provisioned Linux Containers for Private Network Access, at Rochester Institute of Technology, room 70-2115, at 2:30 PM EST.

At work, I have created a neat system for non-persistent bastion or "jump" hosts that let you connect one network to another. I toss in some auditing and multifactor auth, and containers give me (a little) isolation. It's a neat proof of concept, and I'm excited about presenting it. This is a public event, so I suppose if you are in the area and interested in Docker, come on down.

I am especially excited for the snake fight portion of my project defense. Hopefully my assigned snake will be small. Also: hopefully it does not snow and cancel my flight... that would be the worst.

People and Ops

I'm a master's student at Rochester Institute of Technology. Someday, it's likely that I'll graduate, and I'm excited to write a post about it when I do, especially because I think the people who read this site would find the project aspect particularly interesting.

Anyway, back in 2011 I wrote a personal statement as part of my application package. Squeezing out the mushy details about past projects and employers leaves this:

At the end of the day, it is my belief that system administration is about bringing people together.

System Administrators are the maintainers of spaces that exist physically only as printed circuit boards, integrated circuits, and interconnected copper and glass. The gap between states, nations, continents, cultures, and even space is now almost purely a matter of one's interest in bridging the distance between two points. The responsibility of maintaining this virtual space is not one to be taken lightly. People depend on the networks we create and maintain for reasons ranging from the seemingly petty to the truly life-or-death. It is no longer practical or reasonable to be okay with "good enough" system administration.

[...] While technologically impressive, all these networks and architectures exist for a bigger reason.

I'm not quite sure what the point of posting this is, other than to remember the human aspect of what we do.

sudo serve && protect

Not too long ago, this guy held a logo contest to make an ops-themed logo with a union flair to it. Someone else came along and threw it up online. The logo is CC-BY-4.0, which is pretty neat, so I took some time to make a few local variants.

Area code...

Not perfect, but fun enough if you happen to be in one of those area codes. These derivatives are licensed CC-BY-4.0 as well.

How'd you do that?

I made these using iDraw for Mac OS X. It seems to be a decent vector image editing tool for the Mac, and it's only $25.

Docker and CentOS4

As much as we might want a world in which all applications were updated regularly and licensed sanely, that is not the world we live in. Some applications cost several hundred thousand dollars (per seat!) and their users expect to keep using them well into the future, even after the vendor has moved on.

One way to keep these applications running without keeping older-OS machines on your network is Docker. Docker does have a variety of base images available through its site; however, they are typically newer OSes.

If you find yourself in the sad position of making a CentOS4 base image, here are some helpful pointers:

  1. CentOS4 used up2date to pull down system updates and new RPMs. However, it also supports yum, and the CentOS4 repos do have a repodata folder for yum's use. Write a repo definition pointing at the appropriate repo and put it in your /etc/yum.repos.d folder. Make sure to add a line for enabled=0.
  2. You will find the script here to be of particular value. I cut out the usage(), yum_config, and getopts stuff.
  3. Add a test to make sure $target exists. What happens if you accidentally kill off /tmp, then run the script without checking whether $target exists? This exercise is left to the reader (but see #7).
  4. You will need to modify the first yum (...) install line to include --enablerepo=centos4 --disablerepo='*' (quote the glob so the shell does not expand it).
  5. Need extra packages? Make sure to copy your repo definition from /etc/yum.repos.d to $target/etc/yum.repos.d, as the groupinstall may leave behind repo definitions that are not what you want.
  6. You will likely need to add some important .i386/.i686 RPMs, especially if you are running the build script on a RHEL6 machine. Good ones to include are libgcc and glibc.
  7. Kill off the RPM database.1 rm -Rf $target/var/lib/rpm -- you are likely running this on a newer RHEL machine, and older versions of RPM will not be able to read the database it creates. If you really need to install RPMs inside the container to figure out what you need: after you have built a base image, run rpm --initdb and re-run your groupinstall command, then tar up the RPM database and put it into the script so you can experiment without performing this step manually. Once you are done with development, I suggest you kill off the RPM database for good -- no one should be installing RPMs inside your artisanal container, right?
  8. Since you know your OS name, it may be beneficial to just set name=centos up front.
  9. Bam. You have a CentOS4 image now!

There are arguments to be made for being able to use yum in a Dockerfile; however, given that only one application I know of needs to be run like this, building the image once and keeping the scripts in revision control works well enough for me.
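Condensed into one place, the steps above look roughly like the following. This is a sketch, not my exact script: the vault URL, the centos4 repo id, the target path, and the image name are all assumptions you should adjust for your mirror and architecture.

```shell
#!/bin/sh
# Hypothetical condensed build script for a CentOS4 base image.
set -e

target=/tmp/centos4-root
name=centos

# Step 3: refuse to run if $target is missing
[ -d "$target" ] || { echo "$target does not exist" >&2; exit 1; }

# Step 1: a yum repo definition pointing at an archived CentOS 4 tree
# (URL is an assumption -- point it at your mirror)
cat > /etc/yum.repos.d/centos4.repo <<'EOF'
[centos4]
name=CentOS 4 (vault)
baseurl=http://vault.centos.org/4.9/os/i386/
enabled=0
gpgcheck=0
EOF

# Steps 4 and 6: install the Core group plus 32-bit essentials,
# using only the CentOS 4 repo
yum -y --installroot="$target" --disablerepo='*' --enablerepo=centos4 \
    groupinstall Core
yum -y --installroot="$target" --disablerepo='*' --enablerepo=centos4 \
    install libgcc glibc

# Step 5: carry the repo definition into the image
cp /etc/yum.repos.d/centos4.repo "$target/etc/yum.repos.d/"

# Step 7: the host's newer RPM wrote a database that CentOS 4's own
# rpm cannot read, so remove it
rm -Rf "$target/var/lib/rpm"

# Step 9: import the tree as a Docker image
tar -C "$target" -c . | docker import - "$name:4"
```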

My Dockerfile is fairly simple - I have a FROM and MAINTAINER line, with an ADD and an ENTRYPOINT - the ADD simply drops in an init script that adds a local-to-container user with the appropriate uid/gid, then switches to that user and runs the application. X11 use is handled by proper management of the DISPLAY variable and/or by exposing /tmp/.X11-unix to the container.
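As a sketch of that entrypoint idea -- the user name, uid/gid defaults, and application path below are all hypothetical:

```shell
#!/bin/sh
# Hypothetical init script dropped in by the Dockerfile's ADD line.
# APP_USER, APP_UID, and APP_GID are assumed to come in via `docker run -e`.

# Create a local-to-container user matching the caller's uid/gid,
# so any files written to bind mounts keep sane ownership
groupadd -g "${APP_GID:-1000}" "${APP_USER:-appuser}" 2>/dev/null
useradd -u "${APP_UID:-1000}" -g "${APP_GID:-1000}" -m "${APP_USER:-appuser}" 2>/dev/null

# DISPLAY is inherited from `docker run -e DISPLAY=...`; the X socket
# arrives via `-v /tmp/.X11-unix:/tmp/.X11-unix`
exec su - "${APP_USER:-appuser}" -c "/opt/legacyapp/bin/legacyapp"
```

A matching run line would look something like docker run -e DISPLAY=$DISPLAY -e APP_UID=$(id -u) -v /tmp/.X11-unix:/tmp/.X11-unix centos:4.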

Hopefully you find yourself not needing to support older apps. But, in the event you do, maybe this will help.

  1. katzj notes that he has a solution to this. See his tweet for more info. 

Solaris & TLS, Best Friends

I got TLS LDAP authentication working in Solaris 10 today. Hooray!

Realistically, this is not that big of a deal, except that finding the appropriate instructions is nearly impossible. People say "use certutil!" or "load up Firefox, go to https://ldapserver:636, save the cert, then copy the files" and this and that and ugh.

So here are some notes. I hope they help you.

  • The NSS certificate DB for LDAP on Solaris lives in /var/ldap — with the rest of the ldap settings.
  • The easiest way to "just make it work" is to load up Firefox, import your CAs, go to https://yourldapserver:636 and permanently accept the certificate.
  • Firefox will complain that this is not a port you rock HTTP on. So go to about:config, right-click, make a new string value called network.security.ports.banned.override, and add 636 to it.
  • Go back to that site again.
  • Copy ~/.mozilla/firefox/profilename/*db to /var/ldap

You’re done. Run ldapclient to set up LDAP and it should work fine. Once it does, use certutil -d /var/ldap -L to see what was imported and figure out how you might script it. Or just run with it. Your call.
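If you would rather skip the Firefox dance entirely and you have your CA certificate as a file, certutil can populate the database directly. A sketch -- the certificate nickname and file paths here are assumptions:

```shell
# Create the NSS database in /var/ldap if it does not exist yet
# (certutil will prompt for a database password; you can leave it empty)
certutil -N -d /var/ldap

# Import the CA certificate and trust it for issuing server certs
certutil -A -d /var/ldap -n "MyOrg CA" -t "CT,," -i /path/to/ca.pem

# Verify the cert landed
certutil -d /var/ldap -L
```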

Note: This was posted a year ago. I should mention that you should use a fresh Firefox profile for this -- no use in accidentally carrying over unnecessary secrets to a config that may be distributed to many systems.