Posts

Showing posts from April, 2016

Syncing repositories that need authentication using a proxy in Satellite 6

At my current client we found that when we used a proxy without authentication, we could not sync external repositories that required authentication in Satellite 6. After a trek down pulp-code-lane with a colleague (check http://binbash.org/ for his blog) we found the problem to be a simple Python statement inside the pulp-nectar code. After submitting a pull request ( https://github.com/pulp/nectar/pull/47 ) and mentioning it to Red Hat via a support case, I have been told that they have created an internal Bugzilla entry and will fix it in an upcoming release (thanks for the quick response!). Until then, if you get authentication errors when you try to sync external repositories with credentials embedded in the URL (like https://username:password@repo.org) and you use a proxy without authentication, take a look at the patch; it's really as simple as it seems :)
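As a hedged illustration (not the actual nectar fix), the situation can be reproduced outside of Pulp with a few lines of Python: fetch repo metadata with basic-auth credentials while routing the request through an unauthenticated proxy. The proxy host, repository URL and credentials below are placeholders.

```python
#!/usr/bin/env python
# Illustrative check only: fetch repo metadata with basic auth through an
# unauthenticated proxy. Hostnames, credentials and paths are placeholders.
import requests

proxies = {
    "http": "http://proxy.example.com:3128",   # proxy WITHOUT authentication
    "https": "http://proxy.example.com:3128",
}

# Repository that itself requires authentication
repo_url = "https://repo.example.org/rhel7/repodata/repomd.xml"

resp = requests.get(repo_url, auth=("username", "password"), proxies=proxies)
print(resp.status_code)
```

If this call returns 401 while the same request succeeds without the proxy, the credentials are being lost somewhere in the proxy handling rather than being wrong, which is exactly the symptom we saw in Satellite.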

Recursively update Composite Content Views in Satellite 6/Katello

The basic idea: Satellite 6 (and Katello for that matter) has a new way of dealing with content, whether that be Puppet modules, RPMs or Docker images. Below I will focus on RPMs, which I think will be the use case for most people. A content view can contain one or more RPM repositories at a specific point in time, and can consist of multiple versions. So let's say I create a content view named RHEL7_BASE on Monday containing two repositories I just synced: rhel7_server and rhel7_epel. Version 1 of that content view points to the packages as they are on that Monday. The next Friday I sync my repositories again so I get the latest versions and patches and whatnot, but version 1 of my RHEL7_BASE content view is unchanged, and any servers that are using this version will not have access to the new packages. In order to make these new packages available, I need to publish a new version of the view and promote this version to the environment that con…
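A minimal sketch of the publish step, assuming the Katello v2 REST API under /katello/api and a user with publish rights; the hostname, credentials and organization ID are placeholders, and this is only the basic loop, not the full recursive script from the post:

```python
#!/usr/bin/env python
# Sketch: publish the ordinary content views first, then the composite ones,
# so the composites can pick up the freshly published component versions.
# Certificate checking is disabled purely for brevity.
import requests

SAT = "https://satellite.example.com"
AUTH = ("admin", "changeme")
ORG_ID = 1

def get_content_views():
    resp = requests.get("%s/katello/api/content_views" % SAT,
                        params={"organization_id": ORG_ID, "full_results": True},
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()["results"]

def publish(cv):
    print("Publishing %s" % cv["name"])
    resp = requests.post("%s/katello/api/content_views/%s/publish" % (SAT, cv["id"]),
                         auth=AUTH, verify=False)
    resp.raise_for_status()

views = get_content_views()
for cv in views:
    if not cv.get("composite"):
        publish(cv)
for cv in views:
    if cv.get("composite"):
        publish(cv)
```

Note that publish calls are asynchronous tasks, and in a real run you would also have to point each composite view's components at the newly published versions (and wait for the component publishes to finish) before publishing the composite itself.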

Mounting data onto your filesystem for fun and unfortunately no profit

A little more than a year ago I was working for a client that wanted a simple way to do an inventory of their Linux servers (running SLES). Their DTAP network was configured so that every environment was only reachable via a bastion host, and only that host. Luckily you can do a lot with Ansible in conjunction with ProxyCommands (see https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts ), so reaching the servers was not really a problem, and since Ansible's excellent setup module provides a wealth of information, I had all the ingredients I needed; I just needed to connect the dots. Since I wanted to learn more about filesystems (and FUSE in particular) I thought it would be a nice exercise to try and "map" the collected data from Ansible's setup module onto a mountpoint, to make it easy to grep and parse and all that jazz. Since the data format Ansible uses is JSON anyway, I thought it would be best to first focus on creating a script
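A minimal sketch of that first step, under the assumption that the facts were already gathered with something like ansible all -m setup --tree facts/: it walks the JSON and writes nested keys as directories and leaf values as small files, no FUSE involved yet. The facts/ and inventory_fs/ paths are placeholders, not the script from the post.

```python
#!/usr/bin/env python
# Sketch only: map Ansible setup-module JSON onto a plain directory tree so the
# facts can be grepped like ordinary files. Assumes facts were dumped per host
# with e.g.:  ansible all -m setup --tree facts/
import json
import os

def write_tree(data, path):
    """Write dicts as directories and everything else as small text files."""
    if isinstance(data, dict):
        os.makedirs(path)
        for key, value in data.items():
            write_tree(value, os.path.join(path, str(key)))
    else:
        with open(path, "w") as handle:
            handle.write(json.dumps(data, indent=2) + "\n")

for host in os.listdir("facts"):
    with open(os.path.join("facts", host)) as handle:
        facts = json.load(handle)["ansible_facts"]
    write_tree(facts, os.path.join("inventory_fs", host))
```

Once something like this works, wrapping the same mapping in a FUSE filesystem (for example via the fusepy bindings) is mostly a matter of serving the same tree from memory instead of writing it to disk.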

First!

Well, I finally bit the bullet and started a blog. Here I will try to share some of the problems and solutions I have encountered and devised in my daily work as a DevOps Linux Engineer, mainly involving Linux.