Category Archives: Linux

Fedora Kickstart Installation Sources

The previous post showed the kickstart file generated using a minimal installation on my Thinkpad W530.  It’s this base kickstart file which we’ll update and customise, in much the same way as we would do if we were working on the target machine.

I typically install the following packages via yum:

sysstat
conky
autofs
simple-mtpfs
critical-path-kde

Apart from conky and simple-mtpfs, all of those packages are fairly generic.  As such, I was hoping that they would be available on the Fedora installation DVD.  So, I first updated the packages section of the kickstart file like this:


%packages
@core
sysstat
conky
autofs
simple-mtpfs
@critical-path-kde

However, this kickstart file resulted in errors stating that the packages could not be found.

On an installed Fedora 19 system, I could see these packages came from either the “updates” repository or from a repository called “fedora”.


# yum list sysstat conky autofs simple-mtpfs
autofs.x86_64        1:5.0.7-28.fc19                @updates
conky.x86_64         1.9.0-4.20121101gitbfaa84.fc19 @fedora
simple-mtpfs.x86_64  0.1-6.fc19                     @fedora
sysstat.x86_64       10.1.5-1.fc19                  @fedora

I then came across the following links:

Anaconda/Kickstart – repo usage

Red Hat Bugzilla 979154 – Fedora 19 RC2 kickstart with “repo –name=fedora” crashes

Fedora 19 Common Bugs – Problems with Installation Source and Installation Destination spokes when installing from a partially complete kickstart

This first link states that “By default, anaconda has a configured set of repos taken from /etc/anaconda.repos.d plus a special Installation Repo in the case of a media install. The exact set of repos in this directory changes from release to release and cannot be listed here. There will likely always be a repo named ‘updates’”.

I had another look at the DVD and, sure enough, those packages were not there. So, what I actually needed to do was enable these extra repositories in the kickstart file.
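If you want to verify this yourself before kicking off an install, a quick search of the DVD’s Packages directory shows whether a package ships on the media. The mount point below is a hypothetical example; point it at wherever your DVD or ISO is mounted:

```shell
# Check whether each wanted package ships on the installation media.
# /mnt/fedora-dvd is a hypothetical mount point for the Fedora 19 DVD/ISO.
MOUNT=/mnt/fedora-dvd

for pkg in sysstat conky autofs simple-mtpfs; do
    # find copes with either a flat or a subdivided Packages/ layout
    if find "$MOUNT/Packages" -name "$pkg-[0-9]*.rpm" 2>/dev/null | grep -q .; then
        echo "$pkg: found on the DVD"
    else
        echo "$pkg: NOT on the DVD"
    fi
done
```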

Here’s what the updated sections of the kickstart file will look like:


# Use network installation
url --url="http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD"
repo --name=fedora-kickstart --baseurl=http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD
# Need fedora so we can pull down things like sysstat
repo --name=fedora
# Use this to get full updates
repo --name=updates

The updated kickstart file causes the installer to configure these extra repositories and draw on them when it reaches the %packages section.  It also means the installer will pull down updates to the O/S from the Internet via the “updates” repository.

Ultimate Fedora Kickstart

I recently decided to re-install Fedora 19 on my Thinkpad W530. I thought it would be worthwhile documenting the steps and using a kickstart server (in this case running CentOS 6) to be able to replicate the build in the future – for example when Fedora 20 is released – and to kickstart other devices. Sure, it’s now possible to upgrade Fedora between releases using FedUp, but if all of your personal data is on a separate (backed up) partition, then a clean, custom install will give you a fresh start and ensure no old configuration files or packages are left behind.  If you document your customisations in a kickstart file, the headaches of the re-install can be minimal.  In fact, everything you would do on the command line post-install can be done via a kickstart file.  Another advantage is that should a newer filesystem type come along, you can simply reformat your O/S partition to the new type.

The next couple of posts will document some of the steps in creating the kickstart file and will cover:

Fedora Kickstart – Installation Sources
Fedora Kickstart – DNS Dependencies
Fedora Kickstart – Ultra Minimal KDE Installation
Fedora Kickstart – Additional Repositories
Fedora Kickstart – Thinkpad W530 add-ons
Fedora Kickstart – Post-Installation Tasks

To begin this process, I first installed Fedora 19 by hand and chose minimal as the default installation. This gave me a /root/anaconda-ks.cfg kickstart file from which we can work.

It will look something like this:


#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use network installation
url --url="http://192.168.105.1/os/fedora/19/Fedora-19-x86_64-DVD"
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=sda
# Keyboard layouts
# old format: keyboard us
# new format:
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_GB.UTF-8
# Network information
network --bootproto=dhcp --device=eth0 --noipv6 --activate
network --hostname=localhost.localdomain
# Root password
rootpw --iscrypted XXX
# System timezone
timezone Europe/London
user --groups=wheel --homedir=/home/user --name=user --password=XXX --gecos="User"
# System bootloader configuration
bootloader --location=mbr --boot-drive=sda
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part /boot --fstype="ext4" --onpart=sda6 --label=fedora19-boot
part / --fstype="ext4" --onpart=sda7 --label=fedora19-root
part swap --fstype="swap" --noformat --onpart=sda8
part /boot/efi --fstype="efi" --noformat --onpart=sda2 --fsoptions="umask=0077,shortname=winnt"
%packages
@core
%end

For clarity, the ‘url’ shown above would refer to an internal apache webserver from which the Fedora DVD is shared.
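For reference, the webserver side of this is simple: loop-mount the ISO under the web root and let Apache serve it. The paths, ISO filename and config fragment below are assumptions from my own setup, a sketch rather than a drop-in config (Apache 2.2 syntax, as found on CentOS 6):

```shell
# Sketch: share the Fedora 19 DVD contents over HTTP with Apache.
# All paths and the ISO filename are assumptions; adjust for your server.

# 1. Loop-mount the ISO under the web root (run as root):
#    mkdir -p /var/www/html/os/fedora/19/Fedora-19-x86_64-DVD
#    mount -o loop,ro Fedora-19-x86_64-DVD.iso \
#          /var/www/html/os/fedora/19/Fedora-19-x86_64-DVD

# 2. Allow directory indexes for the share (Apache 2.2 access syntax):
cat > fedora-dvd.conf <<'EOF'
<Directory /var/www/html/os/fedora/19/Fedora-19-x86_64-DVD>
    Options +Indexes
    Order allow,deny
    Allow from all
</Directory>
EOF

# 3. Install the fragment and reload Apache:
#    cp fedora-dvd.conf /etc/httpd/conf.d/ && service httpd reload
```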

Whilst the final kickstart file won’t be for everyone, I’ve called it the ‘Ultimate Fedora Kickstart’ because it’s the ultimate for my needs. No doubt, you’ll have your own version of this 🙂

Using flock to ensure only one instance of a script can run

Whilst browsing, I came across the following post from Randal Schwartz of Perl and FLOSS Weekly fame.

“flock” is one of those utilities I’ve not used very much, but if you want to ensure that only a single instance of a script can run at any one time, it’s a really neat tool. No lock or PID files to mess with, and no “ps -ef | grep” style scripting to incorporate.


#!/bin/sh
(
    # Try to take an exclusive lock on file descriptor 200; -n means
    # fail immediately rather than wait if another instance holds it.
    if ! flock -n -x 200
    then
        echo "$$ cannot get flock"
        exit 0   # another instance is already running; nothing to do
    fi
    echo "$$ start"
    sleep 10     # real work would be here
    echo "$$ end"
) 200< "$0"      # fd 200 is this script itself, opened read-only for locking

One to file away for future use :)
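As an aside, flock can also manage a dedicated lock file for you and wrap the command itself, which avoids the subshell-and-fd dance. A sketch (the lock path is an arbitrary choice):

```shell
# Sketch: flock opens (and creates, if needed) the lock file itself and
# runs the given command under an exclusive lock. With -n, a second
# instance exits immediately instead of queueing behind the first.
LOCK=/tmp/myscript.lock

if flock -n "$LOCK" -c 'echo "$$ doing the real work"; sleep 2'; then
    echo "work complete"
else
    echo "another instance already holds $LOCK"
fi
```

Run two copies at once and the second reports that the lock is held, just like the fd-based version.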

Puppet Camp 2013

Yesterday I attended Puppet Camp London 2013 at Somerset House. It was an interesting day with a lot of good talks and demonstrations.  In this article, I’ll attempt to link to all of the speakers and slides from the event and describe what I found interesting.  The day was sponsored by Red Hat and Quru.

The day began with Dawn Foster, Community Manager at Puppet Labs, introducing Puppet Labs CEO Luke Kanies.

  • State of Puppet: Luke Kanies – Puppet Labs CEO

State of Puppet detailed the history behind the creation of Puppet, how things started and where they are now. It was apparent from the slides that there has been large growth in Puppet deployments, community and modules over the last 12 months. I especially enjoyed the point that the ‘old’ ways of doing upgrades – eg taking down services for a migration on a Friday evening, performing the required steps, and then starting things up again on Monday – just don’t work in today’s environment. We’re used to having IT available at all times: we want to access Internet Banking whenever we choose, we expect access to news, blogs and entertainment 24 hours a day, and we’re more likely to be running services that are available internationally, so the traditional ‘maintenance window’ is no more.

Another important point was that when Puppet was created, there wasn’t much cloud deployment. Nowadays it’s everywhere, and having a tool like Puppet to manage these instances is very useful. We even have VMs being created and destroyed dynamically for just a single HTTP request. With Puppet, we can keep everything in sync using a standard configuration syntax rather than custom scripts.

Luke explained that Puppet Labs began with an Open Source product and made money by providing consultancy services to set it up. Nowadays, they’re keeping some of the features for their Enterprise products. There’s nothing wrong with this; I just hope that Open Source components which overlap with Enterprise, such as Puppet Dashboard, don’t fall by the wayside. Other items mentioned in the presentation included PuppetDB (which tracks the status of, and changes in, your environment in a database) and plans for more tooling to push configurations to servers at specific times or under controlled conditions.

There was also talk about the ability to add machine dependencies within Puppet, eg provision a database, but don’t start the webserver that talks to it until the database host has been fully provisioned. In terms of user base, Puppet has lots of clients, including Barclays, the FT and the LSE in London, and Google, Cisco and HBO in the US, plus many more. The size of deployments varies too, from managing just a few servers to managing tens of thousands.

The slides from Luke’s talk can be found here: State of Puppet – London. Readers may also be interested in Chris Spence’s State of Puppet slides featured on the Puppet Camp Barcelona Wrap Up blog post, or the slides from the San Francisco Puppet Camp – State of Puppet – San Francisco 2013.

  • Building reusable modules: Jon Topper – Scale Factory

All of the talks were interesting, but this is the one from which I can start to reap immediate rewards. It provided good guidance on writing puppet modules, with definite take-aways such as writing modules that perform very small, discrete pieces of work. Dependencies between puppet classes are also a bad idea. rspec-puppet, puppet parser validate and puppet-lint are great tools for checking your code, although it was pointed out that puppet-lint can be very, very picky, so use it with settings that work for you.

You can find more about Scale Factory from their website, whilst the slides from Jon’s presentation can be found here – Building Reusable Puppet Modules.

Jon’s Twitter profile is jtopper.

  • Automated OS and Application deployment using Razor and Puppet: Jonas Rosland – EMC

The slides that Jonas presented can be found at Puppet Camp London 2013 Puppet And Razor Jonas Rosland.

Razor is a provisioning system that can be used to quickly provision new servers – both physical and virtual. The key thing is that it’s event driven rather than user driven. In the demo, Jonas configured Razor to provision certain types of servers depending on certain conditions. The example used physical RAM to determine which Operating System should be installed when a server is PXE booted, but you can key off any of the facts you get from Facter. I’m not sure how this would work in remote sites where you don’t have a PXE server. The install of Razor itself looks very straightforward.

Other tools worth looking at are: The Foreman, Cobbler, vSphere Auto Deploy

Jonas has some useful links on his pureVirtual website: Puppet and Razor.

Jonas’s Twitter profile is virtualswede

  • De-centralise and Conquer: Masterless Puppet in a dynamic environment: Sam Bashton – Bashton Ltd.

The slides that Sam presented can be found at Decentralise And Conquer Masterless Puppet In A Dynamic Environment.

This was a really interesting presentation. Essentially, Sam was building a set of RPMs which were then deployed to the target servers via Pulp. Puppet then runs locally on the remote target, triggered from a postinstall command in the RPM package. There’s no central puppetmaster in this setup, so no single point of failure.

Sam’s Twitter profile is bashtoni

  • Building self-service on demand infrastructure with Puppet and VMware: Cody Herriges – Puppet Labs

Cody talked about the pros and cons of running your own infrastructure versus using hosted solutions such as Amazon. His slides can be found here – Building self-service on demand infrastructure with Puppet and VMware.

  • Enterprise Cloud Management and Automation: John Hardy – Red Hat

John presented ManageIQ. This clever piece of software interrogates your SAN arrays and discovers the Virtual Machines that are installed there. It can then look into these machines to determine what’s running, what files are installed, record changes on these files and perform full inventory control. It can even prevent a VM from being powered on if it violates a policy, such as not being an approved O/S. ManageIQ is being used by UBS and other big organisations. Red Hat acquired ManageIQ in December 2012, so expect to see this rolled into Red Hat products soon. Hopefully, much of it will become open source too.

  • Puppet Demos: Chris Spence – Puppet Labs

There was no slideshow from Chris, it was a hands-on demo showing how Hiera can simplify puppet code, how configuration files (such as a load balancer) can be dynamically generated as servers are powered up and powered down, and he showed some useful Puppet 3.0 commands.

Chris has written some puppet modules which can be found on Puppet Forge and has some useful material on his blog.

Chris’s Twitter profile is tophlammiepie

  • Closing thoughts

Overall, it was a good set of talks, and it was great to talk to other puppet users and discover how they are using it. I’ll certainly be using Hiera for deployments and I’m going to start writing tests for my modules. In terms of contact with the Puppet community, I’ll definitely make use of ask.puppetlabs.com and puppet-users.

Finally, here’s a link to the official Puppet Camp London 2013 blog – Fun Times and Great Info at Puppet Camp London

Oh yes, and thanks for the post-camp drinks, T-Shirt and Hat! I look forward to Puppet Camp London 2014!

Red Hat Puppet

Display a future or past date in the bash shell

Here’s a quick and easy way to establish what the date will be in a specific number of days from today using the bash shell on Linux. Simply use the ‘-d’ option to the ‘date’ command.

Here’s the current timestamp:

-bash-3.2$ date
Thu Jan 17 15:04:28 GMT 2013

And this will be the date 60 days from now:

-bash-3.2$ date -d "now 60 days"
Mon Mar 18 15:04:31 GMT 2013

You can also use the same approach to display dates from the past. What was the date 94 days ago?

-bash-3.2$ date -d "now -94 days"
Mon Oct 15 15:07:35 GMT 2012

To get the last calendar day of the previous month:

date -d "-$(date +%d) day" +%Y-%m-%d

(This displays the date N days ago, where N is today’s day of the month. So on 17 January, 17 days ago is 31 December – the last calendar day of the previous month.)
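The same idea extends to other month boundaries. Here are a couple of sketches using GNU date’s relative items (GNU coreutils date only; BSD date uses a different syntax):

```shell
# Last day of the current month: jump to the 1st of this month,
# go forward one month, then back one day.
date -d "$(date +%Y-%m-01) +1 month -1 day" +%Y-%m-%d

# First day of next month:
date -d "$(date +%Y-%m-01) +1 month" +%Y-%m-%d

# The same arithmetic also works anchored to a fixed date:
date -d "2013-01-01 +1 month -1 day" +%Y-%m-%d
```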