
GSoC 2012: On-Demand Fedora Build Service: Update #10

In my latest commit [1], I added a number of features to the code that I had been planning:

  1. Application-wide logging
  2. Basic build monitoring
  3. Email notification

These features are most useful when the service is deployed with multiple worker nodes; for local image building, they are less important. I also added support for specifying the local filesystem as the staging area, which is useful for local image building.
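
The application-wide logging boils down to something like this minimal sketch (the logger name and format here are illustrative, not the exact code): every module asks for the same named logger, so one configuration applies everywhere.

```python
import logging

def setup_logging(logfile=None):
    # one named logger shared by the whole application; modules retrieve
    # it with logging.getLogger('imagebuilder') and never configure it
    logger = logging.getLogger('imagebuilder')
    logger.setLevel(logging.INFO)
    handler = (logging.FileHandler(logfile) if logfile
               else logging.StreamHandler())
    # format matches the timestamped lines shown in the demo log below
    handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))
    logger.addHandler(handler)
    return logger

log = setup_logging()
log.info('Registered a new Image Build request')
```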

Next up, I have a few things to work on: test the Web UI and REST API interface again, update the HOWTO (it's outdated already!), and a few other things.

Demo of Email Notification

When you submit a build request (in multiple worker nodes mode), you get an email notification:

Your Image Building Request has been submitted. You may monitor the progress by going to You will also receive an email upon completion.

And then when the job completes, another email:

The build was completed by worker:: Detailed log:

2012-07-22 12:18:24,074 – Registered a new Image Build request
2012-07-22 12:18:24,075 – Image type:: dvd
2012-07-22 12:18:29,074 – Image Build notification sent
2012-07-22 12:18:29,383 – Starting the Image Build Process
2012-07-22 12:18:29,385 – DVD image arch requested should be the same as the build arch
2012-07-22 12:18:29,386 – Image building process complete
2012-07-22 12:18:29,387 – Error creating image. Transferring Logs.
2012-07-22 12:18:29,388 – Initiating local transfer of logs to /tmp/staging
2012-07-22 12:18:29,401 – Logs available at file:///tmp/staging
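
For the curious, assembling such a notification mail with the standard library looks roughly like this sketch (the helper name, subject and addresses are illustrative, not the actual code); the real service would then hand the message to smtplib for delivery:

```python
from email.mime.text import MIMEText

def completion_mail(worker, log_url, to_addr):
    # body mirrors the completion email shown above
    body = ('The build was completed by worker: %s\n'
            'Detailed log: %s\n' % (worker, log_url))
    msg = MIMEText(body)
    msg['Subject'] = 'Image build complete'
    msg['To'] = to_addr
    return msg

msg = completion_mail('worker1', 'file:///tmp/staging',
                      'user@example.org')
```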

At this point, the best place to understand what is happening is to begin with the command line clients in cli/. I should have an updated HOWTO soon.


[1] Latest commit:


GSoC 2012: On-Demand Fedora Build Service: Update #9

The code saw a few additions/enhancements since the last update:

  • There is now a command line client that submits build jobs directly, and another client that uses the basic REST interface exposed via the Web application [1].
  • Remote Kickstart files specified via an http:// or ftp:// URL are now supported. Once again, the file has to be ‘ksflattened’.
  • I added some validation to the web-based interface. However, it happens on the server side using Flask’s pre_validate() method. Ideally, I would like to familiarize myself with enough JavaScript to do this on the client side.
  • I started writing some Unit tests using py.test and Mock. But, I am currently waiting for some guidance from Tim to really write some good tests.
With the mid-term evaluation due in less than a week, I am quite happy with the way things are progressing at this moment. The TODO and GOOD-TO-HAVE notes list the things I would like to finish before the project deadline [2].
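
To give a flavour of the REST-based client, here is a sketch of the kind of request it submits (the endpoint path and form field names here are illustrative guesses, not the actual API):

```python
import urllib.parse
import urllib.request

def make_build_request(base_url, image_type, arch, kickstart_url=None):
    # build (but do not send) the POST request submitting a build job;
    # the real client would pass the result to urllib.request.urlopen()
    fields = {'image_type': image_type, 'arch': arch}
    if kickstart_url:
        # a remote http:// or ftp:// kickstart, already ksflattened
        fields['kickstart'] = kickstart_url
    data = urllib.parse.urlencode(fields).encode()
    return urllib.request.Request(base_url + '/api/build', data=data)

req = make_build_request('http://localhost:5000', 'dvd', 'i686')
```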

GSoC 2012: On-Demand Fedora Build Service: Update #8

I spent some time cleaning up the code, mainly using pylint as an indicator. I also rewrote the fabfile to use fabric commands rather than native Linux commands wherever possible, so it is cleaner now. The HOWTO is also updated.

This code refactoring has brought home (quite strongly!) the need for having my unit tests up and running soon. I plan to use py.test for my testing, and hopefully will have some tests ready in a week.
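
The kind of py.test + Mock test I have in mind looks roughly like this (the function under test here is hypothetical): the copy step is injected and mocked out, so the test needs no real staging area.

```python
from unittest import mock

def transfer_logs(copy_func, src, dest):
    # the real code would scp/copy the logs; the mechanism is injected
    # so a test can substitute a mock for it
    copy_func(src, dest)
    return 'file://' + dest

def test_transfer_logs():
    copy = mock.Mock()
    assert transfer_logs(copy, '/tmp/logs', '/tmp/staging') \
        == 'file:///tmp/staging'
    copy.assert_called_once_with('/tmp/logs', '/tmp/staging')
```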

The code is available at

GSoC 2012: On-Demand Fedora Build Service: Update #7

In my last update, I reported that I had a functional build service, capable of harnessing multiple build nodes, and a functional Web UI as well. The process to deploy the build service was however quite manual and tedious. I was looking for a way to do this with as little manual intervention as possible, and fabric (Thanks, Tim) was the answer. I wrote a fabfile for the whole task, called ‘’. As outlined in my earlier post, I am using celery to distribute the build tasks, so the celeryd process needs to be running on the build nodes. I found zdaemon (Thanks, Jan on the fabric mailing list) to be the easiest way to run the celeryd processes as daemons. The updated code is available in the repository now [1].

There is a HOWTO document [2], which should help you deploy the service and run your own home-based build service. I recently used it to build myself a Fedora Scientific ISO. Please don’t abuse it; there is hardly any error handling yet. I am, however, accepting bug reports now. Thanks for all your help.


GSoC 2012: On-Demand Fedora Build Service: Update #6

The code saw a number of cleanups and feature additions since the last change.

Major Additions:

  • Basic Web UI functional: The web interface is quite clunky and ugly, but functional. It is a Flask web application.
  • Celery for task delegation: No more SSH/SCP. The build service now uses celery (with AMQP as the broker) for task delegation, with workers designated as build nodes. The celery daemon, however, needs to run as root, since the image building process requires root access. Configuration is specified via the webapp/nodes.conf file.
  • FTP Image Hosting: You can specify an FTP server (with anonymous login enabled) to send your images to. The images are automatically copied to the FTP server by the code.
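
The FTP transfer step boils down to something like this sketch (the helper name is illustrative): the connection object is passed in, so the real code would hand over an ftplib.FTP instance logged in anonymously.

```python
import os

def upload_image(ftp, image_path):
    # STOR the finished image under its basename on the FTP server;
    # 'ftp' is any object with ftplib's storbinary(cmd, fp) interface
    name = os.path.basename(image_path)
    with open(image_path, 'rb') as f:
        ftp.storbinary('STOR ' + name, f)
    return name
```

The real code would first do `ftplib.FTP(host)` followed by an anonymous `login()` before handing the connection over.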

Build Service Web UI


Minor Additions:

  • Single configuration file to specify the type of image to create
  • No hard-coded directory structure to be created
  • Miscellaneous others

High-level Functioning

  1. User specifies the image creation information via the Web UI
  2. This information is used to create the imagebuild.conf configuration file
  3. A new process is created to delegate the task to a celery worker
  4. (The celeryd needs to be running on the workers)
  5. The worker process takes care of the image building and transfers the image to the specified FTP host
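
Step 2 above can be sketched with ConfigParser (the section and option names here are guesses, not the real imagebuild.conf layout):

```python
import configparser
import io

def write_imagebuild_conf(form, fileobj):
    # the Web UI form data becomes the config file handed to the worker
    conf = configparser.ConfigParser()
    conf['imagebuild'] = {'image_type': form['image_type'],
                          'arch': form['arch']}
    conf.write(fileobj)

buf = io.StringIO()
write_imagebuild_conf({'image_type': 'dvd', 'arch': 'i386'}, buf)
```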


Here are the things I intend to work on next:

  • Script to install dependencies
  • Finish the cli client in webapp/
  • x86_64 images
  • Automatically copy the worker_src to celery workers
  • Unit testing
  • Implement error handling on the client UI
  • Logging
  • Email notification
  • Enhanced UI (dynamic forms?)
  • Need to think more about how the images will be stored and how to identify an existing image uniquely. Timestamp?

Whereas most other tasks of this project will be solved with time and effort, I am seriously concerned about the final look and feel of the Web UI. I would like it to look pretty, besides being functional.

GSoC 2012: On-Demand Fedora Build Service: Update #5

The current github code [1] does quite a few things as of now. Let me try to explain the changes since the last update and my rationale behind them.

Support for building Live images: I attempted to use livemedia-creator (which is going to be THE tool from F18+), but unfortunately ran into issues which prevented me from building images. So for now, I have implemented this feature using ‘livecd-creator’. The Kickstart file (flattened) needs to be specified, along with other details such as the architecture, any extra packages to be pulled from Koji, etc. The specifications are given via the config/live.conf file.

The user specifications are now completely via .conf files: My rationale behind that in the first place was that since this code is really going to serve as the ‘backend’, command line arguments could be done away with. But even if we want to use this standalone, specifying .conf files is fine as well. (We will see what happens with this after discussing with my mentors.) Here is a brief description of the config files in config/:

  • imagebuild.conf: type of image, architecture, staging area (to be explained later) and email (for notification)
  • boot.conf: configuration for boot.iso images
  • repoinfo.conf: repository configuration required for the above
  • pungi.conf: configuration for DVD images
  • live.conf: configuration for live images
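
For illustration, an imagebuild.conf along these lines might look like this (all option names and values here are guesses, not the actual format):

```ini
[imagebuild]
image_type = dvd
arch = i686
staging = /tmp/staging
email = user@example.org
```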

The kickstart files, if needed, are to be placed in the kickstarts/ sub-directory. To use it, you will need to ‘cd’ into the ‘image_builder’ directory and run ‘$ python’ after setting up the appropriate config files in image_builder/config and kickstart files in kickstarts/, if any.

Support for copying images: I have also now enabled support for copying the images to a ‘staging area’, as mentioned earlier. I assume a passwordless login setup and hence do an ‘scp’ once the desired image has been created. This is how you would use it standalone.

Now, as a first step towards being able to distribute build jobs to different nodes, I have also added simple support for carrying out the build process on a different host. This is done by the file

Here is what it does:

  • It assumes that the config/ and kickstarts/ files have been correctly set up by the web-form handler or manually.
  • It copies these files to the image_builder/ directory.
  • It creates a .tar archive of the image_builder/ directory.
  • It reads the appropriate node (architecture) from the nodes.conf file and retrieves the working location specified there.
  • The .tar file is then ‘scp’-ed to the specified location.
  • It then runs the script on the build node by ‘SSH’-ing in.
  • The image is then automatically transferred to the staging area specified earlier.
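
The transfer steps above can be sketched as subprocess-style command lists (the node name, working directory and ‘main.py’ entry point are all hypothetical; passwordless SSH is assumed, as described):

```python
def scp_command(archive, node, workdir):
    # copy the .tar archive to the build node over passwordless SSH
    return ['scp', archive, '%s:%s' % (node, workdir)]

def remote_build_command(node, workdir, archive):
    # unpack the archive and run the build as root on the node;
    # 'main.py' stands in for the actual build script name
    return ['ssh', node,
            'cd %s && tar -xf %s && sudo python main.py'
            % (workdir, archive)]
```

Each list would be handed to `subprocess.call()` in turn.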

To try this feature, simply set up the nodes.conf file correctly, along with the config/ and kickstarts/ in data/, and run $ sudo python

Note that the specified nodes should have all the dependencies installed, such as lorax, koji (set up correctly), pykickstart and livecd-creator.


GSoC 2012: On-Demand Fedora Build Service: Update #4

In my last update, I reported that I had the basic code to create a boot.iso including extra packages specified by the NVR or Koji build IDs.

A couple of days back, I added support for creating a DVD ISO (using ‘pungi’). Basically, the code requires a pungi.conf file to be specified. For example, here is a sample pungi.conf:


#Specify a working directory
# specify packages via NVR
# specify packages via Build IDs (separated by a semicolon)

If you are familiar with pungi, you will notice the resemblance of the configuration options to the pungi command line options. The code basically reads this configuration file and invokes pungi appropriately.

As you can see, the last two options allow you to specify more recent builds of packages (via NVR and Build IDs) that are not yet available in any of the release repositories, and include them instead of the older ones. Since pungi requires a kickstart file to be specified, I update the kickstart file by adding the side repository URL to the list of repositories specified (script here at [1]). (I have hit a problem with this step:
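
The kickstart update can be sketched like this (the function and repo name are illustrative, not the script at [1]): the side repository URL is added alongside the existing repo lines.

```python
def add_side_repo(ks_text, repo_url):
    # append a 'repo' directive for the side repository, placed right
    # after the last existing repo line so the ordering is preserved
    lines = ks_text.splitlines(True)
    idx = max((i for i, l in enumerate(lines)
               if l.startswith('repo ')), default=-1) + 1
    lines.insert(idx,
                 'repo --name=side-repo --baseurl=%s\n' % repo_url)
    return ''.join(lines)
```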

The updated code is now in the git repository [2], and a sample command line to build a DVD ISO would be: python -t dvd -a i686 (after you have created the appropriate pungi.conf file). You can also use the run_imagebuild shell script for the same.



I am beginning to think of modifying the earlier boot.iso code to read a configuration file instead of command line arguments, like I have done for the DVD ISO. This keeps in mind the fact that this code will really be the backend of the web-based build service. Hence, .conf files created at the web-based frontend and sent to this image builder code would be a good way to go about it, methinks. No? We will see.

Next up will be support for creating Live media.

GSoC 2012: On-Demand Fedora Build Service: Update #3

I have pushed a working snapshot of the image building code to github[1]. Here is a sample run of the code:

$ sudo python -t boot -a i386 -o image_op1 -w image_work -p fedora -r 17 -v 1 -nvr 'anaconda-17.26-1.fc17' -bid '318281' '311809'

This command line spawns the build process of a Fedora 17 boot.iso with a number of extra packages (specified via their NVR or build IDs):


Downloading Extra Packages

Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
Building Boot ISO
checking for root privileges
checking yum base object
setting up build architecture
setting up build parameters
installing runtime packages
running runtime-install.tmpl

downloading packages
( 1/592) [100%] downloading GConf2-3.2.5-1.fc17.i686.rpm
( 2/592) [100%] downloading ModemManager-
( 3/592) [100%] downloading NetworkManager-
( 4/592) [100%] downloading NetworkManager-glib-

Finally, a boot.iso is created. Next, I plan to integrate package retrieval from Koji via other methods. And then, support for creating Live images and DVD images. And then, the REST API/web-based service.

Some of the scripts I was experimenting with to pull packages are here [2].


GSoC 2012: On-Demand Fedora Build Service: Update #2

Over the past week, I gained some familiarity with lorax [1]. lorax is used to create the boot.iso, and pylorax is used by pungi [2] and livemedia-creator [3] to create the DVD installer and Live images of the various spins, respectively.

[1] lorax
[2] pungi
[3] livemedia-creator

Having had a basic idea of how lorax works, I then proceeded to use pylorax to create a boot.iso, building upon the image building code Tim Flink had sent me during our project discussions. A build is now in progress as I write this.

My next plan is to integrate the creation of the side repository from extra packages retrieved from Koji so that newer builds of packages are included in the boot.iso.

I will also start using configuration files for specifying the repository/mirror information, architecture, release, etc. By next week, I should have this code in my github.

GSoC 2012: On-Demand Fedora Build Service: Update #1

A key component of the project is downloading packages from Koji. Over the past few days, I have been playing around with Koji client functionalities to get some familiarity with listing/retrieving packages from the build service. (Setup instructions)

Once I set up Koji, I started playing around with the client code that Tim Flink had sent me earlier. I adapted Tim’s code to create a script to download RPMs from Koji and create a side repository with them. The Python code is called:

Next, I wanted a script that would download the latest build of a package for a particular tag from Koji. For this, I used from Autoqa‘s code. The code is called: As of now, this script just downloads the RPMs for each of the tags.
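
The “latest build for a tag” lookup boils down to something like this sketch of the koji XML-RPC API shape (the session here is stubbed; the real script creates one with koji.ClientSession and then downloads the build’s RPMs):

```python
def latest_build_nvr(session, tag, package):
    # listTagged with latest=True returns at most the newest build of
    # the package under the given tag
    builds = session.listTagged(tag, package=package, latest=True)
    return builds[0]['nvr'] if builds else None

class FakeSession:
    # stands in for koji.ClientSession in this sketch
    def listTagged(self, tag, package=None, latest=False):
        return [{'nvr': 'anaconda-17.26-1.fc17'}]
```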

Both these scripts are available here: