/proc/cpuinfo on various architectures

The /proc/cpuinfo file contains runtime information about the processors on your Linux computer (including your Android phone). For example, here is what it looks like on my phone:

u0_a123@android:/ $ cat /proc/cpuinfo
Processor       : ARMv7 Processor rev 1 (v7l)
processor       : 0
BogoMIPS        : 1592.52

processor       : 1
BogoMIPS        : 2388.78

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 1

Hardware        : SMDK4210
Revision        : 000e
Serial          : 304d19f36a02309e

Depending on what you are looking for, this is all useful information. Since /proc/cpuinfo is a plain text file, you can parse it with a shell script or another programming language (see my earlier article on this topic, which used CPython) and mine the data you are after. For projects such as lshw and Beaker, this information is quite vital too. However, one problem with this file is that the information varies with the hardware architecture, both in its presentation format and in the fields available. If you compare the contents on your Intel/AMD desktop or laptop with the output above, you will see what I am talking about. Hence, any tool or script that reads data from this file and hopes to work across architectures must take these differences into account.
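
To illustrate the parsing part, here is a minimal Python sketch (not the code from my earlier article) that reads /proc/cpuinfo into a list of dictionaries, one per blank-line-separated block. The field names looked up at the end ("model name", "Processor") are only examples; they differ across architectures, which is exactly the point of this post:

#!/usr/bin/env python
# Minimal sketch: parse /proc/cpuinfo generically into "key : value" blocks.
def parse_cpuinfo(path="/proc/cpuinfo"):
    blocks, current = [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                # a blank line ends the current block of fields
                if current:
                    blocks.append(current)
                    current = {}
            elif ":" in line:
                key, _, value = line.partition(":")
                current[key.strip()] = value.strip()
    if current:
        blocks.append(current)
    return blocks

if __name__ == "__main__":
    for block in parse_cpuinfo():
        # "model name" is typical on x86; "Processor" on older ARM kernels
        print(block.get("model name") or block.get("Processor", "<unknown>"))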

I won’t attempt to guess why they are different. However, I will share how to find out what information you may encounter in this file on different architectures. This post is admittedly half-baked and may not answer all your questions, but I think I am on the right track.

Get the Kernel sources

Download the Linux kernel sources (as a tarball from http://kernel.org or by cloning https://github.com/torvalds/linux/). The arch/ subdirectory contains the architecture-specific code; in all, you will see 31 subdirectories there: alpha, arc, arm, arm64, and others. The links in the rest of this post point to a cross-referenced copy of the sources, so you may not need to download them.

Definition of cpuinfo_op

For each of the above architectures, one file defines a cpuinfo_op variable of type struct seq_operations. For example, for the arm64 architecture, this variable is defined in arch/arm64/kernel/setup.c and looks like this:

const struct seq_operations cpuinfo_op = {
        .start  = c_start,
        .next   = c_next,
        .stop   = c_stop,
        .show   = c_show
};

The key member assignment for our purpose here is the .show attribute, a function pointer to the c_show() function. This is the function that produces the information you see in /proc/cpuinfo. So, for example, the c_show() function for arm64 is here, and you can recognise the fields shown earlier in this post. (I can’t see “Serial” there; I am not sure why yet, and I still have to figure out whether it is even the right architecture for the output above, but you get the idea, I hope.)

You can search for cpuinfo_op to see the file for each arch where it is defined. The function that the .show member points to produces the information shown in /proc/cpuinfo. Note that the function name can differ; for example, it is show_cpuinfo() for s390x.

Examples

For an example of how the architecture-specific information can be handled in a C/C++ program or tool using architecture-specific macros, see lshw’s cpuinfo.cc file. For a shell script or a Python program, branching on the output of uname (via os.uname() in CPython) is one possible approach.
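
For instance, a Python script could first check the machine architecture and then decide which fields to look for. The mapping below is purely illustrative (only two architectures, with a guess at useful fields), but it shows the shape of the approach:

import os

machine = os.uname()[4]   # e.g. 'x86_64', 'armv7l', 'aarch64', 's390x'

# Illustrative only: which /proc/cpuinfo fields to report per architecture.
FIELDS = {
    "x86_64": ["model name", "cpu MHz", "flags"],
    "armv7l": ["Processor", "Features", "Hardware"],
}

wanted = FIELDS.get(machine, [])
with open("/proc/cpuinfo") as f:
    for line in f:
        key = line.partition(":")[0].strip()
        if key in wanted:
            print(line.rstrip())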

GSoC 2012: On-Demand Fedora Build Service: Update #6

The code has seen a number of cleanups and feature additions since the last update.

Major Additions:

  • Basic Web UI functional: The web interface is quite clunky and ugly, but functional. It is a Flask web application.
  • Celery for task delegation: No more SSH/SCP. The build service now uses Celery (with AMQP as the broker) for task delegation, with workers designated as build nodes. The Celery daemon, however, needs to run as root, since the image-building process requires root access. Configuration is specified via the webapp/nodes.conf file.
  • FTP Image Hosting: You can specify an FTP server (with anonymous login enabled) to send your images to. The images are automatically copied to the FTP server by the code.

Build Service Web UI

Cleanups

  • Single configuration file to specify the type of image to create
  • No hard-coded directory structure to be created
  • Miscellaneous others

High-level Functioning

  1. The user specifies the image creation information via the Web UI
  2. This information is used to create the imagebuild.conf configuration file
  3. A new process delegates the task to a Celery worker (celeryd must be running on the workers); a minimal sketch of such a task is shown below
  4. The worker builds the image and transfers it to the specified FTP host
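
The following is only a rough sketch of what such a Celery task could look like, not the project’s actual code; the imagebuilder command is a hypothetical stand-in for the real (root-requiring) image-building step, and the broker URL is an assumption:

# Sketch only: a Celery task a build node could run to build an image
# and push it to an anonymous FTP server.
import subprocess
from ftplib import FTP

from celery import Celery

app = Celery("builder", broker="amqp://guest@localhost//")  # assumed broker URL

@app.task
def build_appliance(config_path, ftp_host):
    image_path = "/tmp/output.iso"
    # Hypothetical command standing in for the real image-building step.
    subprocess.check_call(["imagebuilder", "--config", config_path,
                           "--output", image_path])
    # Copy the finished image to the FTP server (anonymous login).
    with open(image_path, "rb") as f:
        ftp = FTP(ftp_host)
        ftp.login()
        ftp.storbinary("STOR output.iso", f)
        ftp.quit()
    return image_path

# The web application would then delegate with something like:
#   build_appliance.delay("imagebuild.conf", "ftp.example.org")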

Todo

Here are the things I intend to work on next:

  • Script to install dependencies
  • Finish the cli client in webapp/
  • x86_64 images
  • Automatically copy the worker_src to celery workers
  • Unit testing
  • Implement error handling on the client UI
  • Logging
  • Email notification
  • Enhanced UI (dynamic forms?)
  • Need to think more on how the images will be stored and ability to identify an existing image uniquely. Timestamp?

While most other tasks in this project will be solved with time and effort, I am seriously concerned about the final look and feel of the Web UI. I would like it to look pretty, besides being functional.

GSoC 2012: On-Demand Fedora Build Service: Update #2

Over the past week, I gained some familiarity with lorax [1]. lorax is used to create the boot.iso and pylorax is used by pungi [2] and livemedia-creator [3] to create the DVD installer and Live images of the various spins, respectively.

  1. lorax
  2. pungi
  3. livemedia-creator

Having got a basic idea of how lorax works, I then proceeded to use pylorax to create a boot.iso, building upon the image-building code Tim Flink had sent me during our project discussions. A build is in progress as I write this.

My next plan is to integrate the creation of the side repository from extra packages retrieved from Koji so that newer builds of packages are included in the boot.iso.

I also plan to start using configuration files for specifying the repository/mirror information, architecture, release, etc. By next week, I should have this code on my GitHub.

The Case for Virtual Appliances

Virtual appliances are custom-made operating system images built to serve a particular need, say, a web server appliance. They are built upon a base operating system, with custom-selected and configured software installed on top of it. They can be graphical or can be operated as headless systems. Usually they are run using virtualization software such as VirtualBox, or, more recently, on one of the supported cloud computing services, such as EC2.

Of late, I have been exploring BoxGrinder (an article is coming up in Linux Magazine in May 2012). It makes creating virtual appliances really easy, and it has made me think of a few ways, strictly in an academic/research setting, in which virtual appliances may be useful:

Generic Use-cases

  • Provide a Linux-based environment in a non-Linux environment, from a personal desktop to a computing lab. This can be easily achieved using VirtualBox. Each user will have a personal copy of your appliance running. No remote login setup, no central server management.
  • If you are working on a tutorial/book covering a programming language whose compilers/interpreters depend on a Linux distribution to function correctly, why not make a virtual appliance available to your readers? It will save you from having to make sure that the code works on your readers’ OS, and will save your readers the pain of installing Linux on their computers. Even if you assume that readers may install a Linux distribution themselves using virtualization software, providing a virtual appliance makes their life a lot easier.

Specific Use-cases Demo

Available solutions

  • Oz (Need to check this out!)
  • BoxGrinder
  • Ubuntu JeOS (looks inactive)
  • Create your own appliance: An appliance is basically a “virtual” hard-disk file. So, if you want an appliance based on a distribution/operating system for which there is no appliance creator, just install the distribution onto a virtual hard disk, install the desired software, remove the undesired ones, and ship.

Parallel Computing Test Bed using Virtual Appliances

For a recent article of mine, I had to do some experimentation with OpenMPI. I wanted to make sure that the basic code snippets accompanying my article worked as expected, and since I did not want to take the pains of setting up a real cluster, I decided to set up a virtual cluster using VirtualBox.

Virtual Appliances using BoxGrinder

Now, I could certainly install one of the available Linux distributions onto each of my virtual machines (say, three). But why not go for something more barebones, such as a virtual appliance? I had also been playing around with BoxGrinder, and thought of creating a simple parallel computing node virtual appliance with some of the software you might need for it. Here is the appliance definition file (f16-node.appl):

name: f16-node
summary: A Parallel Computing Node appliance based on Fedora 16
version: 1
release: 0
os:
  name: fedora
  version: 16
hardware:
  partitions:
    "/":
      size: 2.0
packages:
  - @core
  - @development-tools
  - gsl
  - openmpi
  - openmpi-devel
  - python-pp
  - scipy
  - ipython
  - python-pip
  - screen

As you can see, this appliance file defines a Fedora Linux based virtual appliance including some parallel computing libraries and a few other miscellaneous utilities. Now, create a virtual appliance for VirtualBox using:

$ boxgrinder-build f16-node.appl -p virtualbox

Once the build process is complete, locate the f16-node.vmdk file under the build/ sub-directory.

Cloning the Virtual Box appliance

Now that we have our VirtualBox image ready, we shall clone it using the VBoxManage utility’s clonehd command. Create two clones of the f16-node.vmdk file, so that you have three .vmdk files with exactly the same software. Cloning saves us the hassle of creating three separate virtual appliances.
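
If you prefer to script the cloning, a rough Python sketch along these lines should work (the file names are illustrative; adjust the source path to wherever boxgrinder-build placed the .vmdk):

# Sketch: create two clones of the appliance disk image with VBoxManage.
import subprocess

SOURCE = "f16-node.vmdk"   # the image produced under the build/ sub-directory
for i in (2, 3):
    target = "f16-node-%d.vmdk" % i
    subprocess.check_call(["VBoxManage", "clonehd", SOURCE, target])
    print("created %s" % target)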

Now, create three new virtual machines, each using one of the above hard disks. While setting them up, remember to set up Bridged Networking so that each virtual machine gets an internally (local network) accessible IP address. Check from your host machine whether you can ssh to the virtual machines. If yes, you are good to go!

Let me know if that doesn’t work for you.

Related: My earlier post on creating an ownCloud appliance using BoxGrinder.

Fedora Scientific: Open Source Scientific Computing

Hello Fedora People, this happens to be my first aggregated post on Planet Fedora! Great to be here. Onto real stuff.

Okay, this post comes at a time when December is already upon us and Fedora 16 has been out for a month, which also means that Fedora Scientific has seen the light of day for a month now. This felt like a good time to describe the current state of the project and my plans for the next release(s).

Software in Fedora Scientific (Fedora 16)

The current list of software in Fedora Scientific is available at [1]. Briefly, it includes:

Scientific computing tools and environments: The numerical computing package GNU Octave, the Maxima front-end wxMaxima, the Python scientific libraries SciPy and NumPy, and Spyder (a Python environment for scientific computing) are some of the software included in this category. A development environment for R, the statistical computing environment, is also included, as are the ROOT tools for analysing large amounts of data.

Generic libraries: Software in this category includes the GNU C/C++ and FORTRAN compilers, the OpenJDK Java development tools, and the IDEs NetBeans and Eclipse. Also included are autotools, flex, bison, ddd and valgrind.

Parallel and distributed programming tools/libraries: This category includes the popular parallel programming libraries OpenMPI and PVM, and the shared-memory programming library OpenMP. Also included is the Torque resource manager, to enable you to set up a batch-processing system.

Editing, drawing and visualisation tools: So you have simulated your grand experiments, and need to visualise the data, plot graphs, and create publication-quality articles and figures. The tools included to help you here are LaTeX compilers and the Texmaker and Kile editors, the plotting and visualisation tools Gnuplot, xfig, MayaVi, Dia and GGobi, and the vector graphics tool Inkscape.

Version control, backup tools and document managers: Version control and backup tools are included to help you manage your data and documents better: Subversion, Git and Mercurial are available, along with the backup tool backintime. Also included is a bibliography manager, BibTool.

Besides these four main categories, some of the other miscellaneous utilities included are hevea (the awesome LaTeX-to-HTML converter), GNU Screen and IPython.

As you can see, the list of software is quite extensive, thanks to the awesome Fedora developers who have packaged this gamut of software.

Future Plans

The current release marks the beginning of a project very close to my heart. I feel that such a spin will definitely be useful to current Linux community members and to future enthusiasts who use Linux for their computing needs. In the next release(s), I intend to explore the following directions for the spin:

  • A GNOME based spin in addition to the current KDE spin
  • Custom wallpapers
  • Work with the websites team to update the Spin website to include high quality images of scientific software and more content
  • Collect feedback from the community and act on it  :-)

Talk to Us, Contribute

Come, talk to us on the Fedora SciTech SIG mailing list [2]. Thanks to all the members of SciTech SIG for their useful discussions and comments.

Acknowledgements

Thanks to the Fedora Artwork and Fedora Websites teams for help with the artwork for the spin, to Bill Nottingham for the initial comments on the idea, and to Christoph Wickert for seeing the spin through to release. To all the other people who contributed, even with a single word of encouragement, online or offline: please accept my sincere thanks.

References

[1] https://fedoraproject.org/wiki/Scientific_Packages_List
[2] http://fedoraproject.org/wiki/SIGs/SciTech

Parts of this blog post have been reproduced from my article on the Fedora Scientific Spin published in the December 2011 issue of Linux For You.

Fedora Scientific: The Prologue

The Itch

When I wrote this article [1] a while back, the intention was to publicize the software tools I was personally using at the time to help me in my research work: plotting graphs, analysing data, writing papers, running simulations, etc. Those tools soon became indispensable for my research, and hence I always installed them first after a fresh install of Linux. I longed for a Linux distro that would already have these tools installed and allow me to have a fully functional Linux workstation from the first boot.

The Scratching begins

I was getting wary of Ubuntu after their last release (April 2011) and was looking for a new distribution to commit to, so I thought I would give Fedora a shot on one of my computers (the last time I tried Fedora was during the Fedora Core days). Then I started looking around for ways to create custom Fedora spins and came across the live CD tutorial for Fedora [2]. And that’s pretty much all I needed to get started working on a Linux for users in science and academia: Fedora Scientific.

Discussions on Mailing lists

The most fruitful technical part of the discussion happened on the Canberra Linux Users Group mailing list [4]. Thanks to all the folks who made suggestions for various packages and, more importantly, opined that the spin would be useful to the target audience.

Fedora Spins SIG

The official word on whether the proposed spin would be useful to the Fedora community in particular, and the Linux community in general, came from the Fedora Spins SIG [5]. Thanks for their support and approval.

Where next

Fedora Scientific is officially on course for release alongside Fedora 16 in the next few days. Nightly builds are available from [6].

Talk to Us, Contribute

Come, talk to us on the Fedora SciTech SIG mailing list [7]. Thanks to all the members of the SciTech SIG for their useful discussions and comments. The spin’s wiki page [3] explains it in more detail.

Current List of Packages

The current list of software available in the Fedora Scientific Spin is at [8].

Acknowledgements

Thanks to the Fedora Artwork and Fedora Websites teams for help with the artwork for the spin, to Bill Nottingham for the initial comments on the idea, and to Christoph Wickert for seeing the spin through to release. To all the other people who contributed, even with a single word of encouragement, online or offline: please accept my sincere thanks.

Links

[1] http://linuxgazette.net/173/saha.html
[2] http://fedoraproject.org/wiki/How_to_create_and_use_a_Live_CD
[3] https://fedoraproject.org/wiki/Scientific_Spin
[4] http://lists.samba.org/archive/linux/2011-July/030331.html
[5] http://fedoraproject.org/wiki/SIGs/Spins
[6] http://dl.fedoraproject.org/pub/alt/nightly-composes/
[7] http://fedoraproject.org/wiki/SIGs/SciTech
[8] https://fedoraproject.org/wiki/Scientific_Packages_List

In the next post, which I intend to do soon after the official release, I shall talk about the applications and programs installed in Fedora Scientific.

And last, but by no means the least: Snowy, you make this world a better place for me.

Exploring Arduino: Beginnings on Arch Linux

I ordered the Arduino starter kit from Robot Gear, which came with the Arduino UNO board and a few other essentials: LEDs, a miniature breadboard, connecting wires, etc. The generic Linux instructions in the Arduino playground worked for me without any problems. You will also have to set the permissions for your normal user in Arch Linux to be able to access the serial ports; please see the Arch Wiki page on Arduino here. I was blinking an LED in no time!

Arduino programming

My first impression on looking at Arduino code was that it is based on C/C++, which it is (see here). The Foundations page looks like a handy starting point. Personally, I found the booklet provided by Oomlout a very handy reference: you can just hook up the board, start tinkering with the notes by your side, and pretty much learn the basics.

Development Environment

For now, I am just using the Arduino SDK from here. I intend to switch to a relatively mouse-free environment right after this post by following the tips on the Arch Wiki above and also here.

Next, I describe my first real experiment with the Arduino board.

Blinking  LEDs

This is an Arduino UNO connected to a miniature breadboard, on which you can see a yellow and a red LED connected to two digital pins on the UNO via two 330 Ohm resistors. (Excuse the tissue paper base :-) )

Arduino UNO connected with 2 LEDs on a miniature breadboard

Circuit

The schematic is the same as in CIRC-01, but for two LEDs. So basically you have one wire going from Pin 12 to one LED and another from Pin 13 to the second LED. Each LED is connected to ground via a 330 Ohm resistor.

Arduino code

What we have here is very simple. The two LEDs shall blink alternately with a delay, which you can specify. This is the code for the above Arduino circuit:

/* Multiple LED Blinking program
  Amit Saha

*/  

// constants won't change. Used here to 
// set pin numbers:
const int numPins = 2;
const int ledPin [] =  {12,13};      // the number of the LED pins

int interval = 1000;           // interval at which to blink (milliseconds)

void setup() {
  // Iterate over each of the pins and set them as output
  for(int i=0;i<numPins;i++)
    pinMode(ledPin[i], OUTPUT);      
}

/* Loop until death */
void loop()
{ 
  for(int i=0;i<numPins;i++)
  {    
    digitalWrite(ledPin[i],HIGH);
    delay(interval); 
    digitalWrite(ledPin[i],LOW);
    delay(interval);        
  } 
}

The basic Arduino programming manuals will tell you that the setup() function is executed once when your sketch starts running on the board, and that the loop() function then runs repeatedly until power is removed from the board.

The number of LEDs is given by the integer constant numPins; the two LEDs are connected to digital pins 12 and 13 on the UNO board, as specified by the ledPin[] array. The pins are set to OUTPUT mode using the pinMode() function. That completes the setup() portion of the code.

In the loop() function, we simply iterate over the LEDs: each one is set to HIGH, we wait using delay(), set it to LOW, wait again, and then do the same for the next LED, serially. Once you Verify and Upload the code to the board, you should see the LEDs blinking on and off, one after the other.

As a next step, if you have more LEDs at your disposal, you may want to experiment with CIRC-02.

Article: Getting Started with Inotify

Update: The PDF is now available.

It’s always fun to peek into one of the umpteen features of Linux. In the April 2011 issue of Linux For You, I take a hands-on look at Inotify.

The source code for this article is available at https://bitbucket.org/amitksaha/articles_code/

Book Review: Embedded Linux Primer

My review of Embedded Linux Primer: A Practical Real-World Approach is now on the stands, in the April 2009 issue of Linux For You. Thanks to Linux For You and Pearson Higher Ed. for the review copy.