Wednesday, September 19, 2012

User Specified Content Security Policy


Content Security Policy (CSP) is a declarative policy that restricts what content can load on a page.  Its primary purpose is to mitigate Cross-Site Scripting (XSS) vulnerabilities.  The core issue exploited by XSS attacks is the browser's inability to distinguish between content that is intended to be part of a web application and content that has been maliciously injected into it.
To address this problem, CSP defines the Content-Security-Policy HTTP header, which allows web application developers to create a whitelist of trusted content sources and instruct browsers to only execute or render resources from those sources.  However, it is often difficult for developers to write a comprehensive Content Security Policy for their website.  They may worry about breaking their page by blocking unanticipated but necessary content.  They may not be able to easily change the CSP header for their site, which makes it challenging to experiment with policies until they find one that best protects the page without breaking site functionality.
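As an illustration (the hostnames here are placeholders, not from any real site), a site that serves its own scripts and pulls stylesheets from a single CDN might send a header like this:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self' https://cdn.example.com
```

Anything not matched by the whitelist, such as an injected inline script or a script loaded from an attacker-controlled host, is simply not executed.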
UserCSP changes this!  A developer can now view the current policy applied to their site and create their own custom policy.  They can choose to apply the custom policy to the site, or even combine it with the website's existing policy.  When combining policies, they can choose either the strictest combination of the two or the most lax.  They can locally test the site with the custom policy applied and tweak the policy until they have one that works.
The coolest feature of UserCSP is the Infer-CSP tab.  This feature can help a developer derive a usable and secure policy for their site.  By looking at the content the website loads, the add-on determines the strictest set of CSP rules it can apply to the site without breaking the current page.  The inferred policy is provided in the proper syntax for the CSP Header, so all a developer needs to do is start serving this policy for their site via the CSP header.
Please visit Tanvi's Blog on Mozilla for more information. 

Monday, August 27, 2012

Configure NFS on Ubuntu


Network File System (NFS) lets you share directories with other computers over the network.

In this scenario we are going to configure an NFS server on host 10.1.1.15 and an NFS client on host 10.1.1.17.

1. Prerequisites
    Install the nfs-common package on both the NFS client and the NFS server using the following command.

     $ sudo apt-get install nfs-common

Additionally, we need to install an extra package on the NFS server (10.1.1.15):

    $ sudo apt-get install nfs-kernel-server

This package provides the actual NFS daemon, which listens on both UDP and TCP port 2049. The portmap service should also be listening for instructions on port 111.

2. Create NFS Share on NFS Server (10.1.1.15)
Create a directory to share by running the following command on the NFS server (10.1.1.15).

   $ mkdir /home/kailas

3. Apply Access Control Rules

In our scenario we want only 10.1.1.17 to access the NFS share.

Therefore, open the /etc/exports file in any text editor (such as vi, gedit, or emacs) on the NFS server (10.1.1.15).

Add one of the following lines to /etc/exports, depending on the access you want to grant.

A. Read/Write Permissions

    /home/kailas/     10.1.1.17(rw,sync)

The line above exports the /home/kailas directory to the host with IP 10.1.1.17, with read/write permissions and synchronous mode.


B. Only Read Permissions

If you want to give the client (10.1.1.17) read-only access instead of write access, use the following line in place of the one above.

    /home/kailas/     10.1.1.17(ro,sync)

C. Read/Write + Root privileges

  /home/kailas/    10.1.1.17(rw,sync,no_root_squash)

The line above in /etc/exports exports the /home/kailas directory to the host with IP address 10.1.1.17, with read/write permissions and synchronous mode; in addition, the remote root user is treated as root and can change any file or directory.

D. Read/Write Privilege to all computers on network

 /home/kailas/     *(rw,sync)

The line above exports the /home/kailas directory to any host, with read/write permissions and synchronous mode.


E. Read Privilege to All computers on network

   /home/kailas/     *(ro,sync)

The line above exports the /home/kailas directory to any host, with read-only permissions and synchronous mode.


4. Restart NFS daemon

Use the following command on Ubuntu to restart the NFS service.

$ sudo /etc/init.d/nfs-kernel-server restart 

Note: after any modification you make to /etc/exports, restart the NFS service for your changes to take effect.


5. Mount NFS directory on client (10.1.1.17) machine

The NFS client needs the portmap service; simply install the nfs-common package on the client (10.1.1.17).

   $ sudo apt-get install nfs-common


Make sure portmap service is running:
  $ sudo service portmap status

Sample output:
  portmap start/running, process 4193

If it is not running, start it:
    $ sudo service portmap start

Create a mount directory on the client (10.1.1.17):
  $ sudo mkdir /nfs

Then mount the remote share:
  $ sudo mount 10.1.1.15:/home/kailas /nfs/

To see the contents of the directory, use the following command.
 $ ls /nfs


6. Configure automount

To make this completely transparent to end users, you can automount the NFS file system every time the client boots. Simply edit /etc/fstab to mount the share automatically during boot: use your favorite editor and add a line like this to /etc/fstab:

10.1.1.15:/home/kailas   /nfs/  nfs  defaults  0  0


 7. Appendix

If the steps above don't work, try stopping iptables, or configure iptables rules to allow NFS communication.

# service iptables stop








Friday, April 27, 2012

IRC command help

The goal of this post is to play with some IRC commands.

To Register your nickname:
/msg nickserv register [password] [your@email.address.com]
You should substitute an actual password for [password] and an actual email address for [your@email.address.com].  You don't need the brackets.

To identify yourself to IRC nickserv:
If your nickname is registered you can use the following command to identify to it (ensure your current nickname is that of the one you want to identify to):
/msg nickserv identify [password]
You should substitute an actual password for [password].

There are actually a number of ways to identify to a nickname. You can also identify to a nickname that you are not using at the time.
/nickserv identify [nickname] [password]

Example:
/nickserv identify PeanutButter ILovePeanutButter

To change your password:
/msg nickserv set password [YourNewPassword]

To require users to identify to your nickname with a password (protecting you from identity theft):
/msg nickserv set secure ON

To remove a nickname currently in use:
If your IRC client closed without disconnecting from the server, the server believes you are still online, and you cannot use the nickname until the server notices. Use the following command to resolve this problem.
/nickserv ghost [nickname] [password]
For example, if your nickname is "abc123" and your password is "xyz123", the command is as follows:

/nickserv ghost abc123 xyz123

How do I check if a nickname is registered or identified?
To check if a nickname is already registered, or if someone is identified to a nickname, use the command:
   /ns info nickname

How do I change my email address?
/ns set email password email@address repeatemail@address

Somebody is on my nickname - how can I recover it?
First type:
/ns recover yournickname yourpassword

and then type:
/ns release yournickname yourpassword

After this you can just get back on your nickname.

How can I view what channels I have access in?
/ns alist

How do I view information about my nickname?
 /ns info nickname all

Alternatively, you can use following command:
/nickserv info nickname

Example:
/nickserv info PeanutButter


How do I stop people using my nickname?
First, ensure that your nickname is registered! To prevent people from using your nickname without identifying to it, you must set protection on your nickname. A good setting is 'quick kill', which gives users 20 seconds to identify, after which their nickname will be changed. To do this, use:

/ns set kill quick



"I forgot my password" - how do I recover it?
Keep in mind that passwords are CaSe SeNsItIvE.

/nickserv sendpass [nick] [email address]

The email address that you specify must match the email address that we have on file for the nickname in question.




Monday, March 26, 2012

Unable to ping Guest VM in VirtualBox


Suppose you have installed a guest OS (such as Windows or Ubuntu) in VirtualBox and want to ping it from the host OS. You might not be able to ping it if the VM's network adapter is configured in NAT mode in VirtualBox.

To solve this problem: first, shut down your guest VM. Second, change the guest VM's network adapter setting "Attached to" from "NAT" to "Bridged Adapter".  Also set "Name" to "vmnet1" or another appropriate host interface.

The cause of this problem is that in NAT mode the IP headers of packets going out of the guest VM are rewritten to match the host's network settings, but VirtualBox does not do any kind of reverse NAT for packets originating from the host machine; it only does so for established connections.

Hope this helps!


Monday, November 14, 2011

Email Address verification using Perl script

Checking the correctness of one email address is easy and can be done manually; however, if you want to validate a bunch of email addresses, an automated script is very handy.
I would like to thank my colleague and friend "Sai Sathyanarayam" for giving me this script. I think it might be useful to others, so I am posting it here.

# email.pl file
# Opens the "email.txt" file from the current directory.
# email.txt contains email addresses, one per line, each followed by a , (comma)
use strict;
use warnings;

open(my $fh, '<', 'email.txt') or die "Cannot open email.txt: $!";
while (my $line = <$fh>) {
    chomp($line);
    $line =~ s/,\s*$//;    # strip the trailing comma, if present
    next if $line eq '';   # skip blank lines
    if ($line =~ /^[\w.\-]+\@([\w\-]+\.)+[a-zA-Z]{2,}$/) {
        print "$line is valid\n";
    }
    else {
        print "$line is invalid\n";
    }
}
close($fh);

Sample email.txt file is as follows:
xyz@abc.com,
pqr@mnr.ac.in,

To perform the validation test, run the following command:
$ perl email.pl


Friday, August 26, 2011

JaegerMonkey Architecture

JaegerMonkey is the JavaScript engine used in Firefox 4.0 and later versions. Firefox 3.0 and earlier used the SpiderMonkey JavaScript engine. TraceMonkey, a tracing engine that improves on SpiderMonkey, was used starting with Firefox 3.5.  Before we look into the architecture of JaegerMonkey, let's first glance at its predecessor, the TraceMonkey JavaScript engine.

TraceMonkey Overview
TraceMonkey uses a trace monitor called jstracer. The jstracer monitors a script as it is interpreted by SpiderMonkey. Whenever jstracer sees code that would benefit from native compilation, it activates its recorder. The recorder records the execution and creates NanoJIT low-level intermediate representation, which is then compiled into native code. NanoJIT produces optimized code. More information on TraceMonkey and its architecture diagram is available here.

JaegerMonkey Architecture
JaegerMonkey, used in Firefox 4.0 and above, is a just-in-time (JIT) JavaScript execution engine that produces native code for JavaScript.  Usually JIT engines take an intermediate representation (IR) from a compiler, produce native (machine) code, and execute it on the fly.  JIT engines therefore do not parse the code, check its syntax, or create the intermediate representation (IR) themselves.
Hence, the JavaScript engine in Mozilla Firefox can be divided into two parts: a front-end and a back-end. The front-end is responsible for parsing the script, checking its syntax, and generating the intermediate representation (IR) required for native code generation.  The back-end is responsible for generating native code and for memory management.

In Mozilla Firefox the front-end is SpiderMonkey, which parses the script's syntax and generates an intermediate representation (IR) of it; in SpiderMonkey the IR is the script's bytecode.  This generated bytecode is then fed to the JaegerMonkey JIT engine to be compiled into machine code. JaegerMonkey is a method-based JIT JavaScript engine that compiles scripts into unoptimized machine code.  JaegerMonkey uses Nitro (borrowed from the WebKit project) as its back-end assembler; Nitro handles memory management and code generation in JaegerMonkey.

Nitro contains two parts: an assembler and a memory unit. The assembler handles the code assembly, and the memory unit handles allocation and deallocation of memory for native code. The bulk of the bytecode-to-native-code translation is performed in the mjit::Compiler class, which can be found in js/src/methodjit/Compiler.cpp.  This compiler class translates SpiderMonkey bytecode instructions to their native code equivalents using the AssemblerBuffer and LinkBuffer helper classes.

JaegerMonkey uses inline caches to improve performance; an inline cache performs faster object-type lookups.  JavaScript supports dynamic typing at runtime. To support this feature, SpiderMonkey's JSOP_GETPROP bytecode returns the value of a specific property by looking up its type first. SpiderMonkey uses a property cache that stores the Shape of existing objects.  Shape is a structure in SpiderMonkey that defines how an object can be accessed.

Inline Caching for good locality
When the JIT compiles a property-access bytecode, the emitted machine code looks like this:


type                     <- load addressof(object) + offsetof(JSObject, type)
shapeIsKnown    <- type equals IMPOSSIBLE_TYPE
None                   <- goto slowLookupCode if shapeIsKnown is False
property              <- load addressof(object) + IMPOSSIBLE_SLOT

JaegerMonkey uses self-modifying code to inline-cache the Shape of the object. Self-modifying code is code that modifies code that currently exists in memory.  The first time JaegerMonkey performs a property access on an object, its shape is unknown, so shapeIsKnown is false and slowLookupCode is executed.  After slowLookupCode resolves the property, it fills in the appropriate values for IMPOSSIBLE_TYPE and IMPOSSIBLE_SLOT.  The next time this piece of code is executed, if the type of the object has not changed, shapeIsKnown is true and there is no need to enter slowLookupCode.  This technique of modifying JIT-compiled code to reflect a probable value is called inline caching: inline, as in "in the emitted code"; caching, as in "cache a probable value".

However, JavaScript supports dynamic typing; this is handled by polymorphic inline caching (PIC).  Let's consider an example of code that exercises a PIC:

var vals = [1, "hello", [1, 2, 3]];
for (var i in vals) {
   document.write(vals[i].toString());
}

In the above code, the vals array contains different data types: a Number, a String, and an Array. For each object in the array, the interpreter has to perform an expensive type lookup to determine the correct toString method to call.  JaegerMonkey uses PIC slots to solve this problem, that is, it builds a chain of cache entries. It creates several blocks of native code that perform property lookups for the types the object has already been seen as. If the first type does not match, a branch is taken to the next code block to perform a lookup. If the type matches, a fast slot lookup is performed.  In our example, the first lookup sees a Number and fills a cache entry for it. The second time the value is a String, so a new piece of code memory is created for the String type, and the jump taken on a type mismatch in the first lookup (the Number lookup, in our example) is patched to go to this newly created piece of code memory instead of slowLookupCode, and so on.
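The chain-of-cache-entries idea can be sketched as a toy Python simulation (this is not JaegerMonkey code; CallSiteCache, Num, Str, and to_string are all made-up names for illustration). Each call site keeps a small chain of (type, method) entries: a hit on the chain skips the slow lookup, while a miss falls through to the slow path and appends a new entry for the newly seen type.

```python
# A toy polymorphic inline cache: each call site keeps a chain of
# (type, method) stubs.  Real PICs patch native code; here the "patching"
# is just appending to a Python list.
class CallSiteCache:
    def __init__(self):
        self.entries = []  # chain of (type, method) stubs seen so far

    def lookup_to_string(self, obj):
        t = type(obj)
        for cached_type, method in self.entries:   # fast path: walk the chain
            if cached_type is t:
                return method
        # slow path: full lookup, then extend the chain for next time
        method = getattr(t, "to_string")
        self.entries.append((t, method))
        return method

class Num:
    def __init__(self, v): self.v = v
    def to_string(self): return str(self.v)

class Str:
    def __init__(self, v): self.v = v
    def to_string(self): return self.v

cache = CallSiteCache()
vals = [Num(1), Str("hello"), Num(2)]
out = [cache.lookup_to_string(v)(v) for v in vals]
print(out)                  # ['1', 'hello', '2']
print(len(cache.entries))   # 2 -- one stub per distinct type seen
```

Note that the third element (another Num) hits the first stub in the chain and never reaches the slow path, which is exactly the locality the PIC exploits.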


Friday, April 1, 2011

How to Merge Multiple PDF files into single PDF file on Ubuntu



Multiple PDF files can be merged into a single PDF in two different ways: with Ghostscript or with pdftk.

A. Use Ghostscript to merge PDF files
Steps:
1. Install the two packages, Ghostscript and the PDFtk tools.
 $ sudo apt-get install gs pdftk

2. Use the following command to combine multiple files into a single PDF file. The output file name is "singleCombinedPdfFile.pdf". The input files are all PDF files in the current directory, because we used "*.pdf".

$ gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=singleCombinedPdfFile.pdf -dBATCH *.pdf

If you want to join the PDF files in a specific order, you can list the file names explicitly.
$ gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=singleCombinedPdfFile.pdf -dBATCH 1.pdf 2.pdf 3.pdf

B. Use pdftk (PDF toolkit) to merge multiple PDF files into Single PDF file
1.  To merge PDF files by using names of the source PDF files:
$ pdftk one.pdf  two.pdf  three.pdf  cat  output  123-combined.pdf

2. To merge PDF files using a wildcard, when the number of files is large and it is not feasible to type every filename:
$ pdftk *.pdf cat output combined.pdf


3. To select specific pages from multiple PDFs and create a new PDF document:
$ pdftk A=one.pdf B=two.pdf cat A1-7 B1-5 A8 output combined.pdf

Monday, March 14, 2011

Embed fonts in PDF file using PDFLaTex


This post explains how to embed fonts in a PDF file.
Embedding the fonts in the PDF file is useful when you are preparing a paper for conference submission, or when you want to ensure that your PDF looks exactly the same on other machines as it does on your computer.
In this post I will explain how to do it on a Linux machine; I am not sure how to achieve the same on Windows.
We will use the tool "pdffonts" to examine the PDF file.

$ pdffonts mypaper.pdf
name                               type          emb sub uni object ID
---------------------------------- ------------- --- --- --- ---------
HVGYIY+NimbusRomNo9L-Medi          Type 1        yes yes no      110  0
TFVQMQ+NimbusRomNo9L-Regu          Type 1        yes yes no      111  0
XHGNKU+NimbusRomNo9L-MediItal      Type 1        yes yes no      113  0
UUGCZC+NimbusRomNo9L-ReguItal      Type 1        yes yes no      114  0
FDULPW+CMSY7                       Type 1        yes yes no      148  0
SPCNWZ+NimbusMonL-Regu             Type 1        yes yes no      150  0
ABCDEE+Times                       TrueType      yes yes no      152  0
Arial                              TrueType      no  no  no      153  0
Arial                              CID TrueType  yes no  yes     154  0
Arial                              TrueType      no  no  no      220  0
Arial                              CID TrueType  yes no  yes     221  0
ABCDEE+Times                       TrueType      yes yes no      222  0
Arial,Italic                       TrueType      no  no  no      223  0
ZLLMAJ+CMMI10                      Type 1        yes yes no      257  0
Arial                              TrueType      no  no  no      259  0
ABCDEE+Calibri                     TrueType      yes yes no      260  0
Arial,Italic                       TrueType      no  no  no      261  0
Arial                              TrueType      no  no  no      282  0
Arial,Italic                       TrueType      no  no  no      283  0

$

The important columns are "name" and "emb".  The "name" column displays the name of the font, and the "emb" column shows whether that font is embedded in your PDF file: "yes" in the "emb" column indicates that the font is embedded, and "no" indicates that it is not.
For example, in the above output, the Arial and Arial,Italic fonts are not embedded in the PDF file.

To embed the un-embedded fonts into your PDF file using PDFLaTex:
$  updmap --edit 
The above command will open the configuration file for pdflatex.
Find the pdftexDownloadBase14 directive and make sure it is true. That is, when you're done, the following line should be in the file:
pdftexDownloadBase14 true

Save the file and rebuild your PDF file using "pdflatex".
Then check your PDF file using the "pdffonts" command. All the fonts used in your PDF file should now be embedded.
If some fonts are still missing, it might be because you have embedded another PDF file (as a graphic) into your "mypaper.pdf" file.
In that case, you need to embed the fonts into those embedded PDF files as well.

If you included figures in your PDF file then follow the steps given below:
1.  Convert your PDF file to PS file
  $ pdftops  mypaper.pdf

2. Convert back ps file to pdf using "prepress" settings
  $ ps2pdf14 -dPDFSETTINGS=/prepress mypaper.ps


Conversion from PDF to PS and back from PS to PDF may cause some formatting errors. I recommend double-checking your PDF file for formatting errors.


3. Check PDF fonts using pdffonts command
  $ pdffonts mypaper.pdf

Friday, March 11, 2011

LibXML Tutorial


In this blog post I will show some basic functions of libxml, which is a freely licensed C-language XML library.
This post gives beginners an idea of how to manipulate XML files using libxml library functions. It does not cover the whole XML API available in libxml; it just shows how to use the libxml APIs with the help of some basic functions.

For detailed XML API list please visit official website of libxml.

To Parse XML file:
xmlDocPtr doc;  // pointer to parse xml Document
  
  // Parse XML file
  doc = xmlParseFile(xmlFileName);

  // Check to see that the document was successfully parsed.
  if (doc == NULL ) {
    fprintf(stderr,"Error!. Document is not parsed successfully. \n");
    return;
  }


To Get the root Document:

// Retrieve the document's root element.
  cur = xmlDocGetRootElement(doc);

  // Check to make sure the document actually contains something
  if (cur == NULL) {
    fprintf(stderr,"Document is Empty\n");
    xmlFreeDoc(doc);
    return;
  }


To Get the child Nodes of the current node element:

  cur = cur->xmlChildrenNode;


To Search for an attribute:

// search for "hash" attribute in the node pointed by cur
 attr = xmlHasProp(cur, (const xmlChar*)"hash");


To add new Attribute:

/*
 * New Attribute "hash" is added to element node pointed by cur,
*  and default value of the attribute is set to "12345678"
 */
 attr = xmlNewProp(cur, (const xmlChar*)"hash", (const xmlChar*)"12345678");


To Save XML document to Disk:

xmlSaveFormatFile (xmlFileName, doc, 1);



Complete Example is given below:
Suppose data.xml file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE root SYSTEM "secPolicy2.dtd">
<root>
  <url>
    <host hash="12345678">www.example1.com</host>
    <sctxid>2</sctxid>
  </url>
  <url>
    <host>www.example2.com</host>
    <sctxid>2</sctxid>
  </url>
    <url>
    <host>www.example3.com</host>
    <sctxid>3</sctxid>
  </url>

</root>

The following program reads the above XML file, supplied as a command-line argument.
It adds a "hash" attribute with a default value of "12345678" if one is not present on the "host" element node.

/*
 * Filename = xmlexample.c
*/
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <libxml/xmlmemory.h>
#include <libxml/parser.h>

/*
 * Parse a URL element node in the XML file:
 * <url>
 *    <host hash="hash_val_of_hostname">www.example.com</host>
 *    <sctxid>Integer</sctxid>
 * </url>
 */
void parseURL (xmlDocPtr doc, xmlNodePtr cur) {
  xmlChar *key;
  xmlAttrPtr attr;

  // Get the children element nodes of the "url" node
  cur = cur->xmlChildrenNode;

  while (cur != NULL) {
    // check for the "host" child element node of the "url" node
    if ((!xmlStrcmp(cur->name, (const xmlChar *)"host"))) {
      key = xmlNodeListGetString(doc, cur->xmlChildrenNode, 1);
      fprintf(stderr, "host: %s\n", key);
      xmlFree(key);

      // search for the "hash" attribute in the "host" node
      attr = xmlHasProp(cur, (const xmlChar*)"hash");

      // if the attribute is not found, add it with a default value
      if (attr == NULL) {
        attr = xmlNewProp(cur, (const xmlChar*)"hash", (const xmlChar*)"12345678");
      }

      // The attribute now exists; retrieve its value and display it
      key = xmlGetProp(cur, (const xmlChar*)"hash");
      fprintf(stderr, "hash: %s\n", key);
      xmlFree(key);
    } // end of "host" block

    // check for the "sctxid" child element node of the "url" node
    if ((!xmlStrcmp(cur->name, (const xmlChar *)"sctxid"))) {
      key = xmlNodeListGetString(doc, cur->xmlChildrenNode, 1);
      fprintf(stderr, "sctxid: %s\n", key);
      xmlFree(key);
    } // end of "sctxid" block

    cur = cur->next;
  } // end of while loop

  return;

} // end of parseURL function()

/*
 * Parsing the XML file and Reading the Element Nodes
 */
static void parseDoc(char *xmlFileName) {
  xmlDocPtr doc;  // pointer to parse xml Document
  xmlNodePtr cur; // node pointer. It interacts with individual node

  // Parse XML file
  doc = xmlParseFile(xmlFileName);

  // Check to see that the document was successfully parsed.
  if (doc == NULL ) {
    fprintf(stderr,"Error!. Document is not parsed successfully. \n");
    return;
  }

  // Retrieve the document's root element.
  cur = xmlDocGetRootElement(doc);

  // Check to make sure the document actually contains something
  if (cur == NULL) {
    fprintf(stderr,"Document is Empty\n");
    xmlFreeDoc(doc);
    return;
  }

  /* We need to make sure the document is the right type.
   * "root" is the root type of the documents used in user Config XML file
   */
  if (xmlStrcmp(cur->name, (const xmlChar *) "root")) {
    fprintf(stderr,"Document is of the wrong type, root node != root");
    xmlFreeDoc(doc);
    return;
  }

  /* Get the first child node of cur.
   * At this point, cur points at the document root,
   * which is the element "root"
   */
  cur = cur->xmlChildrenNode;

  // This loop iterates through the elements that are children of "root"
  while (cur != NULL) {
    if ((!xmlStrcmp(cur->name, (const xmlChar *)"url"))){
      parseURL (doc, cur);
    }
    cur = cur->next;
  }

  /* Save the XML document to disk.
   * Otherwise, your changes will not be reflected in the file;
   * currently they exist only in memory.
   */
  xmlSaveFormatFile (xmlFileName, doc, 1);

  /*free the document */
  xmlFreeDoc(doc);

  /*
   * Free the global variables that may
   * have been allocated by the parser.
   */
    xmlCleanupParser();

  return;

} // end of XMLParseDoc function


int main(int argc, char **argv) {
  char *xmlFileName;

  if (argc <= 1) {
    printf("Usage: %s inputfile.xml\n", argv[0]);
    return 1;
  }

  // Get the file name from the argv[1]
  xmlFileName = argv[1];

  // Custom function to parse XML file
  parseDoc (xmlFileName);

  return 0;
}


To compile the above program, use the following command:
$ gcc -o xmlexample xmlexample.c `xml2-config --cflags --libs`

To run the program, use the following command:
$ ./xmlexample data.xml

Monday, February 28, 2011

Mercurial HG HOWTO guide


In this tutorial I will cover the basic commands you will need to use Mercurial.
hg help is your first friend and the Mercurial Wiki is your second.

Help for Command:
$ hg help <command>
or
$ hg <command> --help


Commands to Create, Clone Repository
To make a new repository:
$ hg init <path>

To copy a repository from an existing repository:
$ hg clone <sourcePath>  [<DestinationPath>]

To clone specific branch of the repository:
$ hg clone -r <branchName> <sourcePath> [<destinationPath>]

To copy an existing repository to a new location:
$ hg clone . <newPath>

To get changes from server repository and update working set:
$ hg pull -u

To get changes for specific branch from server repository:
$ hg pull -r <branchName>

To see what changes will come in on a PULL command:
$ hg incoming

To publish changes to specific branch on server repository:
$ hg push -r <branchName>

To see what changes will go out on a PUSH command:
$ hg outgoing


Commands for Add, Remove, Rename, Copy Operation
Add Specific file to repository:
$ hg add <filename1, filename2, ...>

To remove file from repository but don't delete from file system:
$ hg remove <filename1, filename2...>

To remove a file from the repository and delete it from the file system as well:
$ hg remove -f <filename1, filename2,...>

To add all new files and remove all deleted files from repository:
$ hg addremove

To move or rename files in the repository:
$ hg move <oldfilename> <newfilename>

To copy files in the repository:
$ hg copy <oldfilename> <newfilename>


Commands for Commit, Revert Changes
To commit Changes to server repository:
$ hg commit
$ hg push

To commit as a particular user:
$ hg commit -u <username>

To revert all changes in local repository:
$ hg revert -a

To revert specific changes in the local repository:
$ hg revert <filename1, filename2, ..>

Commands to View Changes
To view changes between working set on your local repository and repository tip:
$ hg diff

To view changes between working set on your local repository and specific revision:
$ hg diff -r <revisionNumber>

To view changes between two revisions:
$ hg diff -r <revisionNumber> -r <revisionNumber>

To check what changes are in the working set:
$ hg status

To list all changesets:
$ hg log

Commands to Update Working Set
To change working set to tip:
$ hg pull
$ hg up

To change working set with discarding any current work:
$ hg update -C

To change working set to specific revision:
$ hg update -r <revisionNumber>

To change working set to specific branch:
$ hg update -r <branchName>

To see the list of branches available for merging:
$ hg heads



Commands for Handling tags and Branches
To delete a tag:
$ hg tag --remove <tagtext>

To tag a revision:
$ hg tag [-r <revisionNumber>] <tagtext>

To list tags:
$ hg tags

To create new branch:
$ hg branch <branchName>
$ hg commit -m "New Branch created <branchName>"

To close a branch (run this from within the branch you want to close):
$ hg commit --close-branch -m "Closed branch <branchName>"

To see the list of branches available:
$ hg branches

Settings for the hg diff command, in the .hgrc file in the /home/username folder:

[diff]
git=1
showfunc=1
unified=8


Commands related to Patch:
Generating a patch:
$ hg diff  >  patchfilename

Discarding all local changes:
$ hg revert -a