Monthly Archives: November 2015

How to measure function call duration in python

How to measure the duration of a function call or code block in python

A simple way to measure the duration of a function call in Python is to use a context manager.

Sometimes we run into situations where we need to know the total time taken by a function call. Here is a handy, simple piece of code to measure the duration of a function call (or the time taken by any code block) in Python.

import time

class MeasureDuration:
    def __init__(self):
        self.start = None
        self.end = None

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.end = time.time()
        print "Total time taken: %s" % self.duration()

    def duration(self):
        return str((self.end - self.start) * 1000) + ' milliseconds'

Here is how you apply the above code to get the time taken by a function call:

import time

def foo():
    time.sleep(1)

with MeasureDuration() as m:
    foo()    # We can place multiple calls or an
             # arbitrary code block here to measure


The output would look as follows:

Total time taken: 1001.03282928 milliseconds
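The same idea also works as a decorator; here is a minimal sketch using time.perf_counter(), which is better suited to timing short intervals than time.time() (the decorator name and the example foo are our own):

```python
import time
from functools import wraps

def measure_duration(func):
    # Decorator variant of the context manager above (hypothetical helper).
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print("Total time taken: %s milliseconds" % elapsed_ms)
    return wrapper

@measure_duration
def foo():
    time.sleep(0.05)

foo()
```

The try/finally ensures the duration is printed even when the wrapped function raises.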


Introduction to Riak

Riak is a distributed database designed to deliver maximum data availability by distributing data across multiple servers. Riak is an open-source, distributed key/value database for high availability, fault-tolerance, and near-linear scalability.

Riak Components

Riak is a Key/Value (KV) database, built from the ground up to safely distribute data across a cluster of physical servers, called nodes.

Riak functions similarly to a very large hash space. Depending on your background, you may call it a hashtable, a map, a dictionary, or an object. But the idea is the same: you store a value with an immutable key, and retrieve it later.

1) Key and value

2) Buckets

Key and Value

Key/value is the most basic construct in all of computerdom.


Buckets in Riak provide logical namespaces so that identical keys in different buckets will not conflict.

Buckets are so useful in Riak that all keys must belong to a bucket. There is no global namespace. The true definition of a unique key in Riak is actually bucket + key.

For convenience, we call a bucket/key + value pair an object, sparing ourselves the verbosity of "X key in the Y bucket and its value".

Replication and Partitions

Distributing data across several nodes is how Riak is able to remain highly available, tolerating outages and network partitions. Riak combines two styles of distribution to achieve this: replication and partitions.


Replication is the act of duplicating data across multiple nodes. Riak replicates by default.

The obvious benefit  of replication is that if one node goes down, nodes that contain replicated data remain available to serve requests. In other words, the system remains available with no down time.

The downside with replication is that you are multiplying the amount of storage required for every duplicate. There is also some network overhead with this approach, since values must also be routed to all replicated nodes on write.


A partition is how we divide a set of keys onto separate physical servers. Rather than duplicate values, we pick one server to exclusively host a range of keys, and the other servers to host remaining non-overlapping ranges.

With partitioning, our total capacity can increase without any big expensive hardware, just lots of cheap commodity servers. If we decided to partition our database into 1000 parts across 1000 nodes, we have (hypothetically) reduced the amount of work any particular server must do to 1/1000th.

There’s also another downside. Unlike replication, simple partitioning of data actually decreases uptime.

If one node goes down, that entire partition of data is unavailable. This is why Riak uses both replication and partitioning.


Since partitions allow us to increase capacity, and replication improves availability, Riak combines them. We partition data across multiple nodes, as well as replicate that data into multiple nodes.


The Riak team suggests a minimum of 5 nodes for a Riak cluster, and replicating to 3 nodes (this setting is called n_val, for the number of nodes on which to replicate each object).

The Ring

Riak applies consistent hashing to map objects along the edge of a circle (the ring).

Riak partitions are not mapped alphabetically (as in the examples above); instead, a partition marks a range of key hashes (the SHA-1 function applied to a key). The maximum hash value is 2^160, divided into some number of partitions: 64 partitions by default (the Riak config setting is ring_creation_size).

The Ring is more than just a circular array of hash partitions. It’s also a system of metadata that gets copied to every node. Each node is aware of every other node in the cluster, which nodes own which vnodes, and other system data.

N/R/W Values


With our 5-node cluster, having n_val=3 means values will eventually be replicated to 3 nodes, as we've discussed above. This is the N value. You can set other values (R, W) to equal the n_val number with the shorthand all.

But you may not wish to wait for all nodes to be written to before returning. You can choose to wait for all 3 to finish writing (w=3 or w=all), which means the values are more likely to be consistent. Or you could choose to wait for only 1 complete write (w=1), and allow the remaining 2 nodes to write asynchronously, which returns a response quicker but increases the odds of reading an inconsistent value in the short term. This is the W value.

Reading involves similar tradeoffs. To ensure you have the most recent value, you can read from all 3 nodes containing objects (r=all). Even if only 1 of 3 nodes has the most recent value, we can compare all nodes against each other and choose the latest one, thus ensuring some consistency. Remember when I mentioned that RDBMS databases were write consistent? This is close to read consistency. Just like w=all, however, the read will fail unless 3 nodes are available to be read. Finally, if you only want to quickly read any value, r=1 has low latency, and is likely consistent if w=all. This is the R value.
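These tradeoffs can also be tuned per request: Riak's HTTP API accepts r and w as query parameters. A sketch, reusing the food/favorite example from below and assuming a node on localhost:

```shell
# Wait for all replicas to acknowledge the write (slower, more consistent)
curl -XPUT "http://localhost:8098/riak/food/favorite?w=all" \
  -H "Content-Type:text/plain" \
  -d "pizza"

# Read from a single replica (fast, but possibly stale)
curl "http://localhost:8098/riak/food/favorite?r=1"
```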

Since Riak is a KV database, the most basic commands are setting and getting values. We’ll use the HTTP interface, via curl, but we could just as easily use Erlang, Ruby, Java, or any other supported language. The basic structure of a Riak request is setting a value, reading it, and maybe eventually deleting it. The actions are related to HTTP methods (PUT, GET, POST, DELETE).

PUT /riak/bucket/key
GET /riak/bucket/key
DELETE /riak/bucket/key



The simplest write command in Riak is putting a value. It requires a key, value, and a bucket. In curl, all HTTP methods are prefixed with -X. Putting the value pizza into the key favorite under the food bucket is done like this:

curl -XPUT "http://localhost:8098/riak/food/favorite" \
   -H "Content-Type:text/plain" \
   -d "pizza"

The -d flag denotes that the next string is the value. The preceding -H 'Content-Type:text/plain' line declares the HTTP MIME type of this value as plain text. We could have set any value at all, be it XML or JSON, even an image or a video. Riak does not care what data is uploaded, so long as the object size doesn't get much larger than 4MB.


The next command reads the value pizza under the bucket/key food/favorite.

curl -XGET "http://localhost:8098/riak/food/favorite"

This is the simplest form of read, responding with only the value. Riak contains much more information, which you can access if you read the entire response, including the HTTP header. In curl you can access a full response by way of the -i flag.


Similar to PUT, POST will save a value. But with POST a key is optional. All it requires is a bucket name, and it will generate a key for you.

Let’s add a JSON value to represent a person under the people bucket. The response header is where a POST will return the key it generated for you.

curl -i -XPOST "http://localhost:8098/riak/people" \
    -H "Content-Type:application/json" \
    -d '{"name":"aaron"}'

HTTP/1.1 201 Created
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.9.2 (someone had painted...
Location: /riak/people/DNQGJY0KtcHMirkidasA066yj5V
Date: Wed, 10 Oct 2012 17:55:22 GMT
Content-Type: application/json
Content-Length: 0

You can extract this key from the Location value. Other than not being pretty, this key is treated the same as if you defined your own key via PUT.


You may note that no body was returned with the response. For any kind of write, you can add the returnbody=true parameter to force a value to return, along with value-related headers like X-Riak-Vclock and ETag.

curl -i -XPOST "http://localhost:8098/riak/people?returnbody=true" \
    -H "Content-Type:application/json" \
    -d '{"name":"billy"}'

HTTP/1.1 201 Created
X-Riak-Vclock: a85hYGBgzGDKBVIcypz/fgaUHjmdwZTImMfKkD3z10m+LAA=
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted...
Location: /riak/people/DnetI8GHiBK2yBFOEcj1EhHprss
Link: </riak/people>; rel="up"
Last-Modified: Tue, 23 Oct 2012 04:30:35 GMT
ETag: "7DsE7SEqAtY12d8T1HMkWZ"
Date: Tue, 23 Oct 2012 04:30:35 GMT
Content-Type: application/json
Content-Length: 16

{"name":"billy"}


The final basic operation is deleting keys, which is similar to getting a value, but sending the DELETE method to url/bucket/key.

curl -XDELETE "http://localhost:8098/riak/people/DNQGJY0KtcHMirkidasA066yj5V"

A deleted object in Riak is internally marked as deleted, by writing a marker known as a tombstone. Unless configured otherwise, another process called a reaper will later finish deleting the marked objects.

1. In Riak, a delete is actually a read and a write, and should be considered as such when calculating read/write ratios.

2. Checking for the existence of a key is not enough to know if an object exists. You might be reading a key after it has been deleted, so you should check for tombstone metadata.


Riak provides two kinds of lists. The first lists all buckets in your cluster, while the second lists all keys under a specific bucket. Both of these actions are called in the same way, and come in two varieties.

The following will give us all of our buckets as a JSON object.

curl "http://localhost:8098/riak?buckets=true"

{"buckets":["food"]}

And this will give us all of our keys under the food bucket.

curl "http://localhost:8098/riak/food?keys=true"
{
  "keys": [
    "favorite"
  ]
}

Adjusting N/R/W to our needs

N is the number of total nodes that a value should be replicated to, defaulting to 3. But we can set this n_val to less than the total number of nodes.

Any bucket property, including n_val, can be set by sending a props value as a JSON object to the bucket URL. Let’s set the n_val to 5 nodes, meaning that objects written to cart will be replicated to 5 nodes.

curl -i -XPUT "http://localhost:8098/riak/cart" \
    -H "Content-Type: application/json" \
    -d '{"props":{"n_val":5}}'

Symbolic Values

A quorum is one more than half of all the total replicated nodes (floor(N/2) + 1). This figure is important, since if more than half of all nodes are written to, and more than half of all nodes are read from, then you will get the most recent value (under normal circumstances).
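The quorum figure can be computed directly; a small sketch of floor(N/2) + 1 (the helper name is our own):

```python
def quorum(n_val):
    # A quorum is one more than half of the replicas: floor(N/2) + 1
    return n_val // 2 + 1

print(quorum(3))  # 2
print(quorum(5))  # 3
```

With n_val=3, any two overlapping quorums of 2 nodes must share at least one node, which is why a quorum read sees a quorum write.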


Another utility of buckets is their ability to enforce behaviors on writes by way of hooks. You can attach functions to run either before, or after, a value is committed to a bucket.

Functions that run before a write are called precommit hooks, and have the ability to cancel a write altogether if the incoming data is considered bad in some way. A simple precommit hook is to check if a value exists at all.

I put my custom Erlang code under the Riak installation at ./custom/my_validators.erl.

%% Erlang code
-module(my_validators).
-export([value_exists/1]).

%% Object size must be greater than 0 bytes
value_exists(RiakObject) ->
    Value = riak_object:get_value(RiakObject),
    case erlang:byte_size(Value) of
        0 -> {fail, "A value size greater than 0 is required"};
        _ -> RiakObject
    end.

Then compile the file (you need Erlang installed, which Riak itself requires):

erlc my_validators.erl

Install the file by informing the Riak installation of your new code via app.config (restart Riak).

{add_paths, ["./custom"]}

Then you need to set the Erlang module (my_validators) and function (value_exists) as a JSON value in the bucket's precommit array: {"mod":"my_validators","fun":"value_exists"}.

curl -i -XPUT http://localhost:8098/riak/cart \
-H "Content-Type:application/json" \
-d '{"props":{"precommit":[{"mod":"my_validators","fun":"value_exists"}]}}'

If you try to POST to the cart bucket without a value, you should expect a failure.

curl -XPOST http://localhost:8098/riak/cart \
-H "Content-Type:application/json"
A value size greater than 0 is required


Siblings occur when you have conflicting values, with no clear way for Riak to know which value is correct. Riak will try to resolve these conflicts itself if the allow_mult parameter is configured to false, but you can instead ask Riak to retain siblings to be resolved by the client if you set allow_mult to true.

curl -i -XPUT http://localhost:8098/riak/cart \
-H "Content-Type:application/json" \
-d '{"props":{"allow_mult":true}}'

Siblings arise in a couple cases.

1. A client writes a value using a stale (or missing) vector clock.

2. Two clients write at the same time with the same vector clock value.

We used the second scenario to manufacture a conflict in the previous chapter when we introduced the concept of vector clocks, and we’ll do so again here.

Resolving Conflicts

When we have conflicting writes, we want to resolve them. Since that problem is typically use-case specific, Riak defers it to us, and our application must decide how to proceed.

For our example, let’s merge the values into a single result set, taking the larger count if the item is the same. When done, write the new results back to Riak with the vclock of the multipart object, so Riak knows you’re resolving the conflict, and you’ll get back a new vector clock.

Successive reads will receive a single (merged) result.

curl -i -XPUT http://localhost:8098/riak/cart/fridge-97207?returnbody=true \
-H "Content-Type:application/json" \
-H "X-Riak-Vclock:a85hYGBgzGDKBVIcypz/fgaUHjmTwZTInMfKoG7LdoovCwA=" \
-d '[{"item":"kale","count":10},{"item":"milk","count":1}]'
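The client-side merge itself can be sketched in a few lines of Python; the rule (keep the larger count when the same item appears in more than one sibling) follows the example above, while the function name and sample siblings are our own:

```python
def merge_carts(siblings):
    # Merge sibling cart values, keeping the larger count
    # when the same item appears in more than one sibling.
    merged = {}
    for cart in siblings:
        for entry in cart:
            merged[entry["item"]] = max(entry["count"],
                                        merged.get(entry["item"], 0))
    return [{"item": i, "count": c} for i, c in sorted(merged.items())]

a = [{"item": "kale", "count": 10}, {"item": "milk", "count": 1}]
b = [{"item": "kale", "count": 3}, {"item": "beer", "count": 12}]
print(merge_carts([a, b]))
```

The merged list is what you would then PUT back to Riak along with the vclock, as shown above.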

Will share more on this article soon.

How to implement social Login for Django app

In this article we will get to know how to log in to your Django app using social logins like Facebook and Google.

Start a simple Django project

$ django-admin startproject thirdauth
$ tree thirdauth/
└── thirdauth


Running ./manage.py syncdb and then ./manage.py runserver and navigating to localhost:8000 will show the familiar "It worked!" Django page. Let's put some custom application code in place, so that we can tell whether the current user is authenticated or anonymous.

Show current user’s authentication status

Now, the very small customizations we’ll add are:

Add ‘thirdauth’ to INSTALLED_APPS

Create the template for the home page

Add a view for the home page

Add a URL pointing to the home page view

Relevant portion of settings.py:
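The INSTALLED_APPS change can be sketched as follows (the other entries are the standard Django defaults; only 'thirdauth' is ours):

```python
# settings.py (relevant portion)
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'thirdauth',
)
```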



Template: thirdauth/base.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>{% block title %}Third-party Authentication Tutorial{% endblock %}</title>

    <!-- Bootstrap -->
    <link href="/static/css/bootstrap.min.css" rel="stylesheet">
    <link href="/static/css/bootstrap-theme.min.css" rel="stylesheet">
    <link href="/static/css/fbposter.css" rel="stylesheet">

    <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
    <!--[if lt IE 9]>
      <script src=""></script>
      <script src=""></script>
    <![endif]-->
  </head>
  <body>
    {% block main %}{% endblock %}
    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
    <script src=""></script>
    <!-- Include all compiled plugins (below), or include individual files as needed -->
    <script src="/static/js/bootstrap.min.js"></script>
  </body>
</html>

Template: thirdauth/home.html:

{% extends 'thirdauth/base.html' %}

{% block main %}
  <h1>Third-party authentication demo</h1>

  {% if user and not user.is_anonymous %}
    Hello {{ user.get_full_name|default:user.username }}!
  {% else %}
    I don't think we've met before.
  {% endif %}
{% endblock %}


from django.shortcuts import render_to_response
from django.template.context import RequestContext

def home(request):
    context = RequestContext(request,
                             {'user': request.user})
    return render_to_response('thirdauth/home.html',
                              context_instance=context)



from django.conf.urls import patterns, include, url
from django.contrib import admin

urlpatterns = patterns('',
    url(r'^$', 'thirdauth.views.home', name='home'),
    url(r'^admin/', include(admin.site.urls)),
)


Install Python Social Auth

 pip install python-social-auth

Second, let’s make some modifications to our to include python-social-auth in our project:





Let's update the urls module to include the new group of URLs:

urlpatterns = patterns('',
    url('', include('social.apps.django_app.urls', namespace='social')),
)


And finally, let’s update the database models:

./manage.py syncdb

Add links for logging in and logging out.

Since we’ll be logging in and out multiple times, let’s include django.contrib.auth URLs into our URLs configuration:

urlpatterns = patterns('',
    url('', include('django.contrib.auth.urls', namespace='auth')),
)


Let’s modify our Home page template like this:

{% extends 'thirdauth/base.html' %}

{% block main %}
 <h1>Third-party authentication demo</h1>

   {% if user and not user.is_anonymous %}
       <a>Hello {{ user.get_full_name|default:user.username }}!</a>
       <a href="{% url 'auth:logout' %}?next={{ request.path }}">Logout</a>
   {% else %}
       <a href="{% url 'social:begin' 'facebook' %}?next={{ request.path }}">Login with Facebook</a>
       <a href="{% url 'social:begin' 'google-oauth2' %}?next={{ request.path }}">Login with Google</a>
       <a href="{% url 'social:begin' 'twitter' %}?next={{ request.path }}">Login with Twitter</a>
   {% endif %}
{% endblock %}

For the login and logout links in this template to work correctly, we need to modify a few things. First, let's take care of logout, as it's easier: just add 'request' to the context object that we pass into the template-rendering code.


from django.shortcuts import render_to_response
from django.template.context import RequestContext

def home(request):
    context = RequestContext(request,
                             {'request': request,
                              'user': request.user})
    return render_to_response('thirdauth/home.html',
                              context_instance=context)


For login to work, let's first add a LOGIN_REDIRECT_URL parameter to settings (to prevent the default /accounts/profile from raising a 404):
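A one-line sketch of that setting (the redirect target here is our own choice):

```python
# settings.py
LOGIN_REDIRECT_URL = '/'
```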


Get Client IDs for the social sites.

For all the social networks we are using in this demo, the process of obtaining an OAuth2 client ID (also known as an application ID) is pretty similar. All of them require that your application have a "real" URL, that is, not http://localhost. You can add an entry in your /etc/hosts file mapping your local address to a made-up domain name and use that as the URL of your application; that is good enough for testing. You can change it in the social app settings when it goes into production.


Go to the Facebook developers site and click the green "Create New App" button.

In the settings of the newly-created application, click "Add Platform". From the options provided, choose Web, and fill in the URL of the site.

Copy the App ID and App Secret, and place them into your settings file:
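A sketch of those settings (the setting names are python-social-auth's; the values are placeholders):

```python
# settings.py
SOCIAL_AUTH_FACEBOOK_KEY = 'your-app-id'         # App ID
SOCIAL_AUTH_FACEBOOK_SECRET = 'your-app-secret'  # App Secret
```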


This should be enough to get your app to log in with Facebook! Try logging in and out; you should get redirected between your app and the FB OAuth2 service, and a new record in the User social auths table will be created, along with a new User record pointing to it.


Go to the Google Developers Console and create a new application.

Under APIs and Auth > Credentials, create a new Client ID.

Make sure to specify the right callback URL:

Copy the values into your settings file:
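A sketch of those settings (names per python-social-auth; values are placeholders):

```python
# settings.py
SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = 'your-client-id'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = 'your-client-secret'
```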



How To Mount S3 Bucket In Linux Using S3FS

Here is a simple step-by-step procedure to mount an S3 bucket on Linux.

Step 1: Remove Existing Packages

# yum remove fuse fuse-s3fs


Step 2: Install Required Packages

# yum install gcc libstdc++-devel gcc-c++ curl-devel libxml2-devel openssl-devel mailcap


Step 3: Download and Compile Latest Fuse

# cd /usr/src/
# wget
# tar xzf fuse-2.9.3.tar.gz
# cd fuse-2.9.3
# ./configure --prefix=/usr/local
# make && make install
# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# vim /etc/
Add the line /usr/local/lib here; if you don't, you will get the error "s3fs: error while loading shared libraries: cannot open shared object file: No such file or directory".

# ldconfig
# modprobe fuse


Step 4: Download and Compile Latest S3FS

# cd /usr/src/
# wget
# tar xzf s3fs-1.74.tar.gz
# cd s3fs-1.74
# ./configure --prefix=/usr/local
# make && make install


Step 5: Setup Access Key

Put your AWS credentials into ~/.passwd-s3fs in the form ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholders shown), then restrict its permissions:

# echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
# chmod 600 ~/.passwd-s3fs


Step 6: Mount S3 Bucket
# mkdir /tmp/cache
# mkdir /s3mnt
# chmod 777 /tmp/cache /s3mnt

# s3fs -o use_cache=/tmp/cache mydbbackup /s3mnt
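To remount the bucket automatically at boot, an /etc/fstab entry can be added; a sketch using the s3fs 1.x fstab syntax and the bucket/paths from the example above:

```shell
# Append an fstab entry so the bucket mounts at boot (run as root)
echo "s3fs#mydbbackup /s3mnt fuse use_cache=/tmp/cache,allow_other 0 0" >> /etc/fstab
```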


To unmount

# fusermount -u /s3mnt



About SCM Source Code Management System

GIT, the Source Code Management (SCM) system

Many people use a version control system without any idea of why they are using it; since the team uses it, they use it too to get work done.

Why do we need a version control system? Here are a few requirements from which the SCM idea comes.

See the project directory without a version control system.

Whenever you want a snapshot of your source code at a particular moment, you have to copy your source code directory and name it after that moment. Likewise you may do it with tarballs to save space. It's horrible, isn't it? You will end up with endless tarballs or directories per project.

Just think: another developer asks you about the release code from some day x. Then you have to send him the whole project directory if every file changed and he doesn't have those changes. Say you did some performance fixes that took 70% refactoring, and it's an experiment; then you should back up your project directory before starting those fixes, and you also need a directory/tarball when you complete them, because you cannot undo 70% of your changes if you want the last stable code from before the refactoring. If the changes are not stable and will take a long time to stabilize, you would replace production with the directory you copied before the performance fixes.

If we go on like this, you will end up devoting the whole hard disk to a single project. Even then, you will not dare to go find some particular directory (if you think of a directory as a snapshot or save point).

Now come to team collaboration: there is no good way to keep the team's code changes communicated and up to date over time without SCM. Sometimes your team will have to wait for fixes being implemented by others before getting the source code. In this case one person can work faster than ten, because everybody else ends up doing nothing but exchanging changed files, looking at differences, asking others about changes, and finally preparing a single file by combining the working changes.

All of this is the story when we don't have a source code management system; imagine development like that.

That's why version control systems come into the picture.

Version control, also known as revision control or source control management, is the management of changes to files, programs and other information. A version control system allows us to track incremental changes in files or content, and provides the ability for many developers to work on a single file or project concurrently.

The project directory with SCM (GIT)

There are a lot of SCM tools out there, both open source and commercial. Here are a few:

  • CVS
  • SVN
  • BitKeeper
  • GIT
  • Mercurial

There are lots of tools available other than those I have mentioned; you can find them in a list of version control systems.

We can classify version control systems based on their model: centralized and distributed. You may call the centralized one the client/server model. As there are many version control systems, no single one meets all requirements.

Here are few characteristics of version control systems, which will vary around each version control system.

  • User interface
  • Performance
  • Memory management
  • Learning curve
  • Maintenance

Apart from these, there are two other things to consider: open source vs. proprietary, and centralized vs. distributed.

Let's have a look at GIT. GIT is a distributed revision control system especially designed for speed. It was designed by Linus Torvalds in early 2005 to manage the Linux kernel source code, as a replacement for BitKeeper; the Linux kernel was managed with BitKeeper before GIT was invented. Its initial release was 7 April 2005. GIT is really helpful for open source projects, where it supports merges much better than any other SCM.

Here is how the GIT distributed (decentralized) model looks.

In this distributed model we have the full history on every host, so we can think of each machine as a complete git repository. We can restore any repository from any other repository in the network after a hardware failure, so the git history is redundant here. You may call a distributed version control system a DVCS. In a DVCS there may be many central repositories.

And the centralized model looks like this:

In the centralized model the whole history resides only in the central repository. We need to be connected via the network to commit our changes, unlike distributed git. It requires additional maintenance, and you need to take source code backups to recover the central repository from any hardware failure.

First time with GIT:

Every git repository is nothing but a directory, either on a server or locally on your machine.

Creating a GIT repository is very simple: go to the directory that you would like to make your git repository. To make sure you are in that directory, verify it with the command pwd.

Here I would like to make my PROJECT directory a GIT repository. It's simple, with the following command:

$ git init

One more thing: if your directory is a git repository, you will have a .git directory inside.

GIT tracks changes and everything else with this directory.

We save our changes within commits. You may think of each commit as a save point; you can go back to that save point whenever you want. You can tag commits with your version numbers, like v1.0, v2.0, v3.2 and so on. You may call a commit a revision or a version as well.

GIT uses a unique SHA-1 hash to identify each commit, so each revision can be described by a 40-character hexadecimal string. Instead of mentioning this long commit hash in releases and executables, git users tag the specific commit with a version number, so we can identify and fetch the source for that version from the incremental git source tree.
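For example, tagging a commit with a version number might look like this (the abbreviated hash shown is hypothetical):

```shell
git tag -a v1.0 1a2b3c4 -m "release v1.0"   # attach an annotated tag to a commit
git show v1.0                               # resolves the tag back to the full 40-character SHA-1
```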

For each commit we give a descriptive message, called the commit message. Here is what a commit looks like (this one is from the git source itself).

Here we see four things: commit hash, author, date and commit message. In each commit, git stores the author's name and email, so we need to configure git with our name and email (and other settings if required) to have them recorded in the commits we make. There are global and repository-wide settings/options.

Here is how you can configure settings:

$ git config --global user.name "Your Name"
$ git config --global user.email "you@example.com"

If you use --global you will have global settings configured; those take effect for every repository. Repository settings take precedence over global ones. Drop --global from the command to configure repository-wide options. Make sure you are inside the repository to configure repository-wide options; you don't need to be in a repository to set global options.

To view git's configured settings, try the command

$ git config --list

If you use the command-line option --global, it will show all configured global-level options.

$ git config --list --global

There are many more options to customize git behavior, but we use only a few of them often.

GIT First Commit:

There are three states in the git commit procedure. Your file resides in one of the following states:

  • Modified
  • Staged
  • Committed

Modified means you have changed or added a file and have not yet stored it in the git database. Staged means you have marked the changes in the current version to go into the next snapshot, that is, the next commit. Committed means you have saved your modifications into the database. The middle state is optional and exists to avoid accidental commits; we can skip it if required, but that's not recommended.

Adding new files and modifying committed files both fall under the modified state. If you don't add your new files, they will be treated as untracked files.

To add or stage files use the following command.

$ git add  file_name

git add is a multi-purpose command: we use it both to track new files and to stage changes.

Now let's add the first file into our repository.

GIT won't track the changes to newly added files unless we tell it to, that is, unless we track the file. Newly added files are untracked, and git won't show modifications to them.

The command git status shows the status inside the repository. It gives an idea about three things: untracked files, modified files (modified files are tracked files), and staged files.

Here is a screenshot where the new file is shown as untracked.

Now let's add this file into the git database.

The status after we added that file:

In the above screenshot, the command git add staged the given file. The files below the section "Changes to be committed" are staged files; that's the second state we discussed. Now we have to commit the staged file, and git commit will do that. It takes the argument -m along with the commit message; if we don't give a commit message with -m, an editor opens for us to enter one.

Finally, to see our history of previous commits, we have the command

$ git log
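The whole first-commit workflow described above can be sketched end to end (the directory, file and identity values below are placeholders):

```shell
# A minimal end-to-end run of the commands described above
mkdir -p /tmp/PROJECT && cd /tmp/PROJECT
git init                                  # turn the directory into a git repository (creates .git)
git config user.name "Your Name"          # repository-wide settings; add --global for all repos
git config user.email "you@example.com"
echo "hello" > README                     # a new file: untracked
git status                                # README shows up as untracked
git add README                            # stage the file
git commit -m "first commit"              # save the snapshot with a commit message
git log                                   # history: hash, author, date, message
```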

How to secure yourself with GPG

Generate your key

  1. Run following command in your shell,
    gpg --gen-key
  2. Now the program will ask you to choose a couple of options; use the following preferences
  3.  Please select what kind of key you want: 1    RSA and RSA (default)
  4.  What keysize do you want? (2048) 4096
  5.  Key is valid for? (0) 0
  6. Is this correct? (y/N) y
  7. Now enter name, email and comment message.
  8. Change (N)ame, (C)omment, (E)-mail or (O)kay/(Q)uit? o
  9. Finally, enter a passphrase to protect your secret key.

Edit your key

We can later edit the key to use other options.
e.g. Let's set our key to use stronger hashes.

  1. Edit the key using the following command,
     gpg --edit-key <key-id>
  2. Now set hash preferences as follows,
    gpg> setpref SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP
  3.  Really update the preferences? (y/N) y
  4. Enter your passphrase
  5. Save the new preferences with the command,
    gpg> save

Make your key available

There are 2 ways to make your key available to other users.

  1. Give it to them manually. Use the following command,
    gpg --armor --export <key-id>

    You will get your public key. Copy and paste it and send it to the other user.

  2. Upload it to a key server. You can again do this in 2 ways. One is using the forms available on the server. For the second way, first grab your key id (e.g. from the output of gpg --list-keys) and then upload it to a keyserver,
gpg --keyserver <keyserver> --send-keys <key-id>


Importing other keys

  1. Import other users' keys. We can import keys of other users in multiple ways. From a text file – if someone sends you a text file containing their public key, import it as,
     gpg --import <pub_key_file>

    From a key server – there are some popular key servers which host public keys.
    One such server is ``. There you can search for a particular user's key as follows,

    gpg --keyserver <keyserver> --search-keys <string>
  2. Validate the key. The easy way to validate a person's identity is to match the fingerprint of the key.
    gpg --fingerprint <key-id>
  3. Sign the imported key as,
    gpg --sign-key <key-id>
  4. Optionally, you can send back the signed key.

Using gpg key

  • To encrypt and sign a message using your key, use the following command,
    gpg --encrypt --sign --armor -r <recipient> <filename>
  • To decrypt a file,
     gpg <filename>

Creating a revocation certificate

There is always a possibility that your master key-pair may get lost (and maybe stolen, if you are unfortunate). If this happens, you must tell other people not to use your public key. This can be done using a revocation certificate. Generate a revocation certificate using the following command,

gpg --output <key-id>.gpg-revocation-certificate --gen-revoke <key-id>

Store it somewhere safe, separately from your master key-pair.

Some useful commands

  • List available keys,
    gpg --list-keys
  • Update key information,
     gpg --refresh-keys




Coloring shell output

Using colors, we can enhance the output of a shell script. Run the following script in your terminal and see the magic.


#!/bin/bash

# various color codes (ANSI escape sequences)
color_custom='\033[1;36m'   # bright cyan
color_no='\033[0m'          # reset to the default color

# ascii art
printf "${color_custom}"
cat <<'EOF'
   m""                       #
 mm#mm   mmm    mmm          #mmm    mmm    m mm
   #    #" "#  #" "#         #" "#  "   #   #"  "
   #    #   #  #   #   """   #   #  m"""#   #
   #    "#m#"  "#m#"         ##m#"  "mm"#   #
EOF
printf "${color_no}"
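A minimal, self-contained sketch of the same idea, defining a few common ANSI foreground colors as variables (the variable names are just examples):

```shell
# common ANSI foreground colors (variable names are examples)
red='\033[0;31m'
green='\033[0;32m'
yellow='\033[0;33m'
reset='\033[0m'    # switch back to the terminal's default color

printf "${green}passed${reset}  ${yellow}warning${reset}  ${red}failed${reset}\n"
```

The `\033[` prefix starts an ANSI escape sequence; codes 31–33 select the foreground color, and `0m` resets it.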


10 MySQL best practices

When we design a database schema, it's recommended to follow best practices to use memory in an optimal way and to gain performance. Following are 10 MySQL best practices.

Always try to avoid redundancy

We can say a database schema design is the best one if it has no redundancy. If you want to avoid redundancy in your schema, normalize it after you design it.

Normalize tables

Database normalization is the process of organizing columns and tables in a relational database to avoid redundancy. Find more about normalization here

Use (unique) indexes on foreign key columns

We use foreign keys for data integrity and to represent relations. Sometimes these are the result of the process called normalization. When tables are mutually related, we obviously can't query the data without using joins, and a (unique) index on the foreign key column makes those joins fast.

Avoid using varchar for fixed width column instead use char

Choose the right one: CHAR vs VARCHAR. CHAR(15) will always allocate space for 15 characters, but VARCHAR(15) will allocate only the space required by the number of characters you actually store, plus a byte or two to record the length. For fixed-width columns, CHAR avoids that per-row overhead.

Always use explain  to investigate your queries and learn about how mysql is using indexes

The EXPLAIN statement is very handy in MySQL. This statement will give you an analyzed report of how a query is executed, which you can use to improve your queries and schema. It works on both SELECT and UPDATE; if you try it on an UPDATE query, MySQL will treat that query as a SELECT and give you the report.

Use right data type

Choosing the right data type for your column will help you get rid of many bottlenecks. The MySQL query optimizer will choose indexes based on the data type you used in the query and the column datatype. There are many MySQL datatypes.

Use ENUM if required  

ENUM is one of the datatypes MySQL supports. By using it you can save a lot of memory if you have a predefined and predictable set of values in your database column.

Don’t use too many indexes, it will slow down the inserts and updates. Only use the indexes on selected column

As you know, indexes help you query data much faster than expected. It's very tempting to use indexes on unintended columns. Choosing an index on every column, or on unnecessary columns, will get you slow inserts and updates. You need to think of an index as a separate table, where MySQL needs to do an extra write into that separate table/file for every insert. It's extra overhead.

Tune  mysql default parameters

MySQL comes with default parameters. These parameters are not suitable if you want to use MySQL on a dedicated machine or in production. You have to tune these parameters; formally, we call them system variables.

Always create an account with associated hosts instead of the wildcard %

MySQL manages users along with associated hosts. I.e., the user root@localhost can't log in to MySQL from anywhere except localhost, but root@% can log in from everywhere. Using only associated hosts will mitigate many attacks that are in your blind spot.
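As a sketch, a host-restricted account is created like this; appuser, 10.0.0.5 and appdb are hypothetical example values, and the statements are fed to the mysql client:

```shell
# create a host-restricted account instead of 'appuser'@'%'
# (appuser, 10.0.0.5 and appdb are hypothetical example values)
mysql -u root -p <<'EOF'
CREATE USER 'appuser'@'10.0.0.5' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appuser'@'10.0.0.5';
EOF
```

With this, the credentials only work from the single host 10.0.0.5, instead of from everywhere.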


How to implement Websocket server using Twisted.

HTTP is a request-response type, one-way protocol. For web applications where continuous data has to be sent, websocket was introduced. Unlike HTTP, websocket provides full-duplex communication. Websocket, which can be seen as an upgraded version of HTTP, is standardized to be used over TCP like HTTP. In this article I will share my experience in implementing websocket with Twisted, a Python framework for the internet. If you are familiar with websocket, then you can skip to the twisted.web part, or else below is a little introduction to websocket.


To initiate communication using websocket, a handshake needs to be done between client and server. This procedure is backward compatible with HTTP's request-response structure. First the client sends a handshake request to the server, which looks like:

GET /chat HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13

Sending the Upgrade header in the request with the value websocket tells the server that the client wants websocket communication. Now if the server supports websocket with the specified sub-protocols (Sec-WebSocket-Protocol) and version (Sec-WebSocket-Version), it will send an adequate response. A possible response could be:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Protocol: chat

In the response, the server sends the 101 Switching Protocols code and Sec-WebSocket-Accept, whose value is calculated from Sec-WebSocket-Key; you can find more information here. After a successful handshake, either peer can send data to the other, which must be encoded in the binary format described in the websocket RFC. A high-level overview of the framing is given in the following figure.
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+
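The Sec-WebSocket-Accept computation described above can be checked from the shell, using the example key from the handshake (this assumes openssl and base64 are available):

```shell
# Sec-WebSocket-Accept = base64( SHA1( key + magic GUID ) ), per RFC 6455
key="dGhlIHNhbXBsZSBub25jZQ=="
guid="258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
printf '%s%s' "$key" "$guid" | openssl dgst -sha1 -binary | base64
# prints: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This matches the Sec-WebSocket-Accept value shown in the sample response above.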


As in a normal twisted.web server, at the TCP level we have the HTTPChannel class (a child class of twisted.internet.protocol.Protocol) and the server.Site class (which is a child class of twisted.internet.protocol.ServerFactory). Also, a Resource instance needs to be passed to the Site class, so that it can serve GET requests.

Whenever data is received, the dataReceived method of HTTPChannel is invoked. Now if the data starts with 'GET', we let HTTPChannel handle it, which will invoke the render method of the root resource provided to the Site class. Render will set the 101 response code and compute the websocket response key. During the handshake do not send any raw data, because once the handshake has succeeded it would be interpreted as framed binary data. Even if you want to send something, frame it and send.

If the data doesn't start with 'GET', we can assume it is a binary encoded message. This message can be decoded using the frame module (written by Morgan Philips), a very simple data-framing module following the WebSocket specification. Data sent to the client by the server should be unmasked, as per the websocket specification.

Below is code example of an echo websocket server.

import base64, hashlib

import frame  # simple WebSocket data-framing module by Morgan Philips (external)

from twisted.internet import reactor
from twisted.web import http, resource
from twisted.web.server import Site

class EchoResource(resource.Resource):
    isLeaf = 1

    def render(self, request):

        # Processing the key as per RFC 6455
        key = request.getHeader('Sec-WebSocket-Key')
        h = hashlib.sha1(key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11")
        request.setHeader('Sec-WebSocket-Accept', base64.b64encode(h.digest()))

        # setting response code and headers for the handshake
        request.setResponseCode(101)
        request.setHeader('Upgrade', 'websocket')
        request.setHeader('Connection', 'Upgrade')
        return ''

class EchoChannel(http.HTTPChannel):

    def dataReceived(self, data):

        if data.startswith('GET'):
            # Handshake: this will invoke the render method of the resource
            http.HTTPChannel.dataReceived(self, data)
        else:
            # decoding data using the frame module written by Morgan Philips
            f = frame.Frame(bytearray(data))
            received_message = f.message()
            print received_message

            # Sending back the received message, unmasked.
            msg = frame.Frame.buildMessage(received_message, mask=False)
            self.transport.write(str(msg))

class EchoSite(Site):
    def buildProtocol(self, addr):
        channel = EchoChannel()
        channel.requestFactory = self.requestFactory
        channel.site = self
        return channel

site = EchoSite(EchoResource())

if __name__ == '__main__':
    reactor.listenTCP(8080, site)
    reactor.run()


How to install Asterisk on CentOS

In this installment of our How To, we are going to go over how to install Asterisk on CentOS. For this we are going to use Asterisk 13 and the CentOS 7 minimal version, but the instructions will mostly be similar for other versions of Asterisk and CentOS.

As a first step you need to download the latest Asterisk onto your machine. For this you need the wget tool. As we are using the minimal flavor of CentOS, even the wget tool is not available on a fresh install. Run the following command to install wget.

yum install wget

Once wget is installed successfully, run the following command to download asterisk.

wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-13-current.tar.gz
Extract downloaded asterisk tar ball

tar -zxvf asterisk-13-current.tar.gz
cd asterisk-13.6.0

Install the following dependencies

yum install gcc
yum install gcc-c++
yum install ncurses-devel
yum install uuid-devel libuuid-devel
yum install jansson-devel
yum install libxml2-devel
yum install sqlite-devel

Once all the above dependencies are installed, you can run the following command to enable or disable modules of your choice.

make menuselect

After you are done with the menuselect screen, run the following commands to compile and install asterisk

make
make install
make samples

That's it, now you have asterisk installed successfully on your machine. Run the following command to start asterisk

asterisk -vvvvgc

Now you should see the asterisk console saying “Asterisk Ready”. If instead you encounter the following error,

/usr/bin/asterisk: error while loading shared libraries: cannot open shared object file: no such file or directory.

Don’t worry, just run the following command and start asterisk again after that.