Domain Registry vs Registrar vs Registrant

There are three different roles involved in domain name registration and management: the registry, the registrar, and the registrant. The following sections explain these roles and how they work together.


A domain name registry is an organization that manages top-level domain names. Registries create domain name extensions, set the rules for their domains, and work with registrars to sell domain names to the public. For example, VeriSign manages the registration of .com domain names and their domain name system (DNS).

In short, the registry maintains the authoritative database for its top-level domains, including the zone files.

A few registries that manage popular top-level domains:



A registrar is an ICANN-accredited organization that sells domain names to the public, that is, to registrants like you. In other words, the registrar works with the registry to sell domains to the public.

A few popular registrars are:


A registrant is the person, organization, or company who registers a domain name. Registrants can manage their domain name's settings through their registrar. When changes are made to the domain, the registrar sends the information to the registry to be updated and saved in the registry's database. You become a registrant when you register a domain name!

Registry Registrar and Registrant Hierarchy Illustrated

You can find registry details for top-level domains in IANA's Root Zone Database.






List of AWS regions and availability zones

List of AWS Regions

This is the complete list of AWS regions available at the time of writing.

S.No Code Name
1 us-east-1 US East (N. Virginia)
2 us-west-2 US West (Oregon)
3 us-west-1 US West (N. California)
4 eu-west-1 EU (Ireland)
5 eu-central-1 EU (Frankfurt)
6 ap-southeast-1 Asia Pacific (Singapore)
7 ap-northeast-1 Asia Pacific (Tokyo)
8 ap-southeast-2 Asia Pacific (Sydney)
9 ap-northeast-2 Asia Pacific (Seoul)
10 sa-east-1 South America (São Paulo)
11 cn-north-1 China (Beijing)
12 ap-south-1 India (Mumbai)

AWS upcoming regions


S.No Code Name
1 N/A UK

List of AWS regions and their availability zones

S.No AWS region code AWS region name Number Of Availability Zones Availability Zone Names
1 us-east-1 Virginia 4 us-east-1a
2 us-west-2 Oregon 3 us-west-2a
3 us-west-1 N. California 3 us-west-1a
4 eu-west-1 Ireland 3 eu-west-1a
5 eu-central-1 Frankfurt 2 eu-central-1a
6 ap-southeast-1 Singapore 2 ap-southeast-1a
7 ap-southeast-2 Sydney 3 ap-southeast-2a
8 ap-northeast-1 Tokyo 2 ap-northeast-1a
9 ap-northeast-2 Seoul N/A N/A
10 sa-east-1 Sao Paulo 3 sa-east-1a
11 cn-north-1 China (Beijing) N/A N/A
12 ap-south-1 India (Mumbai) 2 ap-south-1a

If you are familiar with the AWS CLI, you can always check regions and availability zones using the following commands.

Find regions using AWS CLI

Command:  aws ec2 describe-regions


Find AWS availability zones using AWS CLI

You can find the availability zones of a particular region using the following command:

Command:  aws ec2 describe-availability-zones --region us-east-1

There are two other commands, ec2-describe-regions and ec2-describe-availability-zones, which are also helpful to retrieve regions and availability zones respectively. They are available in the package ec2-api-tools.

You can check the availability zones of your current region in the AWS console: on the EC2 dashboard, under Service Health, under Availability Zones.


AWS Regions on a Google Map

You can find the AWS region locations on a Google map here (under development). You are invited to improve it.



Note: AWS frequently adds regions and availability zones. Please also consider checking them in the AWS console.




How to list IP addresses of all connected machines in local network

If you want to list the IP addresses of connected machines in your local network, you can do it by logging into your router if you have the password; otherwise you can check connected client IPs on the command line using either of the following two commands.


Let's see how you can list the IPs of connected systems in your network using these commands.

List IPs using the command nmap

The command nmap is used to scan networks. It is most widely used as a port scanner, but you can do many things with it:

Auditing the security of a device or firewall by identifying the network connections that can be made to it

Identifying open ports on a target host

Network maintenance and asset management.

Generating traffic to hosts on a network, response analysis and response time measurement.

Find and exploit vulnerabilities in a network

OK, let's come to the main thing: listing the IPs of connected systems in the network.

nmap -sP 192.168.1.*

Here you have to specify the IP range or subnet to scan to get the list of connected hosts. The option -sP means ping scan (no port scan); depending on your version of nmap, this option may instead be called -sn.

Warning: Do not perform scans on a network without proper authorization.

List IPs using the command arp

The command arp is named after the Address Resolution Protocol. Most Linux boxes ship with it.

Ping your network's broadcast address, e.g. ping -b 192.168.1.255 if your IP is 192.168.1.x or something else in the same network. After that, run arp -a to list the computing devices connected to the network.

Note: You can find your broadcast IP in the ifconfig output for the corresponding network interface.

You can use the following command to list connected clients after you ping your broadcast IP:

arp -a

This command will list most of the IPs found, but it is not always accurate. Sometimes routers isolate wired and wireless clients from each other, so machines connected via cable may not be visible to machines connected via Wi-Fi.
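The same idea can also be sketched in a few lines of Python. This is a minimal, hypothetical ping sweep (the 192.168.1.0/24 prefix is an assumption; adjust it to your network, and note the ping flags are the Linux iputils ones), not a replacement for nmap:

```python
# Minimal ping-sweep sketch: ping every host in a /24 concurrently and
# collect the ones that answered. Assumes a Unix-like system with `ping`.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def is_up(ip, timeout=1):
    """True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def sweep(prefix="192.168.1."):
    """Return the list of responding IPs in prefix.1 .. prefix.254."""
    ips = [prefix + str(i) for i in range(1, 255)]
    with ThreadPoolExecutor(max_workers=64) as pool:
        return [ip for ip, up in zip(ips, pool.map(is_up, ips)) if up]
```

Calling sweep() returns something like a list of the live IPs on your subnet; like arp -a, it will miss hosts that drop ICMP.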




GIT – How to push code to multiple remotes simultaneously

Well, if you are looking for a way to push to multiple remotes at the same time, you are probably already familiar enough with git. In git, remote repositories are called remotes, and pushing changes to remotes is part of the usual development cycle.

Sometimes you may need to push changes to multiple remotes, like GitHub and Bitbucket. To do so, you can follow the instructions given below.

List your existing remotes

You can list all available remotes using the following command:

$ git remote -v

If you don't already have any (other) remote configured, you can add one using git remote:

$ git remote add  remote_name  remote_url

$ git remote add github <github_repo_url>

Usually, we push changes by addressing the remote name, by default origin, with something like git push origin master. You can also configure a group of multiple remotes under one name, and push to all of those remotes by referring to that name.

You can add multiple remotes using either the git remote or git config command, or even by editing the config file directly.

Since git can group multiple remotes, you can follow any one of the following ways to configure multiple remotes to push to simultaneously (there is no need to do all of them).

Add remotes using git remote

You can assign multiple remote URLs to a single remote using git remote.

If you don't already have a remote named all, create it using git remote add, then use git remote set-url --add to add a new URL to the existing remote.

git remote set-url --add --push all <remote url>
git remote set-url --add --push all <another repo remote url>
git remote set-url --add all <another>

You can cross-check the newly added remotes using git remote -v.
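Putting the above together, here is a throwaway sketch you can run in a scratch directory; the URLs are placeholders, not real repositories:

```shell
# Create a scratch repo and group two push URLs under the single remote "all".
cd "$(mktemp -d)"
git init -q demo && cd demo

git remote add all git@example.com:user/repo.git                  # fetch + first push URL
git remote set-url --add --push all git@example.com:user/repo.git
git remote set-url --add --push all git@github.com:user/repo.git  # second push URL

git remote -v   # shows one fetch URL and two push URLs for "all"
```

A later git push all master would then attempt to push to both URLs in turn.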


Group multiple remotes using git config

The command git config is used to configure git parameters. It edits the .git/config file according to the given input.

git config --add remote.all.url <remote url>
git config --add remote.all.url ssh://user@host/repos/repo.git

Note: without the --add option, the command will replace the existing remote URL. You can verify the updated configuration in .git/config.


You can also edit the file .git/config directly to add a remote with multiple remote URLs, if you are familiar with the configuration format.

Now you can push to multiple remotes simultaneously by referring to the remote name which has multiple remote URLs assigned:

git push all master

You can always push to multiple remote repos without grouping them, using plain shell syntax:

git push server master && git push github master 

How to check system information on linux command line

There are several commands on Linux to fetch system information like the number of CPUs, partitions and their types, and hardware details. In this article we will see a few commands which are helpful to fetch such information.

Command lscpu

The command lscpu gives a brief summary about CPUs and cores, that is, the total number of CPUs and the number of cores per CPU available on the machine. The command output describes CPUs, cores, and related details.


Command lspci

The command lspci will list all PCI devices available and recognized by the kernel on the machine.

This command is useful to find the available devices on a machine and their corresponding vendors.

Command procinfo

This command displays system statistics gathered from /proc; that is, you will get a quick overview of the /proc file system.

This command is useful to find:

Users, boot up time, load, swap, memory and interrupt etc.


Command lsdev

The command lsdev displays information about installed hardware. It also gathers some information from the /proc file system and gives a quick overview of which hardware uses which I/O addresses and which IRQ and DMA channels.

Command lsblk

This command is useful to list the available block devices. To list them, just type lsblk.



Command lsusb

This command will list the USB buses and the devices connected to them on the machine.


Command lshw

The command lshw will list the complete hardware information. It produces quite a lot of output. Try it on your Linux box to get the complete hardware picture.


Command df -ah

This command displays information about partitions: used space, available space, and file system type. Type the following command:

df -ah

The options -a and -h list all file systems and display usage in human-readable format, respectively.

Command mount

The command mount is used both to display information about mounted volumes and to mount a volume at a specified location.

To display mounted volumes and their mount points, just type mount with no arguments.


As mentioned, this command can also be used to mount partitions:

mount /dev/sda2  /location/to/mount

Where /dev/sda2 is the partition to mount at the specified location.

File /proc/interrupts

The file /proc/interrupts contains the list of interrupts in use on the system, per CPU. It is nothing more than a file with information about interrupts.


How does bash fork bomb :(){ :|:& };: work

A fork bomb is a kind of DoS (denial-of-service) attack on a system. It attacks the system by consuming all its resources. It makes use of the fork operation, which is why it is called a fork bomb.

Here is the bash version of fork bomb

:(){ :|:& };:

It is nothing more than a bash function definition plus a call that invokes it recursively to consume all system resources. The same fork bomb can be written in other programming languages, like Python. It creates an endless number of processes to consume all system resources.

WARNING! These examples may crash your system. Please don't try them on production machines or on any machine you care about. If you want to test one, execute it inside a virtual machine.


Once the fork bomb is successfully executed on a system, it may not be possible to bring the system back into a normal state without restarting it.

Break down of bash fork bomb :(){ :|:& };:

Bash fork bomb

:(){ :|:& };:

Here is the elaborated version of the bash fork bomb:


:()  –  is a formal bash function definition. That is, we create a function named : (colon). Inside the function body, the same function is called recursively, twice.

:|:   –  the function calls itself and pipes its output into another call of itself using |. This is the programming technique of recursion: a function calling itself.

&   –   puts the process execution into the background.

;   –   terminates the function definition.

:   –   calls the defined function (:). This initiates the fork bomb; without this final call, the fork bomb won't start.

Here is a human-readable version of the bash fork bomb, which probably makes more sense:

fbomb(){ fbomb | fbomb &}; fbomb


This fork bomb is sometimes used by system administrators to test per-user process limits. You can save the system from crashing by setting process limits. These limits can be changed using the command ulimit, or configured via /etc/security/limits.conf and PAM. Process limits can also be configured per user.
If you set process limits on a process, all of its child processes will inherit those limits.

A well-configured system should not go down when a fork bomb is initiated.

To query all existing limits, try the following command:

ulimit -a

You can try executing the fork bomb after reducing these limits. Once the limits are hit, the kernel imposes a restriction on creating new processes; this is how you can protect the system from crashing under this kind of DoS attack.
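A safe way to see the limit mechanism at work, without launching the bomb, is to lower the limit in a subshell; the change dies with the subshell and the parent shell is unaffected:

```shell
# Lower the per-user process limit in a subshell only.
bash -c '
  ulimit -u 100   # soft cap: at most 100 processes for this user
  ulimit -u       # prints the new limit: 100
'
```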

You can run the fork bomb relatively safely by first reducing the maximum user process limit:

ulimit -u 100


As there is now a restriction on creating endless processes, your system won't go down, and you won't need to restart it to resume normal operation.


How to Interactively discard selected changes in git

Question: How do you interactively discard selected changes in git?
git offers a way to discard selected changes interactively. Here is how you do it.

To checkout (discard) selected changes from all modified files:

git checkout -p  

If you want to discard selected changes from a specific file, give that file as an argument:

git checkout -p  file_name

With this command, git breaks each file down into chunks called hunks. After you execute it, you will be prompted with the following question and several options to choose from:

Discard this hunk from worktree [y,n,q,a,d,/,e,?]?

Here is the description of each option,

y – discard this hunk from worktree
n – do not discard this hunk from worktree
q – quit; do not discard this hunk nor any of the remaining ones
a – discard this hunk and all later hunks in the file
d – do not discard this hunk nor any of the later hunks in the file
g – select a hunk to go to
/ – search for a hunk matching the given regex
j – leave this hunk undecided, see next undecided hunk
J – leave this hunk undecided, see next hunk
k – leave this hunk undecided, see previous undecided hunk
K – leave this hunk undecided, see previous hunk
s – split the current hunk into smaller hunks
e – manually edit the current hunk
? – print help

You can also unstage the selected changes from all staged files or from specific file.

To unstage selected changes from all staged files use following command

git reset HEAD -p

To unstage changes from specific file give that file as an argument.

git reset HEAD -p  file_name

Here also you will be prompted to choose one of the options to unstage each hunk.


How to create read only attributes and restrict setting attribute values on object in python

There are different ways to prevent setting attributes and make attributes read-only on an object in Python. We can use any of the following ways to make attributes read-only:

  1. Property Descriptor
  2. Using descriptor methods __get__ and __set__
  3. Using __slots__ (only restricts setting arbitrary attributes)

Property Descriptor

Python ships with a built-in called property. We can use it to customize the way attributes are accessed and assigned.

First, let me explain property before showing how it is useful for making an attribute read-only.

The typical signature of the function property is

 property([fget[, fset[, fdel[, doc]]]])

As you can see, this function takes four arguments:

fget is a function for getting an attribute value. fset is a function for setting an attribute value. fdel is a function for deleting an attribute value. And doc creates a docstring for the attribute.

All these functions are for the sake of a single attribute. That is, the fget function is called when you access (get) the attribute, and the fset function is called when you try to set the attribute.

Simple example

Instantiate Foo and try to play with the instance attribute x.
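A minimal sketch of such a class (the method names are assumptions): Foo stores the real value in _x and exposes it through a property named x.

```python
class Foo:
    def __init__(self):
        self._x = None

    def get_x(self):
        print("getting x")
        return self._x

    def set_x(self, value):
        print("setting x")
        self._x = value

    def del_x(self):
        del self._x

    # x delegates attribute access to the three functions above
    x = property(get_x, set_x, del_x, "The 'x' property.")

f = Foo()
f.x = 10      # goes through set_x
print(f.x)    # goes through get_x, prints 10
```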

I hope you now have a clear picture of what the property function is and how to use it. In many cases we use a property to hide the actual attribute and expose it under another name.

You can also use property as a decorator, something like:
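A sketch of the decorator form (the class and attribute names here are my own):

```python
class Celsius:
    def __init__(self, degrees=0.0):
        self._degrees = degrees

    @property
    def degrees(self):
        """Getter: runs on attribute access."""
        return self._degrees

    @degrees.setter
    def degrees(self, value):
        """Setter: runs on attribute assignment."""
        self._degrees = float(value)

c = Celsius()
c.degrees = 36.6     # goes through the setter
print(c.degrees)     # goes through the getter
```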

Now let's come to the actual thing: how to make an attribute read-only.

It's simple: you just don't define a setter for the property attribute. Let's see the following example.
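A sketch consistent with the discussion that follows (the class name is assumed; the instance is b and the internal attribute is _money):

```python
class BankAccount:
    def __init__(self, money):
        self._money = money          # internal storage

    @property
    def money(self):
        """Read-only view: no setter is defined."""
        return self._money

b = BankAccount(100)
print(b.money)       # 100

try:
    b.money = 500    # no fset defined, so this raises AttributeError
except AttributeError as e:
    print("can't set:", e)

b._money = 500       # the internal attribute is still writable
print(b.money)       # 500
```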

Here, we didn't define a setter for the property attribute, so Python won't allow setting that specific attribute (and you can't delete it either, if you don't define fdel). Thus, the attribute becomes read-only. You can still access b._money, and you can set that internal attribute; there is no restriction over it.

Descriptor methods __get__ and __set__

These magic methods define a descriptor for an object attribute. For a complete understanding of descriptor magic methods and their usage, please check the separate article on descriptors.

Like the fget and fset functions that the property function takes, __get__ defines the behavior when the descriptor's value is retrieved, and __set__ defines the behavior when the descriptor's value is set (assigned). __delete__ defines the behavior when the attribute is deleted.

To restrict setting an attribute and make it read-only, define the descriptor's __set__ magic method and raise an exception in it.

Let's see a simple example demonstrating a descriptor object and read-only attributes using descriptors.
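Here is a sketch matching the names used below (a Speed descriptor on a Vehicle class, with instance v):

```python
class Speed:
    """Data descriptor that allows reading but forbids assignment."""
    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError("speed is read-only")

class Vehicle:
    speed = Speed(80)    # descriptors must be defined on the class

v = Vehicle()
print(v.speed)           # 80, via Speed.__get__

try:
    v.speed = 120        # triggers Speed.__set__
except AttributeError as e:
    print(e)             # speed is read-only
```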

Let's see the result when trying to set the speed attribute.

As you can see, we can't set the attribute speed on the instance v of Vehicle, because we restrict it in the descriptor method __set__ of the class Speed.

Python __slots__

The basic use of __slots__ is to save space in objects. Instead of each object having a dynamic __dict__ that allows adding attributes at any time, there is a static structure which does not allow additions after creation. This also gains us some performance due to the lack of dynamic attribute assignment; that is, it saves the overhead of one dict for every object that uses slots.

If you are creating a lot (hundreds or thousands) of instances of the same class, this can be a useful memory and performance optimization.

Using __slots__ means you are declaring a static set of attributes on the class. This is how we save memory and gain performance, as there is no dynamic attribute assignment; it also means you can't set new attributes on the object.
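A sketch of the behavior described (the class name is assumed; the attribute c matches the discussion below):

```python
class Point:
    __slots__ = ('a', 'b')   # only these attributes may ever be set

p = Point()
p.a = 1
p.b = 2

try:
    p.c = 3                  # not in __slots__, so this raises AttributeError
except AttributeError:
    print("can't set c")
```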

You see, in the above example we are not able to set attribute c, as it is not given in __slots__. Anyway, __slots__ is about restricting assignment of new attributes; you can combine it with either of the above two methods to make existing attributes read-only.



[1] __get__ and __set__ data descriptors don't work when assigned as instance attributes; they must be defined on the class.


How to check memory usage command line on linux

Linux has a set of command-line utility tools; one of them is free. We use it to determine used physical memory, swap usage, and available memory. That is, you can get RAM statistics using this command.

Display memory (RAM) information

To display physical memory information, just type free.

But the default output is less readable. You can ask free to display memory in megabytes or kilobytes instead.

free takes several arguments, which we can use to format the output as needed.

Typical usage of command free


To check memory statistics periodically at regular intervals, use:

free -k -s 1

This displays memory in kilobytes, refreshing every second. To monitor memory usage in real time, you are better off using top or another monitoring tool available on Linux rather than this command.

If you want to check the memory consumed by a specific process, you can use the command pmap.

Command pmap

The synopsis of the command pmap is: pmap [options] pid [...]


How to write and configure startup or init script using linux command chkconfig

If you restart your server and want your whole stack up and running again, you need to configure a startup or init script. It's just a simple shell script with some extra header comments that the command chkconfig can understand. The approach to adding a startup script is the same for most distributions, with a little variation in syntax or flow. On Fedora or Red Hat based distributions we use the command chkconfig to add a startup or init script.

First you need to understand run levels, to understand how Linux initiates the startup script.

Run Levels

An init script runs at the run levels it is configured for; those run levels are described below. Each run level is a state of the machine; for example, 0 means shutdown and 6 means reboot. This configuration varies slightly between Linux distributions. Different states are assigned from 0 to 6, so there are 7 run levels in total.

After the Linux kernel has booted, the /sbin/init program reads the /etc/inittab file to determine the behavior for each runlevel. Unless the user specifies another value as a kernel boot parameter, the system will attempt to enter (start) the default runlevel.


0 Halt
1 Single User Text Mode
2 Not Used (User Definable)
3 Full Multi User Text Mode
4 Not Used (User Definable)
5 Full Multi User Graphical Mode
6 Reboot

Here is a slightly more descriptive version. Modes may vary a little from distribution to distribution.

ID Name Description
0 Halt Shuts down the system.
1 Single-user mode Mode for administrative tasks.
2 Multi-user mode Does not configure network interfaces and does not export networks services.
3 Multi-user mode with networking Starts the system normally.
4 Not used/user-definable For special purposes.
5 Multi-user mode with GUI Same as runlevel 3 plus a display manager.
6 Reboot Reboots the system.

As I said, it's just a shell script with some extra comments that the chkconfig program can understand. Those comments look like this:
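For example, a minimal, hypothetical init script showing the two comment lines chkconfig actually parses:

```shell
#!/bin/sh
# chkconfig: 2345 90 10
# description: Example service managed by chkconfig (hypothetical).
#
# The chkconfig line above means: default-on in runlevels 2, 3, 4 and 5,
# start priority 90, stop priority 10.

start() { echo "starting example-service"; }
stop()  { echo "stopping example-service"; }

start   # demo call; a real script would dispatch on "$1" (start|stop|restart)
```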

Chkconfig Configuration

chkconfig is the tool with which we configure our startup script for different runlevels. This command can also be used to check the currently active init scripts.

Install chkconfig

If the command chkconfig is not available, it can be installed using the package manager (yum install chkconfig on CentOS).

List Configured Startup Services

Run chkconfig --list to see them. The output describes at which run levels each corresponding script is set to run.

You can query a specific service by passing the service name as an argument: chkconfig --list service_name



To configure a shell script as an init script using chkconfig, we keep special comments in the file, usually right after the shebang line; they instruct chkconfig how the init script should be configured.
Init scripts are usually located in /etc/init.d.

Enable Service

You can enable a service to run in runlevels 2, 3, 4 and 5 by using the following command:

# chkconfig service_name on

Let's say you want to enable the httpd service to run on startup:

# chkconfig httpd on 

If you want to enable a service at particular run levels, pass the option --level followed by the run levels (0 to 6), something like:

# chkconfig --level runlevels service_name on

For instance, to enable the httpd service in runlevels 2, 3 and 4 by specifying the runlevels explicitly:

# chkconfig --level 234 httpd on

Disable Service

To disable a service in all runlevels (2,3,4 and 5)

# chkconfig service_name  off

Let's say you want to disable the httpd service in all runlevels:

# chkconfig httpd off

You can specify the run levels explicitly with the option --level to disable the service at those specific runlevels:

# chkconfig --level runlevels service_name off

For instance, to disable httpd in runlevels 2 and 3:

# chkconfig --level 23 httpd off