How to create read-only attributes and restrict setting attribute values on objects in Python

There are different ways to prevent setting attributes and to make attributes read-only on objects in Python. We can use any one of the following ways to make attributes read-only:

  1. Property Descriptor
  2. Using descriptor methods __get__ and __set__
  3. Using __slots__ (only restricts setting arbitrary attributes)

Property Descriptor

Python ships with a built-in function called property. We can use this function to customize the way attributes are accessed and assigned.

First, I will explain property itself before showing how it is useful for making an attribute read-only.

The typical signature of the function property is

 property([fget[, fset[, fdel[, doc]]]])

As you can see, this function takes four arguments:

fget is a function for getting an attribute value, fset is a function for setting an attribute value, and fdel is a function for deleting an attribute. doc creates a docstring for the attribute.

All these functions serve a single attribute. That is, the fget function will be called when you access/get the attribute, and the fset function will be called when you try to set the attribute.

Simple example

Instantiate Foo and try to play with the instance attribute x.
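The article's original listing is missing here; below is a minimal sketch of what it likely showed (the class name Foo comes from the text, but the getter/setter names and the internal _x attribute are my assumptions):

```python
class Foo:
    def __init__(self):
        self._x = 0

    def get_x(self):
        return self._x

    def set_x(self, value):
        self._x = value

    def del_x(self):
        del self._x

    # wire the three functions (plus a docstring) into a single attribute
    x = property(get_x, set_x, del_x, "I am the 'x' property.")

f = Foo()
f.x = 10       # calls set_x
print(f.x)     # calls get_x and prints 10
```

Deleting the attribute with del f.x would call del_x in the same way.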

I hope you now see what exactly the function property is and how we use it. In many cases, we use property to hide actual attributes and abstract them behind another name.

You can also use property as a decorator. Something like:
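The decorator form of the example is also missing from the extracted text; a sketch under the same assumptions (class Foo, internal attribute _x):

```python
class Foo:
    def __init__(self):
        self._x = 0

    @property
    def x(self):
        """The 'x' property."""
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

f = Foo()
f.x = 42       # goes through the setter
print(f.x)     # goes through the getter and prints 42
```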

Now let's come to the actual thing: how we make an attribute read-only.

It's simple: you just don't define a setter for the property attribute. Let's see the following example.

Here, since we didn't define a setter for the property attribute, Python won't allow setting that specific attribute; you can't delete it either, because we didn't define fdel. Thus, the attribute becomes read-only. Still, you can access b._money directly, and you can set that attribute; there is no restriction on setting this internal attribute.

Descriptor methods __get__ and __set__

These magic methods define a descriptor for an object attribute. To get a complete understanding of descriptor magic methods and their usage, please check the other article.

Like the fget and fset functions that the property function takes, __get__ is used to define behavior when the descriptor's value is retrieved, and __set__ is used to define behavior when the descriptor's value is set (assigned). __delete__ is used to define behavior when the descriptor is deleted.

To restrict setting an attribute and make it read-only, you have to use the descriptor's __set__ magic method and raise an exception in it.

Let's see a simple example demonstrating a descriptor object and read-only attributes using descriptors.
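The original listing is missing; a sketch using the Speed and Vehicle names from the discussion below (the stored value 60 is my assumption):

```python
class Speed:
    """Data descriptor that forbids assignment, making the attribute read-only."""
    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError("speed is a read-only attribute")

class Vehicle:
    # descriptors only work as class attributes, not instance attributes
    speed = Speed(60)

v = Vehicle()
print(v.speed)      # 60

try:
    v.speed = 120
except AttributeError as err:
    print(err)      # speed is a read-only attribute
```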

Let's see the result of trying to set the speed attribute.

As you can see here, we can't set the attribute speed on the instance v of Vehicle, because we restrict it in the descriptor method __set__ of the class Speed.

Python __slots__

The basic usage of __slots__ is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at any time, there is a static structure which does not allow additions after creation. This also gains us some performance due to the lack of dynamic attribute assignment; that is, it saves the overhead of one dict for every object that uses slots.

If you are creating lots of (hundreds or thousands of) instances of the same class, __slots__ can be a useful memory and performance optimization tool.

Using __slots__ means you are defining static attributes on the class. This is how we save memory and gain performance, as there is no dynamic attribute assignment. It also means you can't set new attributes on the object.
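The example the next paragraph refers to is missing; a sketch using attribute names matching that discussion (the class name Point is my assumption):

```python
class Point:
    __slots__ = ('a', 'b')   # only these attributes may ever be set

p = Point()
p.a = 1
p.b = 2

try:
    p.c = 3                  # c is not listed in __slots__, so this fails
except AttributeError as err:
    print(err)
```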

You see, in the above example we are not able to set the attribute c, as it is not given in __slots__. Anyway, __slots__ is about restricting assignment to new attributes; you can combine it with either of the above two methods to make existing attributes read-only.

 

References:

[1] __get__ and __set__ data descriptors don’t work on instance attributes http://stackoverflow.com/questions/23309698/why-is-the-descriptor-not-getting-called-when-defined-as-instance-attribute

[2] http://stackoverflow.com/questions/472000/usage-of-slots

How to check memory usage command line on linux

Linux has a rich set of command-line utility tools; one of them is free. We use it to determine used physical memory, swap usage, and available memory. That is, you can get RAM statistics using this command.

Display memory (RAM) information

To display physical memory information, just type free.

But the default output is less readable. You can ask free to display memory in megabytes or kilobytes.

free takes several arguments. We can use them to format and display output as we need.

Typical usage of command free
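The original usage listing is missing; a few typical invocations (flags are from the procps free command; the exact set available depends on your version):

```shell
free        # default output
free -m     # sizes in megabytes
free -k     # sizes in kilobytes
free -t     # add a line showing totals
```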

 

To check memory statistics periodically at regular intervals, use the -s option. Here, the command

free -k -s 1

displays memory in kilobytes every 1 second. To monitor memory usage in real time, you are better off using top or other monitoring tools available on Linux rather than this command.

If you want to check the memory consumed by a specific process, you can use the command pmap.

Command pmap

Command pmap synopsis
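The synopsis itself is missing from the extracted text; the basic usage is pmap followed by a process ID (here $$, the current shell, is used just so the example is runnable):

```shell
# show the memory map (and total mapped memory) of the current shell
pmap $$

# extended output with RSS and dirty-page columns
pmap -x $$
```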

 

How to write and configure startup or init script using linux command chkconfig

If you restart your server and want your whole stack up and running, you need to configure a startup or init script. It's just a simple shell script with some extra headers that the command chkconfig can understand. The approach for adding a startup script is the same for most distributions, with a little variation in syntax or flow. On Fedora or Red Hat Linux distributions, we use the command chkconfig to add a startup or init script.

First, you need to understand run levels in order to understand how Linux initiates startup scripts.

Run Levels

There are different run levels in which an init script runs, as configured; those run levels are described below. Each run level is a state of the machine; for example, 0 describes shutdown and 6 describes reboot. This configuration varies slightly between Linux distributions. There are different states assigned from 0 to 6, that is, a total of 7 run levels.

After the Linux kernel has booted, the /sbin/init program reads the /etc/inittab file to determine the behavior for each runlevel. Unless the user specifies another value as a kernel boot parameter, the system will attempt to enter (start) the default runlevel.

Level Description

0 Halt
1 Single User Text Mode
2 Not Used (User Definable)
3 Full Multi User Text Mode
4 Not Used (User Definable)
5 Full Multi User Graphical Mode
6 Reboot

Here is a slightly more descriptive version. Modes may vary a little from distribution to distribution.

ID Name Description
0 Halt Shuts down the system.
1 Single-user mode Mode for administrative tasks.
2 Multi-user mode Does not configure network interfaces and does not export network services.
3 Multi-user mode with networking Starts the system normally.
4 Not used/user-definable For special purposes.
5 Multi-user mode with GUI Same as runlevel 3 + display manager.
6 Reboot Reboots the system.

As I said, it's just a shell script with some extra comments that the chkconfig program can understand. Those comments look like:
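The comment block itself is missing from the extracted text; a typical header looks like the following (myservice, the runlevels 2345, and the start/stop priorities 90/10 are placeholders):

```
#!/bin/bash
#
# myservice     Start and stop the myservice daemon
#
# chkconfig: 2345 90 10
# description: myservice is an example daemon; this line describes \
#              what the service does.
```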

Chkconfig Configuration

chkconfig is the tool with which we configure our startup script to run in different runlevels. This command can also be used to check currently active init scripts.

Install chkconfig

If the command chkconfig is not available, it can be installed using your package manager (e.g. yum install chkconfig on CentOS).

List Configured Startup Services

You can list all configured services with chkconfig --list. The output describes at which runlevels each corresponding script is set to run.

You can query a specific service by passing that service name as an argument, for example chkconfig --list httpd.

 


To configure a shell script as an init script using chkconfig, we should keep the following comments in the file, usually right after the shebang line. These comments instruct chkconfig about the init script configuration.
Init scripts are usually located in /etc/init.d.
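The full script the article showed is missing; a minimal skeleton in the usual Red Hat style looks like the following (the service name myservice, runlevels, and priorities are placeholders; adapt the start/stop commands to your stack):

```
#!/bin/bash
# chkconfig: 2345 90 10
# description: Example init script for a hypothetical service "myservice".

# Source the function library (daemon, killproc, status helpers).
. /etc/init.d/functions

case "$1" in
  start)
    echo "Starting myservice"
    # start your stack here, e.g. daemon /usr/local/bin/myservice
    ;;
  stop)
    echo "Stopping myservice"
    # stop it here, e.g. killproc myservice
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```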

Enable Service

You can enable a service to run in runlevels 2, 3, 4 and 5 by using the following command:

# chkconfig service_name on

Let's say you want to enable the httpd service to run on startup:

# chkconfig httpd on

If you want to enable the service in particular runlevels, pass the option --level to this command followed by runlevels from 0 to 6. Something like,

# chkconfig service_name on --level runlevels

For instance, to enable the httpd service in runlevels 2, 3 and 4 by specifying the runlevels explicitly:

# chkconfig httpd on --level 234

Disable Service

To disable a service in all runlevels (2, 3, 4 and 5):

# chkconfig service_name off

Let's say you want to disable the httpd service in all runlevels:

# chkconfig httpd off

You can specify the runlevels explicitly with the option --level to disable the service in those specific runlevels:

# chkconfig service_name off --level runlevels

For instance, to disable httpd in runlevels 2 and 3:

# chkconfig httpd off --level 23

REFERENCES

[1] https://www.redhat.com/archives/psyche-list/2002-December/msg01555.html

 

How to enable gzip compression in apache

What is gzip compression? Why do we need to enable it?

Gzip is a file format and a method of compressing and decompressing files using its algorithm to reduce file size. It is used by web servers, which send data to HTTP clients with gzip compression for faster data transfer and lower bandwidth consumption.

Enabling gzip compression is a best practice and is highly recommended, as pages are likely to load faster in browsers.

Enable gzip compression to improve website speed

How to check whether gzip compression is enabled

You can check whether gzip compression is enabled on a particular website using the following methods.

You can use the Google PageSpeed Insights tool not only to check gzip compression but also to analyze the performance of a website.

You can also check whether gzip compression is enabled on a server using the command-line tool curl on Linux. Try out the following:
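The command itself is missing from the extracted text; based on the -I and -H options explained below, it would have looked like this (example.com is a placeholder for the site you want to test):

```
$ curl -I -H 'Accept-Encoding: gzip' https://example.com/
```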

Replace the URL with whatever website you want to check for gzip compression.

Where,

-I   means: only make a HEAD request to the server to get the headers

-H   specifies the accepted encoding gzip using the header 'Accept-Encoding'

If you see the header Content-Encoding with gzip in the response headers, then compression is enabled on the server and it's working.

It works in such a way that the client specifies its supported compression and encoding using the header Accept-Encoding. The server will use compression if it is enabled; otherwise it will send plain text back to the client. The server notifies the client of the encoding format and compression through the header Content-Encoding in the response headers.

Here is the curl request on google.co.in to verify gzip compression.

We can see the header Content-Encoding: gzip in the response headers, which means gzip compression is enabled.

Enable gzip compression in apache

Gzip compression can be enabled by directly changing the httpd conf file, that is httpd.conf, or you can use a .htaccess file to target only a specific directory, path, or site.

Add the following code to /etc/httpd/conf/httpd.conf. If Apache is installed somewhere else, add it to that specific httpd.conf file.
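The configuration block is missing from the extracted text; a common minimal form uses mod_deflate (the exact list of content types is my choice, extend it as needed):

```apache
# Requires mod_deflate to be loaded; compress common text-based content types
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css
    AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>
```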

To enable gzip compression via .htaccess, put the following code into the .htaccess file in the desired site directory.
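Again, the original snippet is missing; the same mod_deflate directives work in a .htaccess file (content types are my choice):

```apache
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>
```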

Restart Apache. That's it; Apache will compress the responses being sent to clients. Enabling gzip compression should give your site performance a real boost.

You can test if gzip compression is enabled or not using curl as mentioned in this article.

Gzip compression is recommended for all types of text files, including the following files and extensions:

.html
.php
.css
.js
.txt
.json

But gzip compression is not recommended for graphic files or .zip files (in the sense of files which are already compressed), because by compressing these kinds of files we hardly save a few bytes, while the added, ineffective compression work can even increase loading time.

Why GZIP compression ?

As explained, gzip compression saves a lot of bandwidth by reducing file size; it can save up to 50% to 80% of bandwidth. So it reduces the browser's download and waiting time for resources. If you enable gzip compression, it won't affect unsupported browsers; they simply fall back to a normal (uncompressed) download.

 

Summary

GZIP compression is the process of zipping or compressing files on the server before they are delivered or transferred over the network to the browser. The browser uncompresses the data before using it. As it saves 50% to 80% of bandwidth, enabling it can increase the performance of a website considerably.

Monitor apache webserver realtime using Apachetop

Apachetop is a monitoring tool for watching the performance of an Apache server and the requests being served, live. It's loosely based on the application mytop. It displays the current number of reads and writes, the number of requests processed so far, and the current request being processed. This tool uses the Apache access_log to monitor the Apache web server in real time.

Install apachetop

If you are using CentOS, you can install this application using yum as follows:

$ yum install apachetop

On Debian based systems you can install using apt-get as follows,

$ apt-get install apachetop

If you want to install it from source, download or clone it from GitHub and then compile and install it.

Clone Repository

$ git clone https://github.com/tessus/apachetop.git
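The build steps are missing from the extracted text; for an autotools-based project like this, they are typically the following (check the repository's README for the exact sequence, e.g. whether it ships an autogen.sh script):

```
$ cd apachetop
$ autoreconf -i     # or ./autogen.sh, if the repository provides one
$ ./configure
$ make
$ sudo make install
```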

How to use apachetop

Run the application to monitor the Apache web server; to do so, type the apachetop command. You will see a screen something like the one below.

Once apachetop is running, you can see the help by hitting the letter h.

You can filter requests by URL, referrer, or host. You can toggle filters using f while apachetop is running.

 

This application by default assumes the path of the access_log file is /var/log/httpd/access_log. If you have a custom installation of Apache or are running Apache on SCL (Software Collections), you can specify the path with the option -f. i.e.,

$ apachetop -f /opt/rh/httpd24/root/etc/httpd/logs/access_log

 

GIT: How to push code to repository on remote ssh server

You can push to a git repository on a remote ssh server. You can configure the remote repository as a git remote to be able to access it quickly using a short remote name.

To be able to pull and push changes to a repository on a remote ssh server, you have to specify it using one of the following URL formats.

For instance, if the repository is located at /opt/project, the URL would be

ssh://user@server/opt/project

You can push changes using following syntax

git push ssh://user@server/opt/project  master

or

git push server master

Where,

server is remote configured using git remote  command
master is the branch to push

or, it can also be specified in scp-like syntax.

For instance, if the repository is located at /opt/project, the URL would be

user@server:/opt/project

You can push changes using the following syntax:

git push user@server:/opt/project master

or, if you have configured the remote as server (as above),

git push server master

When you push changes, you will be prompted for a password. You can configure ssh keys to bypass password authentication when you push changes.

How to fix GIT object file is empty error

 

You might have encountered this weird "git object file is empty" error when your machine or laptop ran out of power while you were working with git. Boom 💣

In this case, you can recover your repository simply by cloning it afresh from the remote repository if you don't have many local changes. Otherwise, you can follow the procedure below to recover your local repository.

When the repository is corrupted, you will see an error something like the one below when you try to commit your local/staged changes.

Take a backup of directory .git

$ cp  -a  .git .git-old

Run git fsck to get the corrupted files/objects, i.e., git fsck --full

Now, our job is to remove all the empty files reported by git fsck.
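One common way to do this (it is also what the Stack Overflow thread in the references suggests) is with find, run from the repository root:

```
$ find .git/objects -type f -empty -delete
```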

Once you delete all the empty files reported by git fsck, git fsck will start showing a missing blob.

If you are lucky, your problem will be solved with the following step.

NOTE: dangling blobs shown by git fsck are not errors.

After deleting all the empty files, run git fsck again. If it reports a "missing blob", check git reflog: if that shows "fatal: bad object HEAD", follow the steps below; otherwise your problem will probably be solved with just git reset.

$ git reset

If your problem is not solved and git reflog shows "fatal: bad object HEAD", run the following commands to fix the broken HEAD.

Now, update HEAD with the parent of the last commit.
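The commands themselves are missing from the extracted text; one way to do this (assuming the branch is master; the sha placeholder must be replaced with a commit hash you find in the log) is to read the last entries of the ref log file and point HEAD at a known-good commit:

```
$ tail -n 2 .git/logs/refs/heads/master
$ git update-ref HEAD <sha-of-last-good-commit>
```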

Now, run git fsck. If you see an error something like "invalid sha1 pointer in cache-tree", you can fix it by removing the .git/index file and resetting the repo using git reset.
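That is, from the repository root:

```
$ rm -f .git/index
$ git reset
```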

Check the status, then commit if required; your problem should be solved.

 

References:

[1] http://stackoverflow.com/questions/11706215/how-to-fix-git-error-object-file-is-empty

 

How to mount AWS S3 bucket on linux

Amazon AWS offers an amazing cloud storage service called S3 (Simple Storage Service). It is fast and cheap and can be configured with the AWS CDN (Content Delivery Network). It works in such a way that it contains top-level directory-like things called buckets. Buckets can hold both files and directories. If you often work with S3, it would be useful to mount an AWS S3 bucket on your machine or EC2 instance. Once you mount the S3 bucket, you can use it like any other hard disk or partition.

Requirements to mount S3 bucket:

Access Credentials

* AWS Access Key ID
* Secret Access Key

* Name of the bucket you want to mount
* Read/write permissions to the bucket
* s3fs-fuse

You will get AWS access credentials when you create an IAM user. We need those credentials, with the essential permissions, to successfully mount the S3 bucket.

Install s3fs

We will use the s3fs-fuse software to mount the S3 bucket. To get started, install s3fs on the machine either using a package manager or by compiling it from source. In this article, we will install it from source.

Download or clone s3fs-fuse from GitHub

Before we install s3fs, make sure that you have all the dependencies. Install the following dependencies.
On Debian/Ubuntu:

sudo apt-get install automake autotools-dev g++ git libcurl4-gnutls-dev libfuse-dev libssl-dev libxml2-dev make pkg-config

On CentOS:

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

Clone the repository. (To clone, you need git. If you don't have git, you can download it from here.)

Let’s clone, compile and install s3fs
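The commands are missing from the extracted text; the standard sequence from the s3fs-fuse project is:

```
$ git clone https://github.com/s3fs-fuse/s3fs-fuse.git
$ cd s3fs-fuse
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
```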

Mount AWS S3 bucket

If you installed s3fs successfully, you can now mount an AWS S3 bucket as a disk or partition. To do so, you need AWS access credentials. If you don't have them, you can create them by creating an IAM user in AWS IAM, or you can ask your administrator for access credentials. Once you have them, put them into a file as AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY. You can write these credentials to a hidden file in your home directory.

Change the permissions to make sure only you can access it.
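The two steps above might look like this (the key and secret are placeholders; ~/.passwd-s3fs is the conventional s3fs credentials file in the home directory):

```
$ echo AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs
```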

Finally, mount your bucket using the following commands.

Create a mount directory where you will mount the bucket, and get the bucket name.
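The commands are missing from the extracted text; with s3fs they look like this (the bucket name and mount point are placeholders; passwd_file is a real s3fs option):

```
$ mkdir /path/to/mountpoint
$ s3fs your-bucket-name /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs
```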

If you encounter any errors, enable debug output:
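For example, using s3fs's debug options (dbglevel and curldbg are documented s3fs options; -f keeps it in the foreground):

```
$ s3fs your-bucket-name /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o dbglevel=info -f -o curldbg
```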

You can also mount on boot by entering the following line to /etc/fstab
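The fstab line is missing from the extracted text; the classic s3fs form is (bucket name and mount point are placeholders):

```
s3fs#your-bucket-name /path/to/mountpoint fuse _netdev,allow_other 0 0
```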

or
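Newer s3fs versions also support the fuse.s3fs filesystem type in fstab:

```
your-bucket-name /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0
```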

You can have a global credentials file at /etc/passwd-s3fs.

If you are mounting on boot, you may also need to make sure the netfs service is started on boot.

😉

How to reset wordpress user password using mysql

WordPress uses an md5 hash for passwords. We can reset a WordPress user password using a single MySQL query.

To reset password using mysql,

Login to mysql and select the wordpress database

The following single query resets a WordPress user password:

update wp_users set user_pass=md5('new_password') where user_login='username';

Replace new_password and username with your password and username.

How to shuffle lines in a file in Linux

We can shuffle the lines in a file in Linux using the following commands:

  • shuf
  • sed and sort
  • awk
  • python

As an example, we will take a file shuffle_mylines.txt containing the numbers 1 to 10, each on a new line.

Create a file using following command

$ seq 10  > shuffle_mylines.txt

Command shuf

This command is lightweight and straightforward. You just need to call it with the file name as an argument.
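The listing is missing from the extracted text; using the sample file from above, it is simply:

```shell
# create the sample file: numbers 1 to 10, one per line
seq 10 > shuffle_mylines.txt

# print the lines of the file in random order
shuf shuffle_mylines.txt
```

shuf also supports writing the result to a file directly with its -o option.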

Shuffle lines using sed

You may already know about the command sed (Stream Editor). It is one of the commands widely used for text processing in Unix/Linux. We can't shuffle lines using a single sed command, but we can do it by combining it with other commands. Let's take a look at the following command.
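The command itself is missing from the extracted text; the following is my reconstruction from the breakdown below (it relies on bash's $RANDOM variable; the original may have arranged the pieces slightly differently, e.g. using tail):

```shell
seq 10 > shuffle_mylines.txt   # sample input

# prefix each line with a random number, sort on the prefix, strip it again
cat shuffle_mylines.txt \
    | while read x; do echo "$RANDOM:$x"; done \
    | sort -n \
    | sed 's/^[0-9]*://'
```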

How does it work?

Breakdown of above command,

Commands we have used in the above example are,

  • cat
  • while loop
  • $RANDOM   environment variable
  • sort
  • tail
  • sed

 

Now, let's see how this command works. First, the command cat reads the file content and pipes it to the shell while loop.

The while loop reads the piped input into the variable x and iterates over all lines to generate output of the form <random_number>:<line>, as you can see from $RANDOM:$x. $RANDOM is a shell variable; each time you query it, you get a random number, which is what shuffles the lines.

Then, we will sort output of above while loop using sort command

The output of this command will be randomly shuffled lines on every run, because of $RANDOM.

At this stage, the output lines are still prefixed with the random values.

To remove the prepended random values, we use sed.

That's it. On every execution of this command you will get shuffled lines. You can redirect the output to a new file using > or >> if you want to store it.

 

Shuffle lines using awk

awk is a programming language specially designed for text processing. We will use it to shuffle lines.
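The original listing is missing; one way, sketched here as my own reconstruction, is to do the whole shuffle inside awk:

```shell
seq 10 > shuffle_mylines.txt   # sample input

# read all lines into an array, then print them out in random order
# (a Fisher-Yates style selection)
awk 'BEGIN { srand() }
     { line[NR] = $0 }
     END {
         for (i = NR; i > 0; i--) {
             j = int(rand() * i) + 1
             print line[j]
             line[j] = line[i]
         }
     }' shuffle_mylines.txt
```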

 

Here is another example using awk. It is similar to the sed and sort example.
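A sketch of such a combined approach (my reconstruction, since the original block is not shown): awk's rand() prefixes each line with a random number, sort orders on the prefix, and cut strips it again.

```shell
seq 10 > shuffle_mylines.txt   # sample input

awk 'BEGIN { srand() } { print rand() ":" $0 }' shuffle_mylines.txt \
    | sort -n \
    | cut -d: -f2-
```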

Shuffle lines in file using python

Python is a popular scripting language, widely used today for everything from big projects to small scripts. We will see how you can shuffle lines using Python.

Python Example 1

In this example, we pass the file name as a command-line argument, read the file, shuffle its lines, and print them to the terminal.
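The script itself is missing from the extracted text; a minimal sketch follows. So that it is self-contained, it recreates the sample file and hard-codes the name instead of reading sys.argv as the original did:

```python
import random

# recreate the sample file from the article: numbers 1 to 10, one per line
with open("shuffle_mylines.txt", "w") as f:
    f.write("\n".join(str(n) for n in range(1, 11)) + "\n")

def shuffle_lines(path):
    """Read a file and return its lines in random order."""
    with open(path) as f:
        lines = f.read().splitlines()
    random.shuffle(lines)   # in-place shuffle
    return lines

for line in shuffle_lines("shuffle_mylines.txt"):
    print(line)
```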

The output can be redirected to a file using a redirect operator (> or >>).

Conclusion:

If you are looking for a quick shuffle, the command shuf is the best choice; otherwise, you can have fun using the other ways to shuffle lines in a file.