Manhole service in a Twisted application

What is Manhole?

Manhole is an in-process service that accepts connections (over a UNIX domain socket or, as configured below, over SSH) and presents the stack traces for all threads and an interactive prompt.

Using it, we can access and modify objects or definitions in the running application: for example, we can change or add a method in any class, or change the definition of any method of a class or module.

This allows us to modify a running application without restarting it. That makes tasks like debugging much easier, since you can inspect the values of objects while the program is running.

How to configure it?
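The original snippet is not reproduced here; a minimal sketch of exposing an SSH manhole from a Twisted application could look like the code below. The user name, password, and key paths are assumptions, and recent Twisted releases require you to supply SSH host keys explicitly.

from twisted.conch import manhole, manhole_ssh
from twisted.conch.ssh.keys import Key
from twisted.cred import checkers, portal
from twisted.internet import reactor

DATA = {'foo': 'bar'}          # some state we want to inspect or modify at runtime

def get_manhole_factory(namespace, user, password):
    realm = manhole_ssh.TerminalRealm()
    # every SSH session gets a Manhole interpreter sharing the given namespace
    realm.chainedProtocolFactory.protocolFactory = lambda _: manhole.Manhole(namespace)
    p = portal.Portal(realm)
    checker = checkers.InMemoryUsernamePasswordDatabaseDontUse()
    checker.addUser(user, password)
    p.registerChecker(checker)
    factory = manhole_ssh.ConchFactory(p)
    # newer Twisted versions need explicit host keys; the paths are assumptions
    factory.publicKeys[b'ssh-rsa'] = Key.fromFile('keys/id_rsa.pub')
    factory.privateKeys[b'ssh-rsa'] = Key.fromFile('keys/id_rsa')
    return factory

reactor.listenTCP(2222, get_manhole_factory(globals(), b'admin', b'admin'))
reactor.run()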

Once you run the above snippet, the service will start on TCP port 2222.

You need to use the ssh command to log in to the service.

Here is how it looks.

In the first login we change a value in the DATA dictionary of the running application; as we can see, we get the new value in the second login.

Simple port scanner in Python

A port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify the security policies of their networks, and by attackers to identify network services running on a host and exploit vulnerabilities.

port-scanner.py
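The original listing is not included in this copy; a minimal sketch of a TCP connect scanner (host, port range, and timeout are illustrative) could look like this:

# port-scanner.py -- a minimal TCP connect scanner
import socket
import sys

def scan(host, start_port=1, end_port=1024, timeout=0.5):
    """Try to connect to each port and report the ones that accept."""
    open_ports = []
    for port in range(start_port, end_port + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
    for port in scan(target):
        print("Port %d is open" % port)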

Example

Figure: port scanning output

Python Matplotlib Library with Examples

What Is Python Matplotlib?

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+.

Pyplot is a Matplotlib module which provides a MATLAB-like interface. Matplotlib is designed to be as usable as MATLAB, with the ability to use Python and the advantage of being free and open-source. matplotlib.pyplot is a plotting library used for 2D graphics in the python programming language. It can be used in python scripts, shell, web application servers, and other graphical user interface toolkits.

There are several toolkits available that extend Python Matplotlib functionality.

  • Basemap: It is a map plotting toolkit with various map projections, coastlines, and political boundaries.
  • Cartopy: It is a mapping library featuring object-oriented map projection definitions, and arbitrary point, line, polygon and image transformation capabilities.
  • Excel tools: Matplotlib provides utilities for exchanging data with Microsoft Excel.
  • Natgrid: It is an interface to the “natgrid” library for gridding irregularly spaced data.
  • GTK tools: mpl_toolkits.gtktools provides some utilities for working with GTK. This toolkit ships with matplotlib, but requires pygtk.
  • Qt interface
  • Mplot3d: The mplot3d toolkit adds simple 3D plotting capabilities to matplotlib by supplying an axes object that can create a 2D projection of a 3D scene.
  • matplotlib2tikz: export to Pgfplots for smooth integration into LaTeX documents.

Types of Plots
There are various plots which can be created using python Matplotlib. Some of them are listed below:

  • Bar Graph
  • Histogram
  • Scatter Plot
  • Line Plot
  • 3D plot
  • Area Plot
  • Pie Plot
  • Image Plot

We will demonstrate some of them in detail.

But before that, let me show you an elementary piece of Python Matplotlib code that generates a simple graph.
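The snippet itself is missing from this copy; a three-line example along these lines (the data points are illustrative) produces a basic line graph:

import matplotlib.pyplot as plt
plt.plot([1, 2, 3], [4, 9, 16])   # x values, y values
plt.show()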

So, with three lines of code, you can generate a basic graph using python matplotlib.

Let us see how we can add a title and axis labels to a graph created with the Python Matplotlib library to make it more meaningful. Consider the example below:
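The original example is not reproduced here; a sketch with an illustrative title and axis labels could be:

import matplotlib.pyplot as plt

x = [5, 8, 10]
y = [12, 16, 6]
plt.plot(x, y)
plt.title('Distance covered')   # graph title
plt.xlabel('Day')               # x-axis label
plt.ylabel('Distance (kms)')    # y-axis label
plt.show()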

You can also try various styling techniques to create a better graph: change the width or color of a particular line, or add grid lines. That is where styling comes in.

The style package adds support for easy-to-switch plotting “styles” with the same parameters as a matplotlibrc file.

There are a number of pre-defined styles provided by matplotlib. For example, there’s a pre-defined style called “ggplot”, which emulates the aesthetics of ggplot (a popular plotting package for R). To use this style, just add:
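For example (assuming pyplot is imported as plt):

plt.style.use('ggplot')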

To list all available styles, use:
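Again assuming pyplot is imported as plt:

print(plt.style.available)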

So, let me show you how to add style to a graph using python matplotlib. First, you need to import the style package from python matplotlib library and then use styling functions as shown in below code:
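The styled example is not included in this copy; a sketch using the style package (data, colors, and line widths are illustrative):

from matplotlib import pyplot as plt
from matplotlib import style

style.use('ggplot')
x = [5, 8, 10]
y = [12, 16, 6]
x2 = [6, 9, 11]
y2 = [6, 15, 7]
plt.plot(x, y, 'g', label='line one', linewidth=5)
plt.plot(x2, y2, 'c', label='line two', linewidth=5)
plt.title('Styled graph')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.legend()
plt.grid(True, color='k')    # black grid lines
plt.show()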

Now, we will understand the different kinds of plots. Let’s start with the bar graph!

Matplotlib: Bar Graph
A bar graph uses bars to compare data among different categories. It is well suited when you want to measure the changes over a period of time. It can be plotted vertically or horizontally. Also, the vital thing to keep in mind is that longer the bar, the greater is the value. Now, let us practically implement it using python matplotlib.
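The listing is not reproduced here; a sketch matching the BMW/Audi comparison described below (the distances are made up):

import matplotlib.pyplot as plt

days = [1, 2, 3, 4, 5]
bmw = [50, 40, 70, 80, 20]
audi = [80, 20, 20, 50, 60]

plt.bar(days, bmw, width=0.4, label='BMW')
plt.bar([d + 0.4 for d in days], audi, width=0.4, label='Audi')
plt.xlabel('Day')
plt.ylabel('Distance (kms)')
plt.title('Distance covered per day')
plt.legend()
plt.show()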

When I run this code, it generates a figure like below:


In the above plot, I have displayed a comparison between the distance covered by two cars BMW and Audi over a period of 5 days. Next, let us move on to another kind of plot using python matplotlib – Histogram

Matplotlib – Histogram
Let me first tell you the difference between a bar graph and a histogram. Histograms are used to show a graphical representation of the distribution of numerical data whereas a bar chart is used to compare different entities.

It is an estimate of the probability distribution of a continuous variable (quantitative variable) and was first introduced by Karl Pearson. It is a kind of bar graph.

To construct a histogram, the first step is to “bin” the range of values — that is, divide the entire range of values into a series of intervals — and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent and are often (but are not required to be) of equal size.

Basically, histograms are used to represent data given in the form of some groups or we can say when you have arrays or a very long list. X-axis is about bin ranges where Y-axis talks about frequency. So, if you want to represent the age-wise population in form of the graph then histogram suits well as it tells you how many exist in certain group range or bin if you talk in the context of histograms.

In the code below, I have created bins at intervals of 10, which means the first bin contains elements from 0 to 9, the next from 10 to 19, and so on.
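The listing is missing in this copy; a sketch with bins of width 10 and made-up ages could be:

import matplotlib.pyplot as plt

population_age = [22, 55, 62, 45, 21, 22, 34, 42, 42, 4, 2, 102, 95, 85, 55,
                  110, 120, 70, 65, 55, 111, 115, 80, 75, 65, 54, 44, 43, 42, 48]
bins = list(range(0, 131, 10))     # 0-9, 10-19, ... bins of width 10

plt.hist(population_age, bins, histtype='bar', rwidth=0.8)
plt.xlabel('Age groups')
plt.ylabel('Number of people')
plt.title('Histogram')
plt.show()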

When I run this code, it generates a figure like below:

As you can see in the above plot, the x-axis shows the age-group bins and the y-axis shows how many people fall into each bin. Our biggest age group is between 40 and 50.

Matplotlib: Scatter Plot
A scatter plot is a type of plot that shows the data as a collection of points. The position of a point depends on its two-dimensional value, where each value is a position on either the horizontal or vertical dimension. Usually, we need scatter plots in order to compare variables, for example, how much one variable is affected by another variable to build a relation out of it.
Consider the below example:
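The example itself is not shown here; a sketch with illustrative data and the two groups named in the text below:

import matplotlib.pyplot as plt

x = [1, 1.5, 2, 2.5, 3, 3.5, 3.6]
y = [7.5, 8, 8.5, 9, 9.5, 10, 10.5]
x1 = [8, 8.5, 9, 9.5, 10, 10.5, 11]
y1 = [3, 3.5, 3.7, 4, 4.5, 5, 5.2]

plt.scatter(x, y, color='r', label='high income low salary')
plt.scatter(x1, y1, color='b', label='low income high salary')
plt.xlabel('Saving * 100')
plt.ylabel('Income * 1000')
plt.title('Scatter plot')
plt.legend()
plt.show()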

As you can see in the above graph, I have plotted two scatter plots based on the inputs specified in the code. The data is displayed as two collections of points, ‘high income low salary’ and ‘low income high salary.’

Scatter plot with groups
Data can be classified into several groups. The code below demonstrates:
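The listing is not included here; a sketch with three made-up groups of random points:

import numpy as np
import matplotlib.pyplot as plt

# three groups of random two-dimensional points (data is illustrative)
data = [np.random.rand(30, 2) for _ in range(3)]
colors = ['r', 'g', 'b']
groups = ['group1', 'group2', 'group3']

fig = plt.figure()
ax = fig.add_subplot(111)            # 1x1 grid, first subplot
for points, color, group in zip(data, colors, groups):
    ax.scatter(points[:, 0], points[:, 1], c=color, label=group)
plt.legend()
plt.show()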

The purpose of using “plt.figure()” is to create a figure object, the top-level container for all plot elements.

The whole figure is regarded as the figure object. It is necessary to call “plt.figure()” explicitly when we want to tweak the size of the figure or add multiple Axes objects to a single figure.

fig.add_subplot() adds an Axes to the figure at a position given by a grid specification.
For example, “111” means “1×1 grid, first subplot” and “234” means “2×3 grid, 4th subplot”.

You can easily understand by the following picture:

Next, let us understand the area plot or you can also say Stack plot using python matplotlib.

Matplotlib: Area Plot
Area plots are pretty much similar to the line plot. They are also known as stack plots. These plots can be used to display the evolution of the value of several groups on the same graphic. The values of each group are displayed on top of each other. It allows checking on the same figure the evolution of both the total of a numeric variable and the importance of each group.

A line chart forms the basis of an area plot, where the region between the axis and the line is represented by colors.
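The listing is missing from this copy; a sketch matching the bikes scenario described below (the distances are made up):

import matplotlib.pyplot as plt

days = [1, 2, 3, 4, 5]
bike1 = [50, 40, 70, 80, 20]
bike2 = [80, 20, 20, 50, 60]
bike3 = [70, 20, 60, 40, 60]
bike4 = [80, 20, 20, 50, 60]

plt.stackplot(days, bike1, bike2, bike3, bike4,
              labels=['bike1', 'bike2', 'bike3', 'bike4'])
plt.xlabel('Day')
plt.ylabel('Distance (kms)')
plt.title('Area plot')
plt.legend()
plt.show()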

The above-represented graph shows how an area plot can be plotted for the present scenario. Each shaded area in the graph shows a particular bike with the frequency variations denoting the distance covered by the bike on different days. Next, let us move to our last yet most frequently used plot – Pie chart.

Matplotlib: Pie Chart
In a pie plot, statistical data can be represented in a circular graph where the circle is divided into portions i.e. slices of pie that denote a particular data, that is, each portion is proportional to different values in the data. This sort of plot can be mainly used in mass media and business.
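The listing is not shown here; a sketch for the bikes scenario, with a made-up split of hours that adds up to 24 (as the text below notes):

import matplotlib.pyplot as plt

hours = [8, 5, 5, 6]                        # illustrative split adding up to 24 hrs
bikes = ['bike1', 'bike2', 'bike3', 'bike4']

plt.pie(hours, labels=bikes, startangle=90,
        autopct='%1.1f%%')                  # autopct computes the slice percentages for you
plt.title('Pie plot')
plt.show()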

In the above pie plot, the bikes scenario is illustrated. I have divided the circle into 4 sectors; each slice represents a particular bike and the percentage of distance traveled by it. If you have noticed, these slices add up to 24 hrs, and the size of each pie slice is calculated automatically for you. In this way pie charts are really useful, as you don’t have to be the one who calculates the percentage for each slice of the pie.

Matplotlib: 3D Plot
Plotting data along the x, y, and z axes to enhance the display of data is what 3-dimensional plotting is about. 3D plotting is an advanced technique that gives us a better view of the data along the three axes of the graph.

Line Plot 3D

In the above 3D graph, a line graph is illustrated in a 3-dimensional manner. We make use of a dedicated toolkit to plot 3D graphs, imported with the following syntax.
Syntax for plotting 3D graphs:
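A sketch of the imports and a simple 3D line plot (the helix data is illustrative):

from mpl_toolkits.mplot3d import Axes3D   # registers the '3d' projection
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

z = np.linspace(0, 15, 100)
x = np.sin(z)
y = np.cos(z)
ax.plot(x, y, z, label='3D line')         # a simple helix
ax.legend()
plt.show()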

The Axes3D import is mainly used to create an axis by making use of the projection='3d' keyword. This enables a 3-dimensional view of any data plotted with the code above.

Surface Plot 3D
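The surface-plot listing is not reproduced here; a sketch using plot_surface over an illustrative function:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # registers the '3d' projection

x = np.arange(-5, 5, 0.25)
y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X ** 2 + Y ** 2))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis')
plt.show()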

By default, it will be colored in shades of a solid color, but it also supports color mapping by supplying the cmap argument.

The rstride and cstride kwargs set the stride used to sample the input data to generate the graph. If 1k by 1k arrays are passed in, the default values for the strides will result in a 100×100 grid being plotted. Both default to 10. A ValueError is raised if both the stride and count kwargs are provided.

Matplotlib: Image Plot
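The image-plot example is missing from this copy; a minimal sketch using imshow on illustrative random data:

import numpy as np
import matplotlib.pyplot as plt

image = np.random.rand(10, 10)     # a 10x10 array of random values
plt.imshow(image, cmap='gray')
plt.colorbar()
plt.show()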

Matplotlib: Working With Multiple Plots
I have discussed multiple types of plots in python matplotlib such as bar plot, scatter plot, pie plot, area plot, etc. Now, let me show you how to handle multiple plots.
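The example is not included here; a sketch that puts two plots in one figure with subplots:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)

fig, axes = plt.subplots(2, 1)     # 2 rows, 1 column of subplots
axes[0].plot(x, np.sin(x))
axes[0].set_title('sine')
axes[1].plot(x, np.cos(x))
axes[1].set_title('cosine')
fig.tight_layout()
plt.show()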

Data Analysis with Pandas & Python

What is Data Analysis?
Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. In today’s business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.
Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python.
In this article, I have used Pandas to know more about doing data analysis.
Pandas has three main data structures: Series, DataFrame, and Panel.

Installation
The easiest way to install pandas is to use pip:
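For example:

pip install pandas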

or, Download it from here.

  • pandas Series

A pandas Series can be used as a one-dimensional labeled array.
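The creation snippet is not shown in this copy; assume a series along these lines (the index labels test1..test4 match the lookups used below):

import pandas as pd

a = pd.Series([1, 2, 3, 4], index=['test1', 'test2', 'test3', 'test4'])
print(a)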

Labels can be accessed using the index attribute:
print(a.index)

You can use array indexing or labels to access data in the series.
print(a[1])
print(a['test4'])

You can also apply mathematical operations on pandas series.
b = a * 2
c = a ** 1.5
print(b)
print(c)

You can even create a series of heterogeneous data.
s = pd.Series(['test1', 1.2, 3, 'test2'], index=['test3', 'test4', 2, '4.3'])

print(s)

  • pandas DataFrame

A pandas DataFrame is a two-dimensional array with heterogeneous data, i.e., data aligned in a tabular fashion in rows and columns.
Structure
Let us assume that we are creating a data frame with the student’s data.

Name Age Gender Rating
Steve 32 Male 3.45
Lia 28 Female 4.6
Vin 45 Male 3.9
Katie 38 Female 2

You can think of it as an SQL table or a spreadsheet data representation.
The table represents the data of a group of people with their overall performance rating. The data is represented in rows and columns. Each column represents an attribute and each row represents a person.
The data types of the four columns are as follows −

Column Type
Name String
Age Integer
Gender String
Rating Float

Key Points
• Heterogeneous data
• Size Mutable
• Data Mutable

A pandas DataFrame can be created using the following constructor −
pandas.DataFrame( data, index, columns, dtype, copy)

•  data
data takes various forms like ndarray, series, map, lists, dict, constants and also another DataFrame.
•  index
For the row labels, the index to be used for the resulting frame. Optional; defaults to np.arange(n) if no index is passed.
•  columns
For the column labels. Optional; defaults to np.arange(n) if no column labels are passed.
•  dtype
The data type of each column.
•  copy
Whether to copy the input data. Defaults to False.

There are many methods to create DataFrames.
• Lists
• dict
• Series
• Numpy ndarrays
• Another DataFrame

Creating DataFrame from the dictionary of Series
The following method can be used to create DataFrames from a dictionary of pandas series.
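The construction code is missing in this copy; a sketch (the column names column1/column2 are assumed so that they match the operations later in this post, and the values are illustrative):

import pandas as pd

d = {'column1': pd.Series([100.5, 99.6, 98.2], index=['test1', 'test2', 'test3']),
     'column2': pd.Series([1.0, 2.0, 3.0, 4.0], index=['test1', 'test2', 'test3', 'test4'])}
df = pd.DataFrame(d)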

print(df)

print(df.index)

print(df.columns)

Creating DataFrame from list of dictionaries
l = [{'orange': 32, 'apple': 42}, {'banana': 25, 'carrot': 44, 'apple': 34}]
df = pd.DataFrame(l, index=['test1', 'test2'])

print(df)

You might have noticed that we got a DataFrame with NaN values in it. This is because we didn’t provide the data for those particular rows and columns.

Creating DataFrame from Text/CSV files
Pandas comes in handy when you want to load data from a CSV or a text file. It has built-in functions to do this for us.

df = pd.read_csv('happiness.csv')

Yes, we created a DataFrame from a CSV file. This dataset contains the outcome of the European quality of life survey and is available here. Now that we have stored the DataFrame in df, we want to see what’s inside. First, we will check the size of the DataFrame.

print(df.shape)

It has 105 rows and 4 columns. Instead of printing out all the data, we will look at the first 10 rows.
df.head(10)

There are many more methods to create DataFrames, but now we will look at basic operations on DataFrames.

Operations on DataFrame
We’ll recall the DataFrame we made earlier.

print(df)

Now we want to create a new column from the current columns. Let’s see how it is done.
df['column3'] = (2 * df['column1'] + 3 * df['column2'])/5

We have created a new column column3 from column1 and column2. We’ll create one more using a boolean condition.
df['flag'] = df['column1'] > 99.5

We can also remove columns.
column3 = df.pop('column3')

print(column3)

print(df)

Descriptive Statistics using pandas
It’s very easy to view descriptive statistics of a dataset using pandas. We are gonna use, Biomass data collected from this source. Let’s load the data first.

url = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/biomass.csv'
df = pd.read_csv(url)
df.head()

We are not interested in the unnamed column, so let’s delete that first. Then we’ll see the statistics with one line of code.
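A sketch of those two steps ('Unnamed: 0' is the label pandas typically assigns to such an unnamed column, so the exact name is an assumption):

df = df.drop(columns=['Unnamed: 0'])   # drop the unnamed column
df.describe()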

It’s as simple as that. We can see the count, mean, standard deviation, and other statistics. Now let’s compute a few metrics individually, including pairwise correlations, which describe() does not show.

Mean :
print(df.mean())

Min and Max
print(df.min())

print(df.max())

Pairwise Correlation
df.corr()

Data Cleaning
We need to clean our data. Our data might contain missing values, NaN values, outliers, etc. We may need to remove or replace that data; otherwise, our data might not make any sense.
We can find null values using the following method.

print(df.isnull().any())

We have to remove these null values. This can be done by the method shown below.

newdf = df.dropna()

print(newdf.shape)


Pandas .Panel()
A panel is a 3D container of data. The term Panel data is derived from econometrics and is partially responsible for the name pandas − pan(el)-da(ta)-s.
The names for the 3 axes are intended to give some semantic meaning to describing operations involving panel data. They are −
• items − axis 0, each item corresponds to a DataFrame contained inside.
• major_axis − axis 1, it is the index (rows) of each of the DataFrames.
• minor_axis − axis 2, it is the columns of each of the DataFrames.

A Panel can be created using the following constructor −
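The constructor line itself is not shown in this copy; based on the parameter list below it is:

pandas.Panel(data, items, major_axis, minor_axis, dtype, copy)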
The parameters of the constructor are as follows −
• data – Data takes various forms like ndarray, series, map, lists, dict, constants and also another DataFrame
• items – axis=0
• major_axis – axis=1
• minor_axis – axis=2
• dtype – the Data type of each column
• copy – Copy data. Default, false

A Panel can be created using multiple ways like −
• From ndarrays
• From dict of DataFrames
• From 3D ndarray

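The creation snippet is missing; a sketch that creates an empty Panel and one from a 3D ndarray (note that Panel only exists in older pandas releases, as it was removed in pandas 1.0):

import numpy as np
import pandas as pd

p = pd.Panel()                        # an empty panel
print(p)

data = np.random.rand(2, 4, 5)        # 2 items, 4 rows (major_axis), 5 columns (minor_axis)
p = pd.Panel(data)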
print(p)

Note − Observe the dimensions of the empty panel and the panel above; all the objects are different.

From dict of DataFrame Objects
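A sketch with two items, shaped so that Item1 is a 4×3 DataFrame as described further below:

import numpy as np
import pandas as pd

d = {'Item1': pd.DataFrame(np.random.randn(4, 3)),
     'Item2': pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(d)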

print(p)

Selecting the Data from Panel
Select the data from the panel using −
• Items
• Major_axis
• Minor_axis

Using Items

print(p['Item1'])

We have two items, and we retrieved Item1. The result is a DataFrame with 4 rows and 3 columns, which are the major_axis and minor_axis dimensions.

Using major_axis
Data can be accessed using the method panel.major_xs(index).
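For example:

print(p.major_xs(1))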

Using minor_axis
Data can be accessed using the method panel.minor_xs(index).

print(p.minor_xs(1))

 

How to configure a Django app using Gunicorn?

Django

Django is a Python web framework used for developing web applications. It is fast, secure, and scalable. Let us see how to configure a Django app using Gunicorn.

Before proceeding to actual configuration, let us see some intro on the Gunicorn.

Gunicorn

Gunicorn (Green Unicorn) is a WSGI (Web Server Gateway Interface) server implementation commonly used to run Python web applications. It implements the PEP 3333 server specification, so it can run any web application that provides the WSGI application interface; applications written in Django, Flask, or Bottle do.

Installation
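The install command is not shown in this copy; Gunicorn is typically installed with pip:

pip install gunicorn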

Gunicorn coupled with Nginx or any other web server works as a bridge between the web server and the web framework. The web server (Nginx or Apache) can be used to serve static files, while Gunicorn handles the requests to and responses from the Django application. I will try to write another blog in detail on how to set up a Django application with Nginx and Gunicorn.

Prerequisites

Please make sure you have the packages below installed on your system; a basic understanding of Python, Django, and Gunicorn is recommended.

  • Python > 3.5
  • Gunicorn > 15.0
  • Django > 1.11

Configure Django App Using Gunicorn

There are different ways to configure Gunicorn; I am going to demonstrate running the Django app using a Gunicorn configuration file.

First, let us start by creating the Django project. You can do so as follows.
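The command is not shown here; with an assumed project name of helloapp it would be:

django-admin startproject helloapp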

After starting the Django project, the directory structure looks like this.
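For a project named helloapp (as assumed above), the default layout is:

helloapp/
├── helloapp/
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py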

The simplest way to run your Django app with Gunicorn is the following command; you must run it from the directory containing manage.py.
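Assuming the helloapp project created above:

gunicorn helloapp.wsgi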

This will run your Django project on port 8000 locally.

Configuration

Now let’s see how to configure the Django app using a Gunicorn configuration file. A simple Gunicorn configuration with the sync worker class will look like this.
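The configuration file itself is not included in this copy; a sketch of a gunicorn.conf.py (paths and log locations are assumptions) could look like this:

# gunicorn.conf.py
import multiprocessing
import sys

BASE_DIR = "/path/to/helloapp"                      # base directory of the project (assumed)
sys.path.append(BASE_DIR)                           # 1. append the base directory to the system path

bind = "127.0.0.1:8000"                             # 2. address (or socket) to bind the application to
backlog = 2048                                      # 3. maximum number of pending connections
workers = multiprocessing.cpu_count() * 2 + 1       # 4. number of workers, based on CPU count
worker_class = "sync"                               # 5. default worker class
errorlog = "/var/log/gunicorn/error.log"
loglevel = "info"

You can then start the app with gunicorn -c gunicorn.conf.py helloapp.wsgi.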

Let us see a few important details in the above configuration file.

  1. Append the base directory path to your system’s path.
  2. You can bind the application to an address or socket using bind.
  3. backlog: the maximum number of pending connections.
  4. workers: the number of worker processes that handle requests. This is based on your machine’s CPU count and can be varied based on your application’s workload.
  5. worker_class: there are different types of worker classes; you can refer here for the available options. sync is the default and should handle normal types of load.

You can refer more about the available Gunicorn settings here.

Running Django with Gunicorn as a daemon process

Here is the sample systemd file,
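The unit file is not reproduced in this copy; a sketch (the unit name, user, and paths are assumptions) might look like this:

[Unit]
Description=Gunicorn daemon for the helloapp Django project
After=network.target

[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/path/to/helloapp
ExecStart=/path/to/virtualenv/bin/gunicorn -c /path/to/helloapp/gunicorn.conf.py helloapp.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target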

After adding the file to /etc/systemd/system/, execute the following command to reload the new changes.
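That command is:

sudo systemctl daemon-reload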

Note: make sure to install the required packages; Gunicorn fails to start if any packages are missing. You can find more information in the error logfile mentioned in the configuration file.

Start, Stop and Status of Application using systemctl

Now you can simply execute the following commands to start, stop, and check the status of your application.
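The original commands are not shown here; assuming the unit file above was saved as helloapp.service, they would be:

sudo systemctl start helloapp     # start the application
sudo systemctl stop helloapp      # stop the application
sudo systemctl status helloapp    # check the status of the application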

Please refer to the short video tutorial below on configuring the Django app.

Configure Celery with SQS and Django on Elastic Beanstalk

 Introduction

Have your users complained about pages loading slowly on the web app you developed? That might be because of a long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website, we need to send a confirmation email; in the normal case the email is sent and only then is the 200 OK response returned for the signup POST. However, we could send the email later, after sending the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The very first thought that comes to mind is the Python threading module. Well, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL), a Python process only runs one thread at a time. Besides, threads are hard to manage, and threaded code is hard to maintain and scale.

Prerequisites

This post assumes working knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery is here to rescue us. It can help when you have a time-consuming task (a heavy compute or I/O-bound task) inside the request-response cycle. Celery is an open source asynchronous task queue or job queue based on distributed message passing. In this post I will walk you through the Celery setup procedure with Django and SQS on Elastic Beanstalk.

Why Celery ?   

Celery is very easy to integrate with an existing code base: just write a decorator above a function’s definition to declare it a Celery task, and then call that function through its .delay method.
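For example, a sketch of a task and an asynchronous call (the task name and body are made up):

from celery import shared_task

@shared_task
def send_confirmation_email(user_id):
    # ... look up the user and send the email here ...
    return user_id

# in a view, instead of calling the function directly:
send_confirmation_email.delay(42)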

Broker

To work with Celery, we need a message broker. As of writing this blog, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info check here. One of the reasons to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new SQS queue by clicking on the Create New Queue button.

Depending upon the requirement we can select either type of queue. We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described here. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install celery.

pip install celery[sqs]

Configuration

Celery has built-in support for Django. It will pick up its settings from Django’s settings.py, from the parameters prefixed with CELERY_ (the ‘CELERY’ word needs to be passed as the namespace while initializing the Celery app). So put the setting parameters below in settings.py.
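The exact settings are not shown in this copy; a sketch of what the SQS-related settings could look like (the region and queue prefix are assumptions; the prefix plus the default queue name resolves to the dev-celery queue created above):

CELERY_BROKER_URL = 'sqs://'                # use SQS; credentials are taken from the environment
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',                  # assumed AWS region
    'queue_name_prefix': 'dev-',            # prefix + default queue name => dev-celery
}
CELERY_TASK_DEFAULT_QUEUE = 'celery'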

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Now let’s configure the Celery app within the Django code. Create a celery.py file beside Django’s settings.py.
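A sketch following Celery’s documented Django layout (the project name myproject is an assumption):

# celery.py (next to settings.py)
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()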

Now put the code below in the project’s __init__.py.
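Again following Celery’s documented pattern:

# __init__.py -- make sure the celery app is loaded when Django starts
from .celery import app as celery_app

__all__ = ('celery_app',)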

Testing

Now let’s test the configuration. Open a terminal and start Celery.

Terminal 1
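Assuming the myproject name used above:

celery -A myproject worker --loglevel=info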

 

All the tasks which are registered with Celery using its decorators appear here when Celery starts. If you find that your task does not appear here, make sure the module containing the task is imported on startup.

Now open the Django shell in another terminal.

Terminal 2
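For example, calling the sketched task from the Django shell (the module path myapp.tasks is an assumption):

python manage.py shell
>>> from myapp.tasks import send_confirmation_email
>>> send_confirmation_email.delay(42)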

After executing the task function with the delay method, the task should run in the worker process that is listening for events in the other terminal. Here Celery sent a message to SQS with the details of the task, and the worker process that was listening to SQS received it and executed the task. Below is what you should see in terminal 1.

Terminal 1

Deploy celery worker process on AWS elastic beanstalk

Celery provides the “multi” subcommand to run processes in daemon mode, but this is not recommended for production. Celery recommends various daemonization tools: http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

AWS Elastic Beanstalk already uses supervisord for managing the web server process. Celery can also be configured using supervisord. Celery’s official documentation has a nice example of a supervisord config for Celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the Celery configuration, which is copied to the /opt/python/etc/celery.conf file, and supervisord is restarted. Here is the main celery command:
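The exact command is not shown in this copy; with the assumed project name it would be along the lines of (including the workaround discussed next):

celery worker -A myproject --loglevel=INFO -P solo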

At the time of writing this blog, Celery had the issue https://github.com/celery/celery/issues/3759. As a workaround we add “-P solo”. This will run tasks sequentially in a single worker process.

Now create the Elastic Beanstalk configuration file as below. Make sure you have pycurl and celery in requirements.txt. To install pycurl, libcurl-devel needs to be installed from the yum package manager.

Add these files to git and deploy to elastic beanstalk.

Below is a figure describing the architecture with Django, Celery, and Elastic Beanstalk.

What is a DBF file? How to read it in Linux and Python?

What is a DBF file?

A DBF file is a standard database file used by dBASE, a database management system application. It organises data into multiple records with fields stored in an array data type. DBF files are also compatible with other “xBase” database programs, which became an important feature because of the file format’s popularity.

Tools which can read or open DBF files

Below is a list of programs which can read and open DBF files.

  • Windows
    1. dBase
    2. Microsoft Access
    3. Microsoft Excel
    4. Visual Foxpro
    5. Apache OpenOffice
    6. dbfview
    7. dbf Viewer Plus
  • Linux
    1. Apache OpenOffice
    2. GTK DBF Editor

How to read a DBF file in Linux?

The “dbview” command, available on Linux, can read DBF files.

The snippet below shows how to use the dbview command.
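For example, assuming a file named test.dbf:

dbview test.dbf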

How to read it using Python?

“dbfread” is a library available in Python to read DBF files. This library reads DBF files and returns the data as native Python data types for further processing.

dbfread requires Python 3.2 or 2.7. dbfread is a pure Python module, so it doesn’t depend on any packages outside the standard library.

You can install the library with the command below.
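For example:

pip install dbfread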

The code snippet below reads a DBF file and retrieves the data as Python dictionaries.
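A minimal sketch, assuming a file named people.dbf:

from dbfread import DBF

for record in DBF('people.dbf'):
    print(record)     # each record is returned as a dictionary-like object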

You can also use the with statement:

By default the records are streamed directly from the file. If you have enough memory, you can load them into a list instead. This allows random access.
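For example:

table = DBF('people.dbf', load=True)
print(table.records[1])     # records are now in a list, so they can be accessed by position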

How to write content to a DBF file using Python?

dbfpy is a Python-only module for reading and writing DBF files. dbfpy can read and write simple DBF files.

You can install it using the command below.
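For example:

pip install dbfpy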

The example below shows how to create a DBF file and write records into it.
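A rough sketch based on dbfpy’s documented usage (the file and field names are made up):

from dbfpy import dbf

db = dbf.Dbf('test.dbf', new=True)
db.addField(
    ('NAME', 'C', 15),    # character field, width 15
    ('AGE', 'N', 3),      # numeric field, width 3
)
rec = db.newRecord()
rec['NAME'] = 'John'
rec['AGE'] = 30
rec.store()
db.close()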

You can also update a record in a DBF file using the dbfpy module.

The below example shows how to update a record in a .dbf file.
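A rough sketch of updating the first record of an existing file with dbfpy (details may vary across versions):

from dbfpy import dbf

db = dbf.Dbf('test.dbf')
rec = db[0]            # first record
rec['AGE'] = 31
rec.store()
db.close()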

 

What is milter?

Everyone gets tons of email these days. This includes emails about super duper offers from Amazon, and princes and wealthy businessmen trying to offer their money to you from some African country that you have never heard of. Among all these emails in your inbox lie one or two valuable emails, either from your friends, bank alerts, or work related stuff. Spam is a problem that email service providers have been battling for ages. There are a few open source spam fighting tools available, like SpamAssassin or SpamBayes.

What is milter ?

Simply put – milter is mail filtering technology, designed by the sendmail project and now available in other MTAs also. People historically used all kinds of solutions for filtering mail on servers, using procmail or MTA-specific methods. The current scene seems to be moving towards sieve. But there is a huge difference between milter and sieve. Sieve comes into the picture when the mail has already been accepted by the MTA and handed over to the MDA. On the other hand, milter springs into action in the mail receiving part of the MTA. When a new connection is made by a remote server to your MTA, your MTA gives you an opportunity to accept or reject the mail at every step of the way: the new connection, the reception of each header, and the reception of the body.

Figure: the various stages of the milter protocol

The above picture depicts a simplified version of how the milter protocol works. Full details of the milter protocol can be found here: https://github.com/avar/sendmail-pmilter/blob/master/doc/milter-protocol.txt. And it is not only filtering; using milter, you can also modify the message or change headers.

HOW DO I GET STARTED WITH CODING A MILTER PROGRAM?

If you want to get started in C you can use libmilter. For Python you have a couple of options:

  1. pymilter –  https://pythonhosted.org/milter/
  2. txmilter – https://github.com/flaviogrossi/txmilter

Postfix supports the milter protocol. You can find everything related to Postfix’s milter support here – http://www.postfix.org/MILTER_README.html
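As an illustration, a minimal pymilter-based filter (the class name, socket address, and timeout are assumptions) might look like this:

import Milter

class LoggingMilter(Milter.Base):
    def envfrom(self, mailfrom, *params):
        # called when the remote server sends MAIL FROM
        self.mail_from = mailfrom
        return Milter.CONTINUE

    def eom(self):
        # called at end-of-message; accept the mail
        return Milter.ACCEPT

def main():
    Milter.factory = LoggingMilter
    # socket the MTA connects to, and a timeout in seconds
    Milter.runmilter('loggingmilter', 'inet:8894@127.0.0.1', 240)

if __name__ == '__main__':
    main()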

WHY NOT SIEVE WHY MILTER ?

I found sieve to be rather limited. It doesn’t offer many options for implementing complex logic; it was purposefully made like that. Also, sieve starts at the end of the mail reception process, after the mail has already been accepted by the MTA.

Coding a milter program in your favorite programming language gives you full power and allows you to implement complex, creative stuff.

WATCHOUT!!!

When writing milter programs, take proper care to return a reply to the MTA quickly. Don’t do long running tasks in the milter program while the MTA is waiting for a reply. This has crazy side effects, like remote parties submitting the same mail multiple times and filling up your inbox.

InlineCallbacks

Twisted features a decorator named inlineCallbacks which allows you to work with deferreds without writing callback functions.

This is done by writing your code as generators, which yield deferreds instead of attaching callbacks.

Consider the following function written in the traditional deferred style:
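The original listing isn’t included in this copy; a sketch in the traditional callback style, assuming a client that exposes redis.Connection() (e.g. txredisapi imported as redis, matching the call mentioned below):

import txredisapi as redis     # assumed client exposing redis.Connection()

def get_value(key):
    d = redis.Connection()

    def connected(connection):
        # once connected, ask redis for the key; this returns another deferred
        return connection.get(key)

    d.addCallback(connected)
    return d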

Using inlineCallbacks, we can write this as:

from twisted.internet.defer import inlineCallbacks
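# (the rest of the listing is not shown in this copy; a sketch assuming the same
#  redis.Connection-style client as in the previous example)
from twisted.internet.defer import returnValue
import txredisapi as redis

@inlineCallbacks
def get_value(key):
    connection = yield redis.Connection()   # yield the deferred instead of adding a callback
    value = yield connection.get(key)
    returnValue(value)                       # hand the result back to the caller's deferred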

Instead of calling addCallback on the deferred returned by redis.Connection, we yield it. This causes Twisted to return the deferred‘s result to us.

Though the inlineCallbacks version looks like synchronous code that blocks while waiting for the request to finish, each yield statement allows other code to run while waiting for the deferred being yielded to fire.

inlineCallbacks become even more powerful when dealing with complex control flow and error handling.

txRedis

txredis is a non-blocking client for the Redis database, written in Python. It uses Twisted for asynchronous communication with Redis.

Install
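For example:

pip install txredis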

Now, to check whether txredis is properly installed, go to the Python prompt and import txredis:

Example :

Using redis-cli, add a hash of users to the Redis server with HMSET:
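For example (the hash name and fields are made up; the field abc with value 123 matches the lookup later in this post):

127.0.0.1:6379> HMSET users abc 123 xyz 456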

HMSET sets each field in the hash name to its corresponding value from the given mapping.

Now, fetch a user using Python Twisted:
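The listing is not included in this copy; a sketch reconstructed from the description below (the callback names follow the ones mentioned in the text; the host, port, hash name, and import path are assumptions and may differ between txredis versions):

from twisted.internet import reactor, protocol
from txredis.client import RedisClient

HOST = '127.0.0.1'
PORT = 6379

def onSuccess(value):
    print('value:', value)
    reactor.stop()

def onFailure(failure):
    print('error:', failure)
    reactor.stop()

def onRedisConnect(redis):
    # fetch the field 'abc' from the hash 'users'; hget returns a deferred
    d = redis.hget('users', 'abc')
    d.addCallback(onSuccess)
    d.addErrback(onFailure)
    return d

def errRedisConnect(failure):
    print('connection failed:', failure)
    reactor.stop()

rediscreator = protocol.ClientCreator(reactor, RedisClient)
d = rediscreator.connectTCP(HOST, PORT)
d.addCallback(onRedisConnect)
d.addErrback(errRedisConnect)
reactor.run()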

The above code makes a connection to the Redis server.

Here, rediscreator.connectTCP returns a deferred, and we register the appropriate methods with addCallback and addErrback.

Here we define the callback function onRedisConnect and the errback errRedisConnect.

Here, hget returns a deferred that fires with the value of the field within the hash name.

Here we define the callback functions onSuccess and onFailure and log the appropriate result.

It gives the output 123, which corresponds to the user abc.