Configure Celery with SQS and Django on Elastic Beanstalk

 Introduction

Have your users complained about slow page loads on the web app you developed? That might be because of a long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website we need to send a confirmation email; in the normal case the email is sent and only then is a 200 OK response returned for the signup POST. However, we could send the email later, after returning the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The first thought that comes to mind is Python's threading module. However, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL) a Python process runs only one thread at a time. Threads are also hard to manage, maintain and scale.

Prerequisites

This post assumes you have working knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery comes to the rescue. It helps when you have a time-consuming task (heavy compute or I/O bound) inside the request-response cycle. Celery is an open source asynchronous task queue (job queue) based on distributed message passing. In this post I will walk you through the Celery setup procedure with Django and SQS on Elastic Beanstalk.

Why Celery ?   

Celery is very easy to integrate into an existing code base: write a decorator above a function's definition to declare it a Celery task, then call that function through its .delay method.
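For instance, a minimal sketch (the app, task and argument names are illustrative, not from the original post):

# myapp/tasks.py
from celery import shared_task

@shared_task
def send_confirmation_email(user_id):
    # ... look up the user and send the email here ...
    pass

# in a view: enqueue the work and return the response immediately
# send_confirmation_email.delay(user.id)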

Broker

To work with Celery, we need a message broker. As of writing this blog, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info check here. One reason to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new SQS queue by clicking the Create New Queue button.

Depending upon the requirement we can select either type of queue (Standard or FIFO). We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described here. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install celery.

pip install celery[sqs]

Configuration

Celery has built-in support for Django. It picks up its settings from Django's settings.py; the parameters are prefixed with CELERY_ (the 'CELERY' word is defined as the namespace while initializing the Celery app). So put the setting parameters below in settings.py.

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
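A minimal sketch of those settings, assuming the dev-celery queue created above; the region and prefix values are illustrative:

# settings.py
CELERY_BROKER_URL = 'sqs://'          # credentials are taken from the environment
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
    'queue_name_prefix': 'dev-',      # together with the default queue name
}                                     # this maps to the dev-celery queue
CELERY_TASK_DEFAULT_QUEUE = 'celery'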

Now let's configure the Celery app within the Django code. Create a celery.py file beside Django's settings.py.
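The standard layout from Celery's Django guide looks like this (the project name myproject is a placeholder):

# myproject/celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()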

Now put the code below in the project's __init__.py.
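This is the usual snippet that makes sure the Celery app is loaded when Django starts:

# myproject/__init__.py
from .celery import app as celery_app

__all__ = ('celery_app',)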

Testing

Now let's test the configuration. Open a terminal and start Celery.

Terminal 1
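A typical worker invocation would be (myproject is the placeholder project name used above):

celery worker -A myproject --loglevel=INFO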

 

All the tasks that are registered with Celery through its decorators appear here when Celery starts. If your task does not appear, make sure the module containing the task is imported on startup.

Now open the Django shell in another terminal.

Terminal 2
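Something along these lines, using the illustrative task from the sketch above:

python manage.py shell
>>> from myapp.tasks import send_confirmation_email
>>> send_confirmation_email.delay(42)
<AsyncResult: ...>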

After executing the task function with the delay method, the task should run in the worker process that is listening for events in the other terminal. Here Celery sent a message to SQS with the details of the task; the worker process, which was listening to SQS, received it, and the task was executed in the worker process. Below is what you should see in terminal 1.

Terminal 1

Deploy celery worker process on AWS elastic beanstalk

Celery provides the multi sub-command to run workers in daemon mode, but this should not be used in production. Celery instead recommends various daemonization tools: http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

AWS Elastic Beanstalk already uses supervisord for managing the web server process, and Celery can also be managed with supervisord. Celery's official repository has a nice example of a supervisord config for Celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the Celery configuration, which is copied to /opt/python/etc/celery.conf, and then supervisord is restarted. Here is the main celery command:

At the time of writing this blog, Celery had the issue https://github.com/celery/celery/issues/3759. As a workaround we add "-P solo". This runs tasks sequentially in a single worker process.
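So the command that ends up in the supervisord program entry would look roughly like this (project name again a placeholder):

celery worker -A myproject --loglevel=INFO -P solo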

Now create the Elastic Beanstalk configuration file as below. Make sure you have pycurl and celery in requirements.txt. To install pycurl, libcurl-devel needs to be installed from the yum package manager.
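A hedged sketch of such a config; the file name and the celery.sh hook are assumptions, not the post's exact files:

# .ebextensions/99-celery.config
packages:
  yum:
    libcurl-devel: []

container_commands:
  01_start_celery_worker:
    command: "bash .ebextensions/celery.sh"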

Add these files to git and deploy to elastic beanstalk.

Below is the figure describing the architecture with django, celery and elastic beanstalk.

FCM – send push notifications using Python

What is FCM ?

FCM – Firebase Cloud Messaging is a cross-platform (Android, iOS and Chrome) messaging solution that lets you reliably deliver messages at no cost. FCM is best suited if you want to send push notifications to an app which you built to run on Android and iOS. The advantage is that you don't have to deal separately with GCM (Google Cloud Messaging, now deprecated) and Apple's APNS. You hand over your notification message to FCM, and FCM takes care of communicating with Apple's APNS and Android messaging servers to reliably deliver those messages.


Using FCM we can send a message to a single device or to multiple devices. There are two different types of messages, notification and data. Notification messages include JSON keys that are understood and interpreted by the phone's operating system. If you want to include customized, app-specific JSON keys, use a data message. You can combine both notification and data JSON objects in a single message. You can also send messages with different priorities.

Note: You need to set priority to high if you want the phone to wake up and show the notification on the screen.

Sending message with Python

We can use PyFCM to send messages via FCM. PyFCM is good for synchronous (blocking) Python. We will discuss a non-blocking option in the next section.

Install PyFCM using following command
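The package is on PyPI:

pip install pyfcm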

The following code will send a push notification to a single device.
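A minimal sketch using PyFCM's notify_single_device call; the server key and registration token are placeholders:

from pyfcm import FCMNotification

push_service = FCMNotification(api_key="<fcm-server-key>")

result = push_service.notify_single_device(
    registration_id="<device-registration-token>",
    message_title="Hello",
    message_body="Push notification sent via PyFCM",
)
print(result)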

So, the PyFCM API is pretty straightforward to use.

Sending FCM push notification using Twisted

PyFCM, discussed above, is good enough if you want to send messages in a blocking fashion. If you have to send a high number of concurrent messages, then using Twisted is a good option.

Twisted Matrix

Network operations performed using the Twisted library don't block, so it is a good choice when a program requires network concurrency. We can use the txFCM library to send FCM messages using Twisted.

Install txFCM using the following command.
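Assuming the library is published on PyPI under the name txfcm (check the project page linked above if the name differs):

pip install txfcm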

The following code sends an FCM message using txFCM.

txFCM is built on top of PyFCM, so all the API calls that are available in PyFCM are also available in txFCM.

Installing ELK Stack (Elasticsearch, Logstash, Kibana) on CentOS with Sentinl plugin

The ELK stack, also known as the Elastic Stack, consists of Elasticsearch, Logstash, and Kibana. It helps you keep all of your logs in one place and analyze issues by correlating events at a particular time.

This guide helps you to install ELK stack on CentOS 7 / RHEL 7.

Components

Logstash – Does the processing (collect, enrich and send to Elasticsearch) of incoming logs sent by Beats (the forwarder).

Elasticsearch – Stores incoming logs from Logstash and provides the ability to search the logs/data in real time.

Kibana – Provides visualization of logs.

Sentinl – Sentinl extends Siren Investigate and Kibana with alerting and reporting functionality to monitor, notify and report on data series changes using standard queries, programmable validators and a variety of configurable actions. Think of it as a free and independent "Watcher" which also has scheduled "Reporting" capabilities (PNG/PDF snapshots).

Sentinl is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native app interface, or by using native watcher tools in Kibana 6.x+.

 

Beats – Installed on client machines; sends logs to Logstash through the Beats protocol.

Environment

To have a full-featured ELK stack, we would need two machines to test the collection of logs.

ELK Stack – server.lintel.local

Filebeat – client.lintel.local

Prerequisites

Install Java

Since Elasticsearch is based on Java, make sure you have either OpenJDK or Oracle JDK installed on your machine.

Here, I am using OpenJDK 1.8.

Verify the Java version.
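The exact commands are not shown in this copy; on CentOS 7 a typical sequence would be:

sudo yum install -y java-1.8.0-openjdk
java -version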

Output:

Configure ELK repository

Import the Elastic signing key.
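The command is not shown here; the standard key import is:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch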

Setup the Elasticsearch repository and install it.

Add the below content to the elk.repo file.
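A hedged example for the 6.x packages (6.x is assumed from the Kibana 6.x references earlier in this guide):

# /etc/yum.repos.d/elk.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md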

Install Elasticsearch

Elasticsearch is an open source search engine that offers real-time distributed search and analytics with a RESTful web interface. Elasticsearch stores all the data sent by Logstash and serves it through the web interface (Kibana) on the user's request.

Install Elasticsearch.

Configure Elasticsearch to start during system startup.

Use CURL to check whether the Elasticsearch is responding to the queries or not.
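The individual commands are not included above; a typical run looks like this:

sudo yum install -y elasticsearch
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

curl -X GET http://localhost:9200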

Output:

Install Logstash

Logstash is an open source tool for managing events and logs; it collects logs, parses them and stores them in Elasticsearch for searching. Over 160 plugins are available for Logstash, which provide the capability of processing different types of events with no extra work.

Install the Logstash package.
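For example:

sudo yum install -y logstash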

Create SSL certificate (Optional)

Filebeat (the Logstash forwarder) is normally installed on client servers, and it uses an SSL certificate to validate the identity of the Logstash server for secure communication.

Create SSL certificate either with the hostname or IP SAN.

(Hostname FQDN)

If you use the Logstash server hostname in the beats (forwarder) configuration, make sure you have A record for Logstash server and also ensure that client machine can resolve the hostname of the Logstash server.

Go to the OpenSSL directory.

Now, create the SSL certificate. Replace the highlighted hostname with the hostname of your real Logstash server.
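A hedged example, assuming the conventional CentOS OpenSSL directory /etc/pki/tls and the server.lintel.local hostname used later in this guide:

cd /etc/pki/tls
sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout private/logstash-forwarder.key \
    -out certs/logstash-forwarder.crt \
    -subj /CN=server.lintel.local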

Configure Logstash

Logstash configuration can be found in /etc/logstash/conf.d/. A Logstash configuration file consists of three sections: input, filter, and output. All three sections can be placed either in a single file or in separate files ending with .conf.

I recommend using a single file for the input, filter and output sections.

In the first section, we will put an entry for input configuration. The following configuration sets Logstash to listen on port 5044 for incoming logs from the beats (forwarder) that sit on client machines.

Also, add the SSL certificate details in the input section for secure communication – Optional.

In the filter section we will use grok to parse the logs before sending them to Elasticsearch. The following grok filter will look for the syslog-style logs and try to parse them into a structured index.

For more filter patterns, take a look at grokdebugger page.

In the output section, we will define the location where the logs to get stored; obviously, it should be Elasticsearch.
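Putting the three sections together, a hedged example of /etc/logstash/conf.d/logstash.conf (the SSL lines are only needed if you created the certificate above; wrap the grok block in a conditional if you tag syslog events separately):

# /etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}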

Now start and enable the Logstash service.

You can troubleshoot any issues by looking at Logstash logs.
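For example:

sudo systemctl start logstash
sudo systemctl enable logstash

sudo tail -f /var/log/logstash/logstash-plain.log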

Install & Configure Kibana

Kibana provides visualization of logs stored on the Elasticsearch. Install the Kibana using the following command.

Edit the kibana.yml file.

By default, Kibana listens on localhost, which means you cannot access the Kibana interface from external machines. To allow it, edit the line below with your machine's IP.

Uncomment the following line and update it with the Elasticsearch instance URL. In my case, it is localhost.

Start and enable kibana on system startup.
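Summarizing the Kibana steps above as a hedged sketch (the elasticsearch.url key applies to Kibana 6.x):

sudo yum install -y kibana

# in /etc/kibana/kibana.yml:
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"

sudo systemctl enable kibana
sudo systemctl start kibana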

Install Sentinl plugin:
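The command is not shown here; Sentinl ships as a Kibana plugin zip, so the install would look like the following, with the release URL matching your Kibana version taken from the Sentinl releases page linked in the references:

sudo /usr/share/kibana/bin/kibana-plugin install <sentinl-release-zip-url>
sudo systemctl restart kibana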

Install and Configure Filebeat

There are four beats clients available

  1. Packetbeat – Analyze network packet data.
  2. Filebeat – Real-time insight into log data.
  3. Topbeat – Get insights from infrastructure data.
  4. Metricbeat – Ship metrics to Elasticsearch.

To analyze the system logs of the client machine (e.g. client.lintel.local), we need to install Filebeat. Create a beats.repo file.

Add the below content to the above repo file.

Now, install Filebeat using the following command.
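A hedged example, reusing the same Elastic 6.x yum repository for the Beats packages:

# /etc/yum.repos.d/beats.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

sudo yum install -y filebeat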

Set up a host entry on the client machine in case your environment does not have a DNS server.

Make a host entry like the one below on the client machine.
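For example, in /etc/hosts (the IP address is a placeholder for your ELK server's address):

192.168.1.10   server.lintel.local server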

Filebeat (beats) uses SSL certificate for validating Logstash server identity, so copy the logstash-forwarder.crt from the Logstash server to the client.

Skip this step, in case you are not using SSL in Logstash.

Filebeat configuration file is in YAML format, which means indentation is very important. Make sure you use the same number of spaces used in the guide.

Open up the filebeat configuration file.

At the top you will see the prospectors section. Here you specify which logs should be sent to Logstash and how they should be handled. Each prospector starts with a "-" character.

For testing purposes, we will configure Filebeat to send /var/log/messages to the Logstash server. To do that, modify the existing prospector under the paths section.

Comment out the – /var/log/*.log to avoid sending all .log files present in that directory to Logstash.

Comment out the section output.elasticsearch: as we are not going to store logs directly to Elasticsearch.

Now, find the line output.logstash and modify the entries like below. This section configures Filebeat to send logs to the Logstash server server.lintel.local on port 5044, and mentions the path where the copied SSL certificate is placed.

Replace server.lintel.local with IP address in case if you are using IP SAN.
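Pulling those edits together, a hedged sketch of the relevant parts of /etc/filebeat/filebeat.yml (the certificate path is wherever you copied logstash-forwarder.crt on the client):

# /etc/filebeat/filebeat.yml (relevant parts)
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    #- /var/log/*.log

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["server.lintel.local:5044"]
  ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]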

Restart the service.
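That is:

sudo systemctl restart filebeat
sudo systemctl enable filebeat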

Beats logs are typically found in the syslog file.

Access Kibana

Access the Kibana using the following URL.

http://your-ip-address:5601/

You would get the Kibana’s home page.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Kibana Starting Page

On your first login, you have to map the filebeat index. Go to Management >> Index Patterns.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Management

Type the following in the Index pattern box.
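With the Logstash output sketched above, the index pattern would be:

filebeat-*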

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Create Index Pattern

You should see at least one filebeat index something like above. Click Next step.

Select @timestamp and then click on Create.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Configure Timestamp

Verify your index patterns and its mappings.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Index Mappings

Now, click Discover to view the incoming logs and perform search queries.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Discover Logs

You can see the Sentinl plugin here.

That’s All.

 

Reference list:

https://github.com/sirensolutions/sentinl

https://www.itzgeek.com

How to Setup Redis Cluster from Source

What is redis

Redis is an open source in-memory database. It stores data in key-value format. Because it resides in memory, Redis is an excellent tool for caching. Redis provides a rich set of data types, which gives it an upper hand over Memcached. Apart from caching, Redis can be used as a distributed message broker.

Redis Cluster and Sentinels

To achieve high availability, Redis can be deployed in a cluster along with sentinels. Sentinel is a feature of Redis. Multiple sentinels are deployed across the Redis cluster for monitoring purposes. When the Redis master goes down, the sentinels elect a new master from the slaves. When the old master comes up again, it is added as a slave.

Another use case of clustering is distribution of load. In a high-load environment, we can send write requests to the master and read requests to the slaves.

This tutorial is specifically focused on the Redis cluster master-slave model. We will not cover data sharding across the cluster here. With data sharding, keys are distributed across multiple Redis nodes.

Setup for tutorial

For this tutorial, we will use 3 (virtual) servers. The Redis master will reside on one server, while the other two servers will be used for slaves. The standard Redis port is 6379; to differentiate easily, we will run the master on port 6379 and the slaves on ports 6380 and 6381. The same applies to the sentinel services: the master's sentinel will listen on port 16379, while the slave sentinels will be on 16380 and 16381.

Let's put this in a simple way:

Server 1 – Redis master (6379), sentinel (16379)
Server 2 – Redis slave 1 (6380), sentinel (16380)
Server 3 – Redis slave 2 (6381), sentinel (16381)

This tutorial is tested on CentOS 6.9. For CentOS 7.X, check below Notes Section.

Installation

We will follow the same installation steps for setting up all servers; the only difference will be in the configurations.

  • Step 1: Grab redis source, make and install
  • Step 2: Setup required directories
  • Step 3: Configure redis master
  • Step 4: Configure redis master sentinel
  • Step 5: Add low privileged user to run redis
  • Step 6: Setup init scripts
  • Step 7: Start service

Server 1 (Redis Master)


Install Redis
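The exact commands are not included in this copy; a typical source build is:

yum install -y gcc make tcl wget
wget http://download.redis.io/redis-stable.tar.gz
tar -xzf redis-stable.tar.gz
cd redis-stable
make
make install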

Setup required directories
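For example, matching the paths used in the configuration sketches below (these locations are assumptions, adjust to taste):

mkdir -p /etc/redis /var/lib/redis/6379 /var/log/redis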

Configure redis master

Edit config file /etc/redis/6379.conf in your favorite editor and change below options.
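A hedged example of the relevant directives (the private IP is a placeholder; never bind Redis to a public interface, as the security notes below stress):

# /etc/redis/6379.conf
bind <server-1-private-ip>
port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
logfile /var/log/redis/redis_6379.log
dir /var/lib/redis/6379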

Configure redis master sentinel

Add config file for sentinel at /etc/redis/sentinel_6379.conf. Open a file and add below content
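A hedged example of that sentinel config; the master name redis-cluster and the timeouts are illustrative, and the quorum of 2 assumes the three sentinels described in this tutorial:

# /etc/redis/sentinel_6379.conf
port 16379
daemonize yes
pidfile /var/run/redis_sentinel_6379.pid
logfile /var/log/redis/sentinel_6379.log
sentinel monitor redis-cluster <server-1-private-ip> 6379 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel failover-timeout redis-cluster 10000
sentinel parallel-syncs redis-cluster 1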

Add non-privileged user
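For example:

useradd -r -s /sbin/nologin redis
chown -R redis:redis /etc/redis /var/lib/redis /var/log/redis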

Setup init scripts

You can find sample init scripts in Notes section below.

Start service

Server 2 (Redis Slave 1)


Install Redis

Setup required directories

Configure redis slave 1

Edit config file /etc/redis/6380.conf in your favorite editor and change below options.
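The slave's config mirrors the master's, with its own port and a slaveof directive pointing at the master (the sentinel config likewise mirrors the master's, with port 16380); a hedged example:

# /etc/redis/6380.conf
bind <server-2-private-ip>
port 6380
daemonize yes
pidfile /var/run/redis_6380.pid
logfile /var/log/redis/redis_6380.log
dir /var/lib/redis/6380
slaveof <server-1-private-ip> 6379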

Configure redis slave 1 sentinel

Add config file for sentinel at /etc/redis/sentinel_6380.conf. Open a file and add below content

Add non-privileged user

Setup init scripts

You can find sample init scripts in Notes section below. Change $HOST and $PORT values accordingly

Start service

Server 3 (Redis Slave 2)


Install Redis

Setup required directories

Configure redis slave 2

Edit config file /etc/redis/6381.conf in your favorite editor and change below options.

Configure redis slave 2 sentinel

Add config file for sentinel at /etc/redis/sentinel_6381.conf. Open a file and add below content

Add non-privileged user

Setup init scripts

You can find sample init scripts in Notes section below. Change $HOST and $PORT values accordingly

Start service

Sentinel Testing
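The original commands are not shown; you can ask any sentinel about the state of the cluster, for example:

redis-cli -p 16379 sentinel master redis-cluster
redis-cli -p 16379 sentinel slaves redis-cluster
redis-cli -p 16379 sentinel sentinels redis-cluster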

Redis Fail-over Testing

For fail-over testing, we can take down the Redis master either using the init script or with the first command below.

We can also force a sentinel to run a fail-over using the second command.
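Hedged examples of both, using the illustrative master name from above (debug segfault deliberately crashes the master, as the linked redis.io page describes):

redis-cli -p 6379 debug segfault

redis-cli -p 16380 sentinel failover redis-cluster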

Sample init scripts

Redis Init Script

Sentinel Init Script

Notes

Security

  • NEVER EVER run redis on public interface
  • If Redis is deployed in a cloud environment like AWS, set up security groups/firewalls carefully. Most of the time cloud providers use ephemeral IPs; because of this, even when Redis is bound to a private IP, it may still be reachable over the public interface.
  • For more security, dangerous commands can be disabled(renamed). But be careful with them in cluster environment.
  • Redis also provides simple authentication mechanism. It is not covered here because of scope.

Sentinel management

  • During a Redis fail-over, the config files are rewritten by the sentinel program, so be careful when restarting the Redis cluster.

Sources

  • https://redis.io/topics/cluster-tutorial
  • https://redis.io/topics/security
  • https://redis.io/commands/debug-segfault

What is a DBF file? How to read it in Linux and Python?

What is a DBF file?

A DBF file is a standard database file used by dBASE, a database management system application. It organises data into multiple records with fields stored in an array data type. DBF files are also compatible with other “xBase” database programs, which became an important feature because of the file format’s popularity.

Tools which can read or open DBF files

Below is a list of programs which can read and open DBF files.

  • Windows
    1. dBase
    2. Microsoft Access
    3. Microsoft Excel
    4. Visual Foxpro
    5. Apache OpenOffice
    6. dbfview
    7. dbf Viewer Plus
  • Linux
    1. Apache OpenOffice
    2. GTK DBF Editor

How to read a DBF file in Linux?

The dbview command, available in Linux, can read DBF files.

The code snippet below shows how to use the dbview command.
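For example (dbview is in the standard Debian/Ubuntu repositories; on CentOS it may need EPEL, and the file name is a placeholder):

sudo apt-get install dbview     # Debian/Ubuntu
dbview sample.dbf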

How to read it using Python?

dbfread is a library available in Python to read DBF files. This library reads DBF files and returns the data as native Python data types for further processing.

dbfread requires Python 3.2 or 2.7. dbfread is a pure Python module, so it doesn't depend on any packages outside the standard library.

You can install the library with the command below.
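That is:

pip install dbfread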

The code snippet below reads a DBF file and retrieves each record as a Python dictionary.
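A minimal sketch (people.dbf is a placeholder file name):

from dbfread import DBF

for record in DBF('people.dbf'):
    # each record behaves like an ordered dict of field name -> value
    print(record)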

You can also use the with statement:

By default the records are streamed directly from the file. If you have enough memory, you can load them into a list instead; this allows random access.
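For example, using dbfread's load option:

table = DBF('people.dbf', load=True)
print(len(table.records))
print(table.records[0])      # random access once loaded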

How to write content to a DBF file using Python?

dbfpy is a Python-only module for reading and writing DBF files. dbfpy can read and write simple DBF files.

You can install it using the command below.
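Note that dbfpy targets Python 2; for Python 3 there is a separate dbfpy3 fork.

pip install dbfpy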

The example below shows how to create a DBF file and write records into it.
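A hedged sketch based on dbfpy's documented usage (the file and field names are illustrative):

from dbfpy import dbf

# create a new DBF with two fields
db = dbf.Dbf('test.dbf', new=True)
db.addField(
    ('NAME', 'C', 15),    # character field, width 15
    ('PRICE', 'N', 6, 2), # numeric field, width 6, 2 decimals
)

# add one record and persist it
rec = db.newRecord()
rec['NAME'] = 'Apple'
rec['PRICE'] = 12.50
rec.store()

db.close()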

You can also update a DBF file record using the dbf module.

The below example shows how to update a record in a .dbf file.

 

What is milter?

Everyone gets tons of email these days, from super-duper offers from Amazon to princesses and wealthy businessmen from some African country you have never heard of trying to offer you their money. Among all these emails in your inbox lie the one or two valuable ones from your friends, bank alerts, or work-related stuff. Spam is a problem that email service providers have been battling for ages. There are a few open source spam-fighting tools available, like SpamAssassin or SpamBayes.

What is milter ?

Simply put, milter is mail filtering technology. It was designed by the sendmail project and is now available in other MTAs as well. People historically used all kinds of solutions for filtering mail on servers, using procmail or MTA-specific methods. The current scene seems to be moving towards sieve. But there is a huge difference between milter and sieve: sieve comes into the picture when the mail has already been accepted by the MTA and handed over to the MDA. Milter, on the other hand, springs into action in the mail-receiving part of the MTA. When a new connection is made by a remote server to your MTA, your MTA gives you an opportunity to accept or reject the mail at every step of the way: the new connection, the reception of each header, and the reception of the body.

milter protocol various stages

The picture above depicts a simplified version of how the milter protocol works. Full details of the milter protocol can be found here: https://github.com/avar/sendmail-pmilter/blob/master/doc/milter-protocol.txt. Milter is not only for filtering; you can also modify the message or change headers with it.

HOW DO I GET STARTED WITH CODING MILTER PROGRAM ?

If you want to get started in C you can use libmilter. For Python you have a couple of options:

  1. pymilter –  https://pythonhosted.org/milter/
  2. txmilter – https://github.com/flaviogrossi/txmilter

Postfix supports milter protocol. You can find every thing related to postfix’s milter support in here – http://www.postfix.org/MILTER_README.html
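As a taste of option 1, here is a minimal hedged sketch of a pymilter-based filter; the socket address, milter name and the subject check are illustrative, and the corresponding Postfix setting would be something like smtpd_milters = inet:127.0.0.1:8894.

import Milter

class SampleMilter(Milter.Base):
    def envfrom(self, mailfrom, *params):
        self.mailfrom = mailfrom
        return Milter.CONTINUE

    def header(self, name, value):
        # reject mails with "spam" in the subject (illustrative rule)
        if name.lower() == 'subject' and 'spam' in value.lower():
            return Milter.REJECT
        return Milter.CONTINUE

    def eom(self):
        return Milter.ACCEPT

def main():
    Milter.factory = SampleMilter
    # listen on a local TCP socket for the MTA to connect to
    Milter.runmilter('samplemilter', 'inet:8894@127.0.0.1', timeout=240)

if __name__ == '__main__':
    main()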

WHY NOT SIEVE WHY MILTER ?

I found sieve to be rather limited. It doesn't offer many options to implement complex logic; it was purposefully made that way. Also, sieve starts at the end of the mail reception process, after the mail has already been accepted by the MTA.

Coding a milter program in your favorite programming language gives you full power and allows you to implement complex, creative stuff.

WATCHOUT!!!

When writing milter programs, take proper care to return a reply to the MTA quickly. Don't do long-running tasks in the milter program while the MTA is waiting for a reply. This has crazy side effects, like remote parties submitting the same mail multiple times and filling up your inbox.

AWS Cognito Configurations

Introduction

Cognito is the AWS solution for managing user profiles, and Federated Identities help keep track of your users across multiple logins. Integrated into the AWS ecosystem, AWS Cognito opens up a world of possibility for advanced front end development as Cognito+IAM roles give you selective secure access to other AWS services.

Go to AWS Cognito on the AWS console to get started!

AWS console

Initial Setup — Cognito

AWS Cognito

We will be setting up AWS Cognito, which is a custom login pool (such as login with email). Cognito IS NOT a login manager for any type of login (such as Facebook and Gmail), only for custom logins.

Let’s first make a user pool by clicking on “Manage your User Pools”. A user pool is a group of users that fulfill the same designation. The setup screen should look like this:

User Pool Name

We’re gonna walk through this process step by step, so enter the Pool name of “App_Users” and click “Step through settings”. The next step is “Attributes”, where we define the attributes that our “App_Users” will have.

User Attributes

For now, we only want to have an email, a password and an "agentName". The email is our unique identifier for a user, and the password is a mandatory field (which is why you don't see it in the list of standard attributes). We want users to be able to have a codename to go by, so let's set up "agentName" as a custom attribute. We are only using "agentName" to show how to add custom attributes. Scroll down and you will see the option to add custom attributes.

Custom Attributes

As of the date this tutorial was written, you cannot go back and change the custom attributes (even though AWS appears to allow it), so be sure to get this right the first time! If you need to change attributes, you will have to create a new user pool. Hopefully AWS fixes this issue soon. Anyway, moving on to account policies!

Account Policies

Here we can see that our passwords can be required to contain certain characters. Obviously requiring a mix of various character types would be more secure, but users often don't like that. As a middle ground, let's just require the password to be 8+ characters in length and include at least 1 number. We also want users to be able to sign themselves up. The other parts are not so important, so let's move on to the next step: verifications.

Account Verifications

This part is cool, we can easily integrate multi-factor authentication (MFA). This means users must sign up with an email as well as another form of authentication such as a phone number. A PIN would be sent to that phone number and the user would use it to verify their account. We won’t be using MFA in this tutorial, just email verification. Set MFA to “off” and check only “Email” as a verification method. We can leave the “AppUsers-SMS-Role” (IAM role) that has been filled in, as we won’t be using it but may use it in the future. Cognito uses that IAM role to be authorized to send SMS text messages used in MFA. Since we’re not using MFA, we can move on to: Message Customizations.

Custom Account Messages


When users receive their account verification emails, we can specify what goes into that email. Here we have made a custom email and programmatically placed the verification PIN, represented as {####}, into it. Unfortunately we can't pass in other variables such as a verification link. To accomplish that, we would have to use a combination of AWS Lambda and AWS SES.

SES (Simple Email Service)

Next, in the SES console, click "Verify a New Address" and enter the email address you would like to verify.

Now login to your email and open the email from AWS. Click the link inside the email to verify, and you will be redirected to the AWS SES page again. You have successfully verified an email! That was easy.

Now that’s done, let’s return back to AWS Cognito and move on to: Tags.

User Pool Tags

It is not mandatory to add tags to a user pool, but it is definitely useful for managing many AWS services. Let’s just add a tag for ‘AppName’ and set it to a value of ‘MyApp’. We can now move on to: Devices.

Devices

We can opt to remember our user’s devices. I usually select “Always” because remembering user devices is both free and requires no coding on our part. The information is useful too, so why not? Next step: Apps.

Apps

We want certain apps to have access to our user pool. These apps are not present anywhere else on the AWS ecosystem, which means when we create an “app”, it is a Cognito-only identifier. Apps are useful because we can have multiple apps accessing the same user pool (imagine an Uber clone app, and a complimentary Driving Test Practice App). We will set the refresh token to 30 days, which means each login attempt will return a refresh token that we can use for authentication instead of logging in every time. We un-click “Generate Client Secret” because we intend to log into our user pool from the front end instead of back end (ergo, we cannot keep secrets on the front end because that is insecure). Click “Create App” and then “Next Step” to move on to: Triggers.

Triggers

We can trigger various actions in the user authentication and setup flow. Remember how we said we can create more complex account verification emails using AWS Lambda and AWS SES? This is where we would set that up. For the scope of this tutorial, we will not be using any AWS Lambda triggers. Let’s move on to the final step: Review.

Review

Here we review all the setup configurations we have made. If you are sure about this info, click “Create Pool” and our Cognito User Pool will be generated!

Take note of the Pool Id us-east-1_6i5p2Fwao in the Pool details tab.

Notice the Pool Id

And the App client id 5jr0qvudipsikhk2n1ltcq684b in the Apps tab. We will need both of these in our client side app.

Notice the App client id

Now that Cognito is set up, we can set up Federated Identities for multiple login providers. In this tutorial we do not cover the specifics of FB Login, as it is not within the scope of this tutorial series. However, integrating FB Login is super easy, and we show how it's done in the section below.

Initial Setup — Federated Identities

AWS Cognito

Next we want to set up "Federated Identities". If we have an app that allows multiple login providers (Amazon Cognito, Facebook, Gmail, etc.) for the same user, we use Federated Identities to centralize all these logins. In this tutorial, we will be using both our Amazon Cognito login and a potential Facebook login. Go to Federated Identities and begin the process to create a new identity pool. Give it an appropriate name.

Create a new identity pool

Now expand the “Authentication providers” section and you will see the below screen. Under Cognito, we are going to add the Cognito User Pool that we just created. Copy and paste the User Pool ID and App Client ID that we made note of earlier.

Authentication providers

And if we wanted Facebook login for the same user identity pool, we can go to the Facebook tab and simply enter our Facebook App ID. That’s all there is to it on the AWS console!

Facebook tab

Save the identity pool and you will be redirected to the below screen where IAM roles are created to represent the Federated Identity Pool. The unauthenticated IAM role is for non-logged in users, and the authenticated version is for logged in users. We can grant these IAM roles permission to access other AWS resources like S3 buckets and such. That is how we achieve greater security by integrating our app throughout the AWS ecosystem. Continue to finish creating this Identity Pool.

IAM roles

You should now see the below screen after successfully creating the identity pool. You now only need to make note of 1 thing which is the Identity Pool ID (i.e. us-east-1:65bd1e7d-546c-4f8c-b1bc-9e3e571cfaa7) which we will use later in our code. Great!

Sample code

Exit everything and go back to the AWS Cognito main screen. If we enter the Cognito section or the Federated Identities section, we see that we have the 2 necessary pools set up. AWS Cognito and AWS Federated Identities are ready to go!

AWS Cognito
AWS Federated Identities

That’s all for set up! With these 2 pools we can integrate the rest of our code into Amazon’s complete authentication service and achieve top tier user management.

FreeSWITCH status on LED display using socket connection

This is a simple experiment to show FreeSWITCH status on an LED display using a socket connection. Here is the video:

What You Need

1. Raspberry Pi 3

2. MAX7219-based 8×8 LED matrix displays (4 or more)

These are available in kit form and assembled form, and can be purchased online from Amazon etc.

In my case the 4 modules are powered from the GPIO pins of the Raspberry Pi. It is better to use a separate power supply for the modules when using more than 2 of them.

3. Female-to-female connector wires

to connect the GPIO pins to the MAX7219 LED modules.

Next, what to do (installing FreeSWITCH)

1. Prepare an SD card, load Raspbian and install FreeSWITCH. For details:

https://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server

2. Install the display drivers for MAX7219.

git clone https://github.com/rm-hull/max7219.git
sudo python max7219/setup.py install

3. Do the wiring

(as given below) between the GPIO of the Raspberry Pi and the MAX7219 matrix LED displays.

Pin   Name   Remarks        RPi Pin   RPi Function

1     VCC    +5V Power      2         5V0

2     GND    Ground         6         GND

3     DIN    Data In        19        GPIO 10 (MOSI)

4     CS     Chip Select    24        GPIO 8 (SPI CS0)

5     CLK    Clock          23        GPIO 11 (SPI CLK)

4. Run the demo program.

Edit matrix_demo.py according to the number of matrix devices used, i.e. cascaded=n; in my case n=4:

device = max7219(serial, cascaded=4, block_orientation=block_orientation)

sudo python max7219/examples/matrix_demo.py

At Last

Use an ESL connection between FreeSWITCH and the MAX7219 demo program. For details:

https://freeswitch.org/confluence/display/FREESWITCH/Python+ESL

Here is my source file.
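The original file is not included in this copy; a minimal sketch along the same lines, assuming the luma.led_matrix driver installed above and FreeSWITCH's Python ESL module with the default event-socket credentials, could look like this:

# Poll FreeSWITCH status over ESL and scroll it on the MAX7219 matrix (sketch).
import time
import ESL
from luma.core.interface.serial import spi, noop
from luma.led_matrix.device import max7219
from luma.core.legacy import show_message
from luma.core.legacy.font import proportional, CP437_FONT

serial = spi(port=0, device=0, gpio=noop())
device = max7219(serial, cascaded=4, block_orientation=-90)

while True:
    con = ESL.ESLconnection('127.0.0.1', '8021', 'ClueCon')
    if con.connected():
        # first line of the "status" API reply, e.g. uptime information
        status = con.api('status').getBody().splitlines()[0]
    else:
        status = 'FreeSWITCH down'
    show_message(device, status, fill="white",
                 font=proportional(CP437_FONT), scroll_delay=0.05)
    time.sleep(5)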

 

 

How does email work ?

Most people today spend time daily sending and receiving emails. E-mail plays a vital role in our daily activities. As technology evolved, email became one of the major communication tools. It may not cost you anything to send or receive a mail, but there is an intricate process behind it: you just press the send button, and your message goes through a complex mechanism to reach the recipient.

The invention of email started in 1961, and there is more than a couple of people to list as its inventors.

The evolution of email started with messaging users on the same computer, then message transmission between computers, then messaging among multiple users and multiple computers, and finally email. Email has become a revolutionary communication tool and is now part of our daily life. We will see how email really works.

Terminology

There are different components involved in the email system, and a few abbreviated terms. Having an idea of the terminology and abbreviations will help you better understand the system.

MUA Mail User Agent
MSA Mail Submission Agent
MTA Mail Transfer Agent
MDA Mail Delivery Agent
MX Mail Exchange
DNS Domain Name System

MUA is the software that the user uses either to send or retrieve mail (messages) from the server.

MSA is a piece of software installed on the mail server. It is responsible for handing the message over to the MTA (Mail Transfer Agent) for delivery towards the destination mail server.

MTA is the Mail Transfer Agent. It is the piece of software on the server which is responsible for routing the mail to the destination mail server, so we also call it a mail router or mail server. Here are a few popular MTA (mail server) softwares: postfix, qmail, Courier Mail and sendmail.

Postfix is the one which is widely used, and it comes with many Linux distributions.

You can find an exhaustive list of mail server softwares here.

Protocols involved

There are different protocols involved in the email system. All of them are required to get email delivered to the recipient; they are the building blocks of the email system. Those protocols are:

SMTP Simple Mail Transfer Protocol
IMAP Internet Message Access Protocol
POP Post Office Protocol
DNS Domain Name System (Protocol)

How does email work?

This section will give you an idea of how these protocols work together to deliver the mail to the recipient. Here is an abstract overview of the email system.

How does email work abstract

The figure above gives a simple abstract overview of the email system. As described in the figure, SMTP is the protocol used by the sender to push or send an email to the server. IMAP and POP are the protocols used by the recipient to check or retrieve messages from the server. The recipient's MUA is configured to use either IMAP or POP or both. The protocol IMAP is bidirectional, whereas POP is unidirectional. We will see more about POP and IMAP in a later section.

As per the figure above (abstract overview), the sender's email client (MUA) sends the message using the SMTP protocol to the mail server (MTA). The mail server checks for the destination; once it finds it, it connects to the destination mail server, that is, the other MTA/MDA, and passes the message (mail) using the SMTP protocol. The recipient's server stores the received mail locally. Later the recipient checks the mail using his mail client, which connects to his mail server.

Now let's see the flow of email in detail with the following figure.

Here Alice is sending an email to Bob (bob@domain.com) using her email client: she pushes her message to the server using SMTP, to be sent to Bob. The mail server determines the destination by getting the MX record for the destination domain; Alice's mail server looks at the domain after the @ in the recipient's mail address and queries DNS for its MX record. Once Alice's (sender's) mail server receives the result of the DNS query for the MX record, it connects to the destination mail server and delivers the mail (message) using the SMTP protocol. The destination mail server stores the message locally, and Bob later checks the newly arrived mail using his mail client.

I would like to illustrate this process even further with the following figure.

In the figure above, new components are introduced: MSA, MDA and extra MX servers.

MSA is the Mail Submission Agent. It is a piece of software that receives the message from the MUA. It uses the same SMTP protocol (typically on the submission port 587, though port 25 has also been used). Practically, most MTAs also perform the function of the MSA, so you may treat the MTA as the MSA.

The MDA (mail delivery agent or message delivery agent) is a computer software component that is responsible for the delivery of e-mail messages to a local recipient's mailbox. Don't get confused: most MTAs perform the MDA functionality as well, though there are softwares which are designed to work only as an MDA.

Here you can see that in practice there is usually more than one mail server (MX server); the other MX servers are there for backup purposes. When the sender queries DNS for the MX record, it may get more than one MX server, each with a priority. Here is a sample output of a DNS query for Gmail's MX records; Gmail has 5 MX servers with different priorities.
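The original output is not reproduced here; an illustrative run (the TTL values will vary) looks like this:

$ dig gmail.com MX +noall +answer

gmail.com.    3600  IN  MX  5 gmail-smtp-in.l.google.com.
gmail.com.    3600  IN  MX  10 alt1.gmail-smtp-in.l.google.com.
gmail.com.    3600  IN  MX  20 alt2.gmail-smtp-in.l.google.com.
gmail.com.    3600  IN  MX  30 alt3.gmail-smtp-in.l.google.com.
gmail.com.    3600  IN  MX  40 alt4.gmail-smtp-in.l.google.com.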

This result is the output of the command line utility dig, which we use to query DNS. In the output you will see entries like those on the right side of the ANSWER SECTION.

You can see numbers in front of the domains; those numbers decide the priority of the MX record. The domain with the lowest number has the highest priority. Here the gmail-smtp-in.l.google.com MX record has the highest priority because it has the lowest number associated with it, that is 5. So the sender's MTA will try to connect to the highest-priority MX first; if it is down, it will try to connect to the next MX (mail exchange server). As a result, mail is delivered with no downtime.

Once the mail is received by the destination mail server, it stores the message in a mail store. There are two types of mail stores used by various mail server softwares. Those are:

  • Maildir
  • mbox (mailbox)

Locally stored mail is then accessed (fetched) by the recipient's client using either POP or IMAP.

Protocols POP and IMAP

POP and IMAP are application layer protocols. As described above, they are the protocols used by the recipient's email client (MUA) to fetch or access mail. The two protocols are different; they serve different purposes.

The protocol POP means Post Office Protocol, and POP3 is its version 3. As the name describes, this protocol is used by the client to download messages from the server. Once a message is downloaded, it is removed from the server unless you set the "leave a copy on the server" flag, much like a postcard is delivered to its destination. If you are the only one accessing the mailbox, and from one location, then POP3 suits you well; it also saves some memory on the server. But if you use POP3 you can't access the same mail from different clients.

Unlike POP3, IMAP doesn't download the message and delete it from the server. It just accesses the message, like a browser does web pages. So it is handy if you use multiple clients from different locations.

 

References:

[1] https://tools.ietf.org/html/rfc3501
[2] https://www.ietf.org/rfc/rfc1939.txt
[3] https://tools.ietf.org/html/rfc5321

InlineCallbacks

Twisted features a decorator named inlineCallbacks which allows you to work with deferreds without writing callback functions.

This is done by writing your code as generators, which yield deferreds instead of attaching callbacks.

Consider the following function written in the traditional deferred style:
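The original snippet is not included in this copy; a sketch in the traditional style, assuming the txredisapi client (whose Connection() call, referenced below, returns a deferred), might look like this:

import txredisapi as redis
from twisted.internet import reactor

def get_user():
    d = redis.Connection()            # fires with a connection object

    def on_connect(conn):
        return conn.get('user')       # another deferred

    def on_result(value):
        print('user =', value)
        reactor.stop()

    d.addCallback(on_connect)
    d.addCallback(on_result)
    return d

get_user()
reactor.run()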

Using inlineCallbacks, we can write this as:

from twisted.internet.defer import inlineCallbacks
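The rest of the example is missing here; a minimal sketch of the generator version, again assuming txredisapi, could be:

import txredisapi as redis
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def get_user():
    conn = yield redis.Connection()   # yield the deferred instead of adding callbacks
    value = yield conn.get('user')
    print('user =', value)
    reactor.stop()

get_user()
reactor.run()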

Instead of calling addCallback on the deferred returned by redis.Connection, we yield it. This causes Twisted to return the deferred's result to us.

Though the inlineCallbacks version looks like synchronous code that blocks while waiting for the request to finish, each yield statement allows other code to run while the deferred being yielded waits to fire.

inlineCallbacks becomes even more powerful when dealing with complex control flow and error handling.

txRedis

txredis is a non-blocking client for the redis database, written in Python. It uses twisted for the asynchronous communication with redis.

Install
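The package is on PyPI:

pip install txredis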

Now, to check whether txredis is properly installed, go to the Python prompt and import txredis:
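For example:

$ python
>>> import txredis
>>>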

Example :

Using redis-cli, add a list of users to the redis server with hmset:

hmset sets each key to its value within the given hash name, for every corresponding key and value from the mapping dict.
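For example, storing user ids in a hash called users (the field names and ids are illustrative, chosen to match the output mentioned at the end):

127.0.0.1:6379> hmset users abc 123 xyz 456
OK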

Now, get the user using Python Twisted:
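The original script is not included in this copy; a hedged sketch matching the description below (the client class name may differ across txredis versions, and HOST/PORT point at the local redis server):

from twisted.internet import reactor, protocol
from txredis.client import RedisClient   # older releases: from txredis.protocol import Redis

HOST = 'localhost'
PORT = 6379

def onSuccess(value):
    print('Value:', value)
    reactor.stop()

def onFailure(reason):
    print('hget failed:', reason)
    reactor.stop()

def onRedisConnect(redis):
    d = redis.hget('users', 'abc')    # returns a deferred
    d.addCallback(onSuccess)
    d.addErrback(onFailure)

def errRedisConnect(reason):
    print('Connection failed:', reason)
    reactor.stop()

redisCreator = protocol.ClientCreator(reactor, RedisClient)
d = redisCreator.connectTCP(HOST, PORT)
d.addCallback(onRedisConnect)
d.addErrback(errRedisConnect)

reactor.run()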

The code above establishes a connection with the redis server.

Here, redisCreator.connectTCP returns a deferred, and we register the appropriate methods with addCallback and addErrback.

Here, we define the callback function onRedisConnect and the errback errRedisConnect.

Here, hget returns the value of the key within the hash name, and it returns a deferred.

Here, we define the callback function onSuccess and the errback onFailure, and log the appropriate result.

It gives the output 123, which corresponds to the user abc.