Installing ELK Stack (Elasticsearch, Logstash, Kibana) on CentOS with Sentinl Plugin

The ELK stack, also known as the Elastic Stack, consists of Elasticsearch, Logstash, and Kibana. It lets you store all of your logs in one place and analyze issues by correlating events at a particular time.

This guide helps you install the ELK stack on CentOS 7 / RHEL 7.

Components

Logstash – Processes incoming logs sent by Beats (forwarders): it collects and enriches them, then sends them to Elasticsearch.

Elasticsearch – Stores incoming logs from Logstash and provides the ability to search the logs/data in real time.

Kibana – Provides visualization of logs.

Sentinl – Sentinl extends Siren Investigate and Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions. Think of it as a free and independent “Watcher” which also has scheduled “Reporting” capabilities (PNG/PDF snapshots).

Sentinl is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native app interface, or by using native watcher tools in Kibana 6.x+.

Beats – Installed on client machines; they send logs to Logstash over the Beats protocol.

Environment

To have a full-featured ELK stack, we would need two machines to test the collection of logs.

ELK Stack

Filebeat

Prerequisites

Install Java

Since Elasticsearch is based on Java, make sure you have either OpenJDK or Oracle JDK installed on your machine.

Here, I am using OpenJDK 1.8.
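If OpenJDK is not yet installed, it is available from the base repository; for example:

yum -y install java-1.8.0-openjdk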

Verify the Java version.
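java -version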

Output: the installed OpenJDK version string, 1.8 in my case.

Configure ELK repository

Import the Elastic signing key.
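rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch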

Set up the Elasticsearch repository by creating the file /etc/yum.repos.d/elk.repo.

Add the below content to the elk.repo file.
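A sketch for the 6.x package stream (adjust the version to the one you want):

[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md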

Install Elasticsearch

Elasticsearch is an open-source search engine that offers real-time distributed search and analytics through a RESTful web interface. Elasticsearch stores all the data sent by Logstash and displays it, through the web interface (Kibana), on the user's request.

Install Elasticsearch.
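yum -y install elasticsearch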

Configure Elasticsearch to start during system startup.
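systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch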

Use curl to check whether Elasticsearch is responding to queries.
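curl -X GET http://localhost:9200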

Output: a JSON response with the node name, cluster name, and version details, ending with the tagline “You Know, for Search”.

Install Logstash

Logstash is an open-source tool for managing events and logs; it collects logs, parses them, and stores them in Elasticsearch for searching. Over 160 plugins are available for Logstash, which provide the capability of processing different types of events with no extra work.

Install the Logstash package.
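yum -y install logstash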

Create SSL certificate (Optional)

Filebeat (the Logstash forwarder) is normally installed on client servers, and it uses an SSL certificate to validate the identity of the Logstash server for secure communication.

Create an SSL certificate with either the hostname or an IP SAN.

(Hostname FQDN)

If you use the Logstash server's hostname in the Beats (forwarder) configuration, make sure you have an A record for the Logstash server, and also ensure that the client machine can resolve the hostname of the Logstash server.

Go to the OpenSSL directory.
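On CentOS, that is:

cd /etc/pki/tls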

Now, create the SSL certificate. Replace server.lintel.local with the hostname of your real Logstash server.
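For example:

openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -subj /CN=server.lintel.local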

Configure Logstash

Logstash configuration can be found in /etc/logstash/conf.d/. A Logstash configuration file consists of three sections: input, filter, and output. All three sections can live in a single file, or in separate files ending with .conf.

I recommend using a single file for the input, filter, and output sections.

In the first section, we will put an entry for the input configuration. The following configuration sets Logstash to listen on port 5044 for incoming logs from the Beats (forwarders) that sit on client machines.

Also, add the SSL certificate details in the input section for secure communication (optional).
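A sketch (the ssl lines are the optional part; the paths are the certificate and key created above):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}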

In the filter section, we will use Grok to parse the logs before sending them to Elasticsearch. The following grok filter will look for logs labeled “syslog” and try to parse them to make a structured index.
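A sketch along the usual syslog-parsing lines:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}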

For more filter patterns, take a look at grokdebugger page.

In the output section, we will define where the logs are to be stored; obviously, it should be Elasticsearch.
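For example, indexing by beat name and date:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}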

Now start and enable the Logstash service.
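systemctl start logstash
systemctl enable logstash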

You can troubleshoot any issues by looking at Logstash logs.
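cat /var/log/logstash/logstash-plain.log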

Install & Configure Kibana

Kibana provides visualization of the logs stored in Elasticsearch. Install Kibana using the following command.
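yum -y install kibana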

Edit the /etc/kibana/kibana.yml file.

By default, Kibana listens on localhost, which means you cannot access the Kibana interface from external machines. To allow external access, edit the below line with your machine's IP.
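For example, to listen on all interfaces:

server.host: "0.0.0.0"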

Uncomment the following line and update it with the Elasticsearch instance URL. In my case, it is localhost.
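elasticsearch.url: "http://localhost:9200"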

Start Kibana and enable it on system startup.
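systemctl start kibana
systemctl enable kibana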

Install Sentinl plugin:
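Grab the Sentinl release that matches your Kibana version from the sirensolutions/sentinl GitHub releases page, then install it with Kibana's plugin tool (the URL below is a placeholder for the release zip you downloaded):

cd /usr/share/kibana
bin/kibana-plugin install <sentinl-release-zip-url>

Restart Kibana afterwards so the plugin is picked up.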

Install and Configure Filebeat

There are four Beats clients available:

  1. Packetbeat – Analyze network packet data.
  2. Filebeat – Real-time insight into log data.
  3. Topbeat – Get insights from infrastructure data.
  4. Metricbeat – Ship metrics to Elasticsearch.

To analyze the system logs of the client machine (e.g. client.lintel.local), we need to install Filebeat. Create the file /etc/yum.repos.d/beats.repo.

Add the below content to the above repo file.
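This is the same Elastic 6.x repository used earlier:

[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md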

Now, install Filebeat using the following command.
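yum -y install filebeat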

Set up a host entry on the client machine in case your environment does not have a DNS server. Make a host entry like the one below on the client machine.
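Assuming 192.168.1.10 is the Logstash server's address (replace it with yours), add to /etc/hosts:

192.168.1.10 server.lintel.local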

Filebeat (Beats) uses an SSL certificate for validating the Logstash server's identity, so copy logstash-forwarder.crt from the Logstash server to the client.
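For example:

scp root@server.lintel.local:/etc/pki/tls/certs/logstash-forwarder.crt /etc/ssl/certs/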

Skip this step, in case you are not using SSL in Logstash.

The Filebeat configuration file is in YAML format, which means indentation is very important. Make sure you use the same number of spaces as used in the guide.

Open up the Filebeat configuration file, /etc/filebeat/filebeat.yml.

At the top, you will see the prospectors section. Here, you need to specify which logs should be sent to Logstash and how they should be handled. Each prospector starts with a – character.

For testing purposes, we will configure Filebeat to send /var/log/messages to the Logstash server. To do that, modify the existing prospector under the paths section.

Comment out – /var/log/*.log to avoid sending all the .log files present in that directory to Logstash.
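A sketch of the resulting prospector (Filebeat 6.x syntax):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    # - /var/log/*.log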

Comment out the section output.elasticsearch: as we are not going to store logs directly to Elasticsearch.
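So that block becomes:

#output.elasticsearch:
  #hosts: ["localhost:9200"]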

Now, find the line output.logstash and modify the entries like below. This section configures Filebeat to send logs to the Logstash server server.lintel.local on port 5044, and mentions the path where the copied SSL certificate is placed.
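output.logstash:
  hosts: ["server.lintel.local:5044"]
  ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]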

Replace server.lintel.local with the IP address in case you are using an IP SAN.

Restart the service.
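systemctl restart filebeat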

Beats logs are typically found in the syslog file.

Access Kibana

Access the Kibana using the following URL.

http://your-ip-address:5601/

You will get Kibana's home page.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Kibana Starting Page

On your first login, you have to map the filebeat index. Go to Management >> Index Patterns.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Management

Type the following in the Index pattern box.
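Assuming the Elasticsearch output index configured earlier, the pattern is:

filebeat-*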

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Create Index Pattern

You should see at least one Filebeat index, something like the above. Click Next step.

Select @timestamp and then click on Create.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Configure Timestamp

Verify your index pattern and its mappings.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Index Mappings

Now, click Discover to view the incoming logs and perform search queries.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Discover Logs

You can see the Sentinl plugin here, in the Kibana sidebar.


That’s All.

 

Reference list:

https://github.com/sirensolutions/sentinl

https://www.itzgeek.com

Configure Celery with SQS and Django on Elastic Beanstalk

Introduction

Have your users complained about loading issues on the web app you developed? That might be because of a long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website, we need to send a confirmation email; in the normal case the email is sent before the 200 OK response to the signup POST. However, we could send the email later, after sending the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The very first thought that comes to mind would be the Python threading module. Well, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL), a Python process runs only one thread at a time. Moreover, threads are hard to manage, and threaded code is hard to maintain and scale.

Prerequisite

This blog assumes the reader has working knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery comes to the rescue. It helps when you have a time-consuming task (heavy compute or I/O-bound) within the request-response cycle. Celery is an open-source asynchronous task queue, or job queue, based on distributed message passing. In this post I will walk you through the Celery setup procedure with Django and SQS on Elastic Beanstalk.

Why Celery?

Celery is very easy to integrate with an existing code base: just write a decorator above the definition of a function to declare it a Celery task, and call that function through its .delay method.
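A minimal sketch (the task name and module are illustrative):

# tasks.py
from celery import shared_task

@shared_task
def send_confirmation_email(user_id):
    # ... send the email here ...
    pass

# caller: returns immediately, the work happens in a worker
send_confirmation_email.delay(42)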

Broker

To work with Celery, we need a message broker. As of writing this blog, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info, check here. One of the reasons to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new SQS queue by clicking on the “Create New Queue” button.

Depending upon the requirements, we can select any type of queue. We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described there. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install celery.

pip install celery[sqs]

Configuration

Celery has built-in support for Django. It will pick up its setting parameters from Django's settings.py, where they are prefixed with CELERY_ (the “CELERY” word needs to be declared as the namespace while initializing the Celery app). So put the setting parameters below in settings.py.
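A sketch of the relevant settings (the region and queue prefix are assumptions; adjust them to your setup):

# settings.py
CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',        # assumption: the region your queue lives in
    'queue_name_prefix': 'dev-',  # so the worker picks up the dev-celery queue
}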

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Now let's configure the Celery app within the Django code. Create a celery.py file beside Django's settings.py.
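The standard Celery/Django app module looks like this (assuming the project package is named myproject):

# celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
# read CELERY_-prefixed settings from Django's settings.py
app.config_from_object('django.conf:settings', namespace='CELERY')
# discover tasks.py modules in all installed apps
app.autodiscover_tasks()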

Now put the code below in the project's __init__.py.
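# __init__.py
# make sure the Celery app is loaded when Django starts,
# so that shared_task decorators use it
from .celery import app as celery_app

__all__ = ('celery_app',)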

Testing

Now let's test the configuration. Open a terminal and start Celery.

Terminal 1
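Using the assumed project name myproject:

celery -A myproject worker -l info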

 

All the tasks which are registered with Celery via its decorators appear here while starting Celery. If you find that your task does not appear here, make sure that the module containing the task is imported on startup.

Now open a Django shell in another terminal.

Terminal 2
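For example (myapp.tasks is a hypothetical module containing the task from earlier):

python manage.py shell
>>> from myapp.tasks import send_confirmation_email
>>> send_confirmation_email.delay(42)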

After executing the task function with the delay method, the task should run in the worker process which is listening for events in the other terminal. Here, Celery sent a message to SQS with the details of the task; the worker process that was listening to SQS received it, and the task was executed in the worker process. Below is what you should see in terminal 1.

Terminal 1 – the worker log will show the task being received and then succeeding.

Deploy celery worker process on AWS elastic beanstalk

Celery provides the “multi” sub-command to run a process in daemon mode, but this cannot be used in production. Celery recommends various daemonization tools: http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

AWS Elastic Beanstalk already uses supervisord for managing the web server process. Celery can also be configured using the supervisord tool. Celery's official documentation has a nice example of a supervisord config for celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write quite a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the celery configuration, which is copied to the /opt/python/etc/celery.conf file, and supervisord is restarted. Here is the main celery command:

At the time of writing this blog, Celery had the issue https://github.com/celery/celery/issues/3759. As a workaround for this issue we add “-P solo”. This will run tasks sequentially in a single worker process.
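So the command ends up looking like this (myproject is the assumed project name):

celery worker -A myproject --loglevel=INFO -P solo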

Now create the Elastic Beanstalk configuration file as below. Make sure you have pycurl and celery in requirements.txt. To install pycurl, libcurl-devel needs to be installed from the yum package manager.
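A minimal sketch of such a config (the file and command names are assumptions):

# .ebextensions/99-celery.config
packages:
  yum:
    libcurl-devel: []

container_commands:
  01_configure_celery_worker:
    command: "bash .ebextensions/celery.sh"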

Add these files to git and deploy to elastic beanstalk.

Below is a figure describing the architecture with Django, Celery, and Elastic Beanstalk.

How to Setup Redis Cluster from Source

What is Redis?

Redis is an open-source in-memory database. It stores data in key-value format. Because it resides in memory, Redis is an excellent tool for caching. Redis provides a rich set of data types, which gives it the upper hand over Memcached. Apart from caching, Redis can be used as a distributed message broker.

Redis Cluster and Sentinels

To achieve high availability, Redis can be deployed in a cluster along with sentinels. Sentinel is a feature of Redis. Multiple sentinels are deployed across a Redis cluster for monitoring purposes. When the Redis master goes down, the sentinels elect a new master from the slaves. When the old master comes up again, it is added as a slave.

Another use case of clustering is distribution of load: in a high-load environment, we can send write requests to the master and read requests to the slaves.

This tutorial is specifically focused on the Redis Cluster master-slave model. We will not cover data sharding across the cluster here; in data sharding, keys are distributed across multiple Redis nodes.

Setup for tutorial

For this tutorial, we will use 3 (virtual) servers. The Redis master will reside on one server, while the other two servers will be used for slaves. The standard Redis port is 6379. To differentiate easily, we will run the master on port 6379 and the slaves on ports 6380 and 6381. The same applies to the sentinel services: the master's sentinel will listen on port 16379, while the slave sentinels will be on ports 16380 and 16381.

Let's put this in an easy way:

Server 1 – master  – Redis on 6379, sentinel on 16379
Server 2 – slave 1 – Redis on 6380, sentinel on 16380
Server 3 – slave 2 – Redis on 6381, sentinel on 16381

This tutorial is tested on CentOS 6.9. For CentOS 7.X, check below Notes Section.

Installation

We will follow the same installation steps for setting up all servers; the only difference will be in the configurations.

  • Step 1: Grab redis source, make and install
  • Step 2: Setup required directories
  • Step 3: Configure redis master
  • Step 4: Configure redis master sentinel
  • Step 5: Add low privileged user to run redis
  • Step 6: Setup init scripts
  • Step 7: Start service

Server 1 (Redis Master)


Install Redis
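Build and install from source (redis-stable at the time of writing):

yum install -y gcc make wget
wget http://download.redis.io/redis-stable.tar.gz
tar xzf redis-stable.tar.gz
cd redis-stable
make
make install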

Setup required directories
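mkdir -p /etc/redis /var/lib/redis /var/log/redis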

Configure redis master

Edit config file /etc/redis/6379.conf in your favorite editor and change below options.
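A sketch of the important options (assuming 10.0.0.1 is this server's private IP; replace it with yours):

bind 10.0.0.1
port 6379
daemonize yes
logfile /var/log/redis/6379.log
dir /var/lib/redis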

Configure redis master sentinel

Add config file for sentinel at /etc/redis/sentinel_6379.conf. Open a file and add below content
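A sketch (the master name redis-cluster and the IP are assumptions; the quorum of 2 means two sentinels must agree that the master is down):

port 16379
daemonize yes
logfile /var/log/redis/sentinel_16379.log
sentinel monitor redis-cluster 10.0.0.1 6379 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel failover-timeout redis-cluster 10000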

Add non-privileged user
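useradd -r -s /sbin/nologin redis
chown -R redis:redis /var/lib/redis /var/log/redis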

Setup init scripts

You can find sample init scripts in Notes section below.

Start service
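Assuming init scripts named as in the Notes section:

service redis_6379 start
service sentinel_6379 start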

Server 2 (Redis Slave 1)


Install Redis

Setup required directories

Configure redis slave 1

Edit config file /etc/redis/6380.conf in your favorite editor and change below options.
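The slave's config mirrors the master's, plus a slaveof directive (10.0.0.1 and 10.0.0.2 are assumed private IPs):

bind 10.0.0.2
port 6380
daemonize yes
logfile /var/log/redis/6380.log
dir /var/lib/redis
slaveof 10.0.0.1 6379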

Configure redis slave 1 sentinel

Add config file for sentinel at /etc/redis/sentinel_6380.conf. Open a file and add below content
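Same as the master's sentinel, only the port changes:

port 16380
daemonize yes
logfile /var/log/redis/sentinel_16380.log
sentinel monitor redis-cluster 10.0.0.1 6379 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel failover-timeout redis-cluster 10000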

Add non-privileged user

Setup init scripts

You can find sample init scripts in Notes section below. Change $HOST and $PORT values accordingly

Start service

Server 3 (Redis Slave 2)


Install Redis

Setup required directories

Configure redis slave 2

Edit config file /etc/redis/6381.conf in your favorite editor and change below options.
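As for slave 1, with the port bumped (10.0.0.3 is this server's assumed private IP):

bind 10.0.0.3
port 6381
daemonize yes
logfile /var/log/redis/6381.log
dir /var/lib/redis
slaveof 10.0.0.1 6379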

Configure redis slave 2 sentinel

Add config file for sentinel at /etc/redis/sentinel_6381.conf. Open a file and add below content
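port 16381
daemonize yes
logfile /var/log/redis/sentinel_16381.log
sentinel monitor redis-cluster 10.0.0.1 6379 2
sentinel down-after-milliseconds redis-cluster 5000
sentinel failover-timeout redis-cluster 10000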

Add non-privileged user

Setup init scripts

You can find sample init scripts in Notes section below. Change $HOST and $PORT values accordingly

Start service

Sentinel Testing
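You can inspect what the sentinels know about the cluster with redis-cli (redis-cluster is the master name configured above):

redis-cli -p 16379 sentinel master redis-cluster
redis-cli -p 16379 sentinel slaves redis-cluster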

Redis Fail-over Testing

For fail-over testing, we can take down the Redis master either using the init script or with the command below.
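redis-cli -p 6379 debug segfault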

Also, we can force a sentinel to run a fail-over using the command below.
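redis-cli -p 16379 sentinel failover redis-cluster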

Sample init scripts

Redis Init Script

Sentinel Init Script

Notes

Security

  • NEVER EVER run Redis on a public interface.
  • If Redis is deployed in a cloud environment like AWS, set up security groups/firewalls carefully. Most of the time, cloud providers use ephemeral IPs, and because of ephemeral IPs, even when Redis is bound to a private IP it can be accessible over the public interface.
  • For more security, dangerous commands can be disabled (renamed). But be careful with them in a cluster environment.
  • Redis also provides a simple authentication mechanism. It is not covered here because it is out of scope.

Sentinel management

  • During a Redis fail-over, the config files are rewritten by the sentinel program, so be careful when restarting the Redis cluster.

Sources

  • https://redis.io/topics/cluster-tutorial
  • https://redis.io/topics/security
  • https://redis.io/commands/debug-segfault

What is a DBF file? How to read it in Linux and Python?

What is a DBF file?

A DBF file is a standard database file used by dBASE, a database management system application. It organises data into multiple records with fields stored in an array data type. DBF files are also compatible with other “xBase” database programs, which became an important feature because of the file format’s popularity.

Tools which can read or open DBF files

Below is a list of programs which can read and open DBF files.

  • Windows
    1. dBase
    2. Microsoft Access
    3. Microsoft Excel
    4. Visual Foxpro
    5. Apache OpenOffice
    6. dbfview
    7. dbf Viewer Plus
  • Linux
    1. Apache OpenOffice
    2. GTK DBF Editor

How to read a DBF file in Linux?

The dbview command, available in Linux, can read DBF files.

The code snippet below shows how to use the dbview command.
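For example (on an apt-based distro; package availability varies, and test.dbf is an example file):

sudo apt-get install dbview
dbview test.dbf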

How to read it using Python?

dbfread is a library available in Python to read DBF files. This library reads DBF files and returns the data as native Python data types for further processing.

dbfread requires Python 3.2 or 2.7. dbfread is a pure-Python module, so it doesn't depend on any packages outside the standard library.

You can install the library with the command below.
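pip install dbfread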

The code snippet below reads a DBF file and retrieves the data as Python dictionaries.
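A minimal sketch (test.dbf is an example file name):

from dbfread import DBF

for record in DBF('test.dbf'):
    print(record)   # each record behaves like an ordered dictionary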

You can also use the with statement:

By default, the records are streamed directly from the file. If you have enough memory, you can load them into a list instead. This allows random access.
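from dbfread import DBF

# load=True reads all records into memory up front
table = DBF('test.dbf', load=True)
print(table.records[0])    # first record
print(table.records[-1])   # last record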

How to write content to a DBF file using Python?

dbfpy is a python-only module for reading and writing DBF-files.  dbfpy can read and write simple DBF-files.

You can install it by using below command
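pip install dbfpy

(Note: dbfpy targets Python 2; on Python 3 there is a dbfpy3 fork.)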

The example below shows how to create a DBF file and write records into it.
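A minimal sketch (the field names and file name are illustrative):

from dbfpy import dbf

db = dbf.Dbf('test.dbf', new=True)
db.addField(
    ('NAME', 'C', 15),   # 15-character text field
    ('AGE', 'N', 3, 0),  # numeric field, width 3, 0 decimals
)
rec = db.newRecord()
rec['NAME'] = 'John'
rec['AGE'] = 30
rec.store()
db.close()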

You can also update a record in a DBF file using the dbf module.

The example below shows how to update a record in a .dbf file.
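from dbfpy import dbf

db = dbf.Dbf('test.dbf')
rec = db[0]          # fetch the first record
rec['AGE'] = 31
rec.store()          # write the change back
db.close()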

 

AWS Cognito Configurations

Introduction

Cognito is the AWS solution for managing user profiles, and Federated Identities help keep track of your users across multiple logins. Integrated into the AWS ecosystem, AWS Cognito opens up a world of possibility for advanced front end development as Cognito+IAM roles give you selective secure access to other AWS services.

Go to AWS Cognito on the AWS console to get started!

AWS console

Initial Setup — Cognito

AWS Cognito

We will be setting up AWS Cognito, which is a custom login pool (such as login with email). Cognito IS NOT a login manager for third-party logins (such as Facebook and Gmail), only for custom logins.

Let’s first make a user pool by clicking on “Manage your User Pools”. A user pool is a group of users that fulfill the same designation. The setup screen should look like this:

User Pool Name

We’re gonna walk through this process step by step, so enter the Pool name of “App_Users” and click “Step through settings”. The next step is “Attributes”, where we define the attributes that our “App_Users” will have.

User Attributes

For now, we only want to have an email, a password, and “agentName”. The email is our unique identifier for a user, and the password is a mandatory field (which is why you don't see it in the list of standard attributes). We want users to be able to have a codename to go by, so let's set up “agentName” as a custom attribute. We are only using “agentName” to show how to add custom attributes. Scroll down and you will see the option to add custom attributes.

Custom Attributes

As of the date this tutorial was written, you cannot go back and change the custom attributes (even though AWS appears to let you), so be sure to get this right the first time! If you need to change attributes, you will have to create a new user pool. Hopefully AWS fixes this issue soon. Anyways, moving on to account policies!

Account Policies

So we can see here that our passwords can be required to contain certain characters. Obviously, requiring a mix of various character types would be more secure, but users often don't like that. For a middle ground, let's just require the password to be 8+ characters in length and include at least 1 number. We also want users to be able to sign themselves up. The other parts are not so important, so let's move on to the next step: verifications.

Account Verifications

This part is cool, we can easily integrate multi-factor authentication (MFA). This means users must sign up with an email as well as another form of authentication such as a phone number. A PIN would be sent to that phone number and the user would use it to verify their account. We won’t be using MFA in this tutorial, just email verification. Set MFA to “off” and check only “Email” as a verification method. We can leave the “AppUsers-SMS-Role” (IAM role) that has been filled in, as we won’t be using it but may use it in the future. Cognito uses that IAM role to be authorized to send SMS text messages used in MFA. Since we’re not using MFA, we can move on to: Message Customizations.

Custom Account Messages


When users receive their account verification emails, we can specify what goes into the email. Here we have made a custom email and programmatically placed in the verification PIN, represented as {####}. Unfortunately, we can't pass in other variables such as a verification link. To accomplish this, we would have to use a combination of AWS Lambda and AWS SES.

SES (Simple Email Service)

In the AWS SES console, click “Verify a New Address” and enter the email address you would like to verify.

Now log in to your email and open the email from AWS. Click the link inside the email to verify, and you will be redirected to the AWS SES page again. You have successfully verified an email! That was easy.

Now that that's done, let's return to AWS Cognito and move on to: Tags.

User Pool Tags

It is not mandatory to add tags to a user pool, but it is definitely useful for managing many AWS services. Let’s just add a tag for ‘AppName’ and set it to a value of ‘MyApp’. We can now move on to: Devices.

Devices

We can opt to remember our user’s devices. I usually select “Always” because remembering user devices is both free and requires no coding on our part. The information is useful too, so why not? Next step: Apps.

Apps

We want certain apps to have access to our user pool. These apps are not present anywhere else in the AWS ecosystem, which means when we create an “app”, it is a Cognito-only identifier. Apps are useful because we can have multiple apps accessing the same user pool (imagine an Uber clone app and a complementary Driving Test Practice App). We will set the refresh token to 30 days, which means each login attempt will return a refresh token that we can use for authentication instead of logging in every time. We un-check “Generate Client Secret” because we intend to log into our user pool from the front end instead of the back end (ergo, we cannot keep secrets on the front end because that is insecure). Click “Create App” and then “Next Step” to move on to: Triggers.

Triggers

We can trigger various actions in the user authentication and setup flow. Remember how we said we can create more complex account verification emails using AWS Lambda and AWS SES? This is where we would set that up. For the scope of this tutorial, we will not be using any AWS Lambda triggers. Let’s move on to the final step: Review.

Review

Here we review all the setup configurations we have made. If you are sure about this info, click “Create Pool” and our Cognito User Pool will be generated!

Take note of the Pool Id us-east-1_6i5p2Fwao in the Pool details tab.

Notice the Pool Id

And the App client id 5jr0qvudipsikhk2n1ltcq684b in the Apps tab. We will need both of these in our client side app.

Notice the App client id

Now that Cognito is set up, we can set up Federated Identities for multiple login providers. In this tutorial we do not cover the specifics of FB Login, as it is not within the scope of this tutorial series. However, integrating FB Login is super easy, and we show how it's done in the section below.

Initial Setup — Federated Identities

AWS Cognito

Next we want to set up “Federated Identities”. If we have an app that allows multiple login providers (Amazon Cognito, Facebook, Gmail, etc.) for the same user, we use Federated Identities to centralize all these logins. In this tutorial, we will be using both our Amazon Cognito login and a potential Facebook login. Go to Federated Identities and begin the process of creating a new identity pool. Give it an appropriate name.

Create a new identity pool

Now expand the “Authentication providers” section and you will see the below screen. Under Cognito, we are going to add the Cognito User Pool that we just created. Copy and paste the User Pool ID and App Client ID that we made note of earlier.

Authentication providers

And if we wanted Facebook login for the same user identity pool, we can go to the Facebook tab and simply enter our Facebook App ID. That’s all there is to it on the AWS console!

Facebook tab

Save the identity pool and you will be redirected to the below screen where IAM roles are created to represent the Federated Identity Pool. The unauthenticated IAM role is for non-logged in users, and the authenticated version is for logged in users. We can grant these IAM roles permission to access other AWS resources like S3 buckets and such. That is how we achieve greater security by integrating our app throughout the AWS ecosystem. Continue to finish creating this Identity Pool.

IAM roles

You should now see the below screen after successfully creating the identity pool. You now only need to make note of one thing: the Identity Pool ID (i.e. us-east-1:65bd1e7d-546c-4f8c-b1bc-9e3e571cfaa7), which we will use later in our code. Great!

Sample code

Exit everything and go back to the AWS Cognito main screen. If we enter the Cognito section or the Federated Identities section, we see that we have the 2 necessary pools set up. AWS Cognito and AWS Federated Identities are ready to go!

AWS Cognito
AWS Federated Identities

That’s all for set up! With these 2 pools we can integrate the rest of our code into Amazon’s complete authentication service and achieve top tier user management.

FreeSWITCH status on LED display using socket connection

This is a simple experiment to show FreeSWITCH status on an LED display using a socket connection. Here is the video:

What You Need

1. Raspberry Pi 3

2. MAX7219-based 8×8 LED matrix displays (4 or more)

These are available in kit form and assembled form, and can be purchased online from Amazon etc.

In my case the 4 modules are powered from the GPIO pins of the Raspberry Pi. It is good to use a separate power supply for the modules if you have more than 2 of them.

3. Female-to-female connector wires, to connect the GPIO pins and the MAX7219 LED modules.

What to do next (installing FreeSWITCH)

1. Prepare an SD card, load Raspbian, and install FreeSWITCH. For details:

https://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server

2. Install display drivers for the MAX7219.

git clone https://github.com/rm-hull/max7219.git
sudo python max7219/setup.py install

3. Do the wiring (as given below) between the Raspberry Pi GPIO and the MAX7219 matrix LED displays.

Pin   Name   Remarks       RPi Pin   RPi Function
1     VCC    +5V Power     2         5V0
2     GND    Ground        6         GND
3     DIN    Data In       19        GPIO 10 (MOSI)
4     CS     Chip Select   24        GPIO 8 (SPI CS0)
5     CLK    Clock         23        GPIO 11 (SPI CLK)

4. Run the demo program.

Edit matrix_demo.py according to the number of matrix devices used, i.e. cascaded=n; in my case n=4:

device = max7219(serial, cascaded=4, block_orientation=block_orientation)

sudo python max7219/examples/matrix_demo.py

At Last

Use an ESL connection between FreeSWITCH and the MAX7219 demo program. For details:

https://freeswitch.org/confluence/display/FREESWITCH/Python+ESL
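A minimal sketch of the idea (assumptions: the luma.led_matrix driver, FreeSWITCH's Python ESL module, and the default ESL host/port/password; an illustration, not the exact file used in the video):

from luma.core.interface.serial import spi, noop
from luma.core.legacy import show_message
from luma.core.legacy.font import proportional, CP437_FONT
from luma.led_matrix.device import max7219
import ESL

# 4 cascaded MAX7219 modules on the SPI bus
serial = spi(port=0, device=0, gpio=noop())
device = max7219(serial, cascaded=4, block_orientation=-90)

# connect to the local FreeSWITCH event socket
con = ESL.ESLconnection('127.0.0.1', '8021', 'ClueCon')
if con.connected():
    # take the first line of the "status" API output, e.g. the uptime
    status = con.api('status').getBody().splitlines()[0]
    show_message(device, status, fill="white",
                 font=proportional(CP437_FONT), scroll_delay=0.05)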

Here is my source file.

 

 

How to Implement MultiService in Twisted

The MultiService module is a service collection provided by Twisted, which is useful for creating a new service that combines two or more existing services.

The major tool that manages Twisted applications is a command-line utility called twistd. twistd is cross-platform, and is the recommended tool for running Twisted applications.

The core component of the Twisted application infrastructure is the twisted.application.service.Application object, which represents your application. Application acts as a container for any “Services” that your application provides; this is done through Services.

A Service manages anything in the application that can be started and stopped. An Application object can contain many services, or even hierarchies of Services, using MultiService or your own custom IServiceCollection implementations.

MultiService Implementation:

To use MultiService, which implements IService, first import the internet and service modules.
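from twisted.application import internet, service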

 

Example:
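A sketch of a complete TAC file matching the description below (the variable names, ports, and page contents are illustrative):

# serviceexample.tac
from twisted.application import internet, service
from twisted.web import server, static

# top-level service collection
agentservice = service.MultiService()

# first TCP web service, on port 8082
site1 = server.Site(static.Data(b"Service on 8082", "text/plain"))
internet.TCPServer(8082, site1).setServiceParent(agentservice)

# second TCP web service, on port 8083
site2 = server.Site(static.Data(b"Service on 8083", "text/plain"))
internet.TCPServer(8083, site2).setServiceParent(agentservice)

# the Application object; twistd looks for this variable
application = service.Application("serviceexample")
agentservice.setServiceParent(application)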

To run it, save the above code in a file as serviceexample.tac. Here, a “tac” file is a regular Python file. In the Twisted application infrastructure, protocol implementations live in a module; services using those protocols are registered in a Twisted Application Configuration (TAC) file; and the reactor and configuration are managed by an external utility.

Here, I use the MultiService functionality from service. agentservice is created as a MultiService object. Then services are added using the addService method (or by calling setServiceParent on the child service). As services, you can add web servers, FTP servers, and SSH clients. After this, set the application name and pass the application to the setServiceParent method.

Now, add a service on port 8082 as:
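site1 = server.Site(static.Data(b"Service on 8082", "text/plain"))
internet.TCPServer(8082, site1).setServiceParent(agentservice)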

Add another service, same as above, on port 8083 as:
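site2 = server.Site(static.Data(b"Service on 8083", "text/plain"))
internet.TCPServer(8083, site2).setServiceParent(agentservice)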

To run the serviceexample.tac file using the twistd program, use the command twistd -y serviceexample.tac -n. After this, open a browser and enter the URLs localhost:8082 and localhost:8083. You can see the result on the web pages, and both TCP servers are active.

Compile C program using gcc in Linux

“This post explains about using gcc to compile C program on Linux”

Compile C program using gcc:

  • What is a compiler:

A compiler is just like a translator between a programming language and machine language. It converts source written in a programming language into an executable instructions file for the computer. Different compilers are available for different programming languages, and compilers differ from operating system to operating system.

  • Open text editor:

Compiling a C program starts with a text editor, like vi, in which to write our C program. It is generally built into Linux operating systems. We start by opening a terminal in our system.

  • Write code:

In the terminal type vi sample.c and press the Enter key. This opens the vi text editor with our given filename. Now press the a or i key to go into insert mode, and type our C program in it. After typing the C program, press the Esc key, then colon (:), w, q, and the Enter key respectively to save and exit vi. Here is the source code.
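A simple example program to use as sample.c:

/* sample.c */
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}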


  • Compile using gcc command:

Now we are in the terminal again. Here we type the ls command to see our saved file in the list. Then type gcc sample.c -o sample and press Enter. Now the gcc compiler compiles our C file and produces an executable with the filename we gave.

  • Execute output file:

If there are any mistakes or errors in the program, the compiler gives warnings and error messages with line numbers to find them easily; after correcting them, compile once again. If it compiles successfully, it gives an executable file. To check for that file we use the ls command and see if it is there. If it is, now type ./sample in the terminal to execute it. We see the result on the terminal.

Actually, in the compiling process the preprocessor adds the necessary files that are included from the C libraries, the ones we listed at the start of our program like <stdio.h>, and some other files are also generated by the compiler. One of those files is the object file.
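You can see these stages separately by asking gcc to stop after each one:

gcc -E sample.c -o sample.i   # preprocess only
gcc -S sample.i -o sample.s   # compile to assembly
gcc -c sample.s -o sample.o   # assemble to an object file
gcc sample.o -o sample        # link into an executable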

Asynchronous DB Operations in Twisted

Twisted is an asynchronous networking framework, but most database API implementations have blocking interfaces.

For this reason, twisted.enterprise.adbapi was created. It is a non-blocking interface which allows you to access a number of different RDBMSes.

The general method to access a DB API:

1) Create a connection with the db.

2) Create a cursor.

3) Do a query.
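With a plain DB-API module, that looks like this (MySQLdb shown; the agents table is illustrative):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='123456', db='agentdata')
cursor = conn.cursor()
cursor.execute("SELECT * FROM agents")   # blocks until the server answers
rows = cursor.fetchall()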

The cursor blocks while waiting for a response, and such delays are unacceptable when using an asynchronous framework such as Twisted. To overcome the blocking interface, Twisted provides an asynchronous wrapper for DB modules: twisted.enterprise.adbapi.

Database Connection using adbapi API.

To use adbapi, we import the dependencies as below.
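from twisted.enterprise import adbapi
from twisted.internet import reactor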

1) Connect to the database using adbapi.ConnectionPool.

Here, we do not need to import the db module directly; the dbmodule.connect arguments are passed as extra arguments to adbapi.ConnectionPool's constructor.
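A sketch with MySQLdb and the credentials used in the demo below:

dbpool = adbapi.ConnectionPool(
    'MySQLdb',            # module name; the pool imports it itself
    host='localhost',
    user='root',
    passwd='123456',
    db='agentdata',
)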

2) Run a database query.

Here, I used the '%s' paramstyle for MySQL. If you use another database module, you need to use a compatible paramstyle; for more, see the DB-API specification.
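For example (the agents table is illustrative):

d = dbpool.runQuery("SELECT name FROM agents WHERE id = %s", (1,))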

Twisted doesn’t attempt to offer any sort of magic parameter munging – runQuery(query,params,…) maps directly onto cursor.execute(query,params,…).

This query returns a Deferred, which allows arbitrary callbacks to be called upon completion (or failure).

Demo: select, insert, and update queries in the database.

Here, I have used the MySQLdb API, with agentdata as the database name, root as the user, and 123456 as the password.
I have created select, insert, and update queries for the select, insert, and update operations respectively.
The runQuery method returns a Deferred; add a callback and an errback to it to handle success and failure respectively.
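A sketch putting it all together (the agents table and its columns are illustrative):

from twisted.enterprise import adbapi
from twisted.internet import reactor

dbpool = adbapi.ConnectionPool('MySQLdb', host='localhost', user='root',
                               passwd='123456', db='agentdata')

def on_success(result):
    print('query result:', result)

def on_error(failure):
    print('query failed:', failure)

# select
dbpool.runQuery("SELECT * FROM agents").addCallbacks(on_success, on_error)

# insert (runOperation is for statements that return no rows)
dbpool.runOperation(
    "INSERT INTO agents (name) VALUES (%s)", ('agent007',)
).addCallbacks(on_success, on_error)

# update
dbpool.runOperation(
    "UPDATE agents SET name = %s WHERE id = %s", ('agent008', 1)
).addCallbacks(on_success, on_error)

reactor.callLater(5, reactor.stop)
reactor.run()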