Fusebill AJAX Transparent Redirect

To facilitate PCI-compliant credit card collection, Fusebill provides an AJAX Transparent Redirect endpoint which you can use to securely capture customers' credit cards. If you are adding the first payment method to a customer, it will automatically be set as the default payment method.

This API action is authenticated with a separate Public API Key. If you do not have that key, please contact Fusebill Support. The Public Key can only be used to authenticate the Transparent Redirect action.

Ajax Transparent Redirect

Google reCAPTCHA required.

Fusebill leverages reCAPTCHA technology to ensure payment method data captured is provided by a human and to protect against bots and scripting.

We use Google reCAPTCHA V2 in order to accomplish this.
https://developers.google.com/recaptcha/intro
The basic workflow for how this is accomplished is as follows:

  • Using Fusebill’s public site key, the client is presented with a captcha widget.
  • The user then verifies that they are human, starting with a check box. The user may be presented with additional verification steps such as an image recognition task.
  • The captcha widget then verifies with Google that the user is human, and returns a response token.
  • That response token is then sent to Fusebill with the payment method data for our system to validate and verify.
| Fusebill Environment | reCAPTCHA Public Site Key |
| --- | --- |
| Staging (stg-payments.subscriptionplatform.com) | 6LcI_GwUAAAAAJZu0VvB68DdxNxb5ZcBIwAX7RVj |
| Sandbox and Production (payments.subscriptionplatform.com) | 6LfVtGwUAAAAALHn9Ycaig9801f6lrPmouzuKF11 |

Create Credit Card Payment Method

| Field Name | Details | Required | Type |
| --- | --- | --- | --- |
| CustomerID | The Fusebill customer ID of the customer you wish to add the card to. | Yes | Number |
| PublicAPIKey | Your public API key, found in your Fusebill account under Settings > Integrations > Transparent Redirect. | Yes | String |
| CardNumber | The credit card number. | Yes | Number |
| FirstName | The first name of the cardholder. | Yes | String |
| LastName | The last name of the cardholder. | Yes | String |
| ExpirationMonth | Expiration month on the credit card. | Yes | Number |
| ExpirationYear | Expiration year on the credit card. | Yes | Number |
| CVV | The credit card verification number. | Yes | Number |
| recaptcha | reCAPTCHA token response. | Yes | String |
| riskToken | WePay risk token. | No+ | String |
| clientIp | Client/customer IP address. | No+ | String |
| email | Customer email address. | No+ | String |
| address1 | First line of the payment method address. | No* | String |
| address2 | Second line of the payment method address. | No* | String |
| city | City of the payment method. | No* | String |
| stateId | State ID of the payment method. These can be found by performing a GET to v1/countries. | No* | Number |
| countryId | Country ID of the payment method. These can be found by performing a GET to v1/countries. | No* | Number |
| postalZip | Postal/zip code of the payment method. | No* | String |
| paymentCollectOptions | Object that allows specifying an amount to collect when creating the card. Only works through JSON, e.g. {"collectionAmount": 1.0}. | No | Object |

+ Denotes a field required for Fusebill Payments API risk fields.
* Denotes fields required for AVS, which may also be required by your account's gateway. These fields are likewise required if using Fusebill Payments accounts, as AVS is mandatory there.

Note: address information can optionally be captured as well.

Sample Code
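The original sample is not reproduced here; below is a minimal jQuery sketch of the call. The endpoint URL is a placeholder (use the transparent redirect URL for your environment), and all field values are illustrative.

var payload = {
    CustomerID: 12345,                      // illustrative values
    PublicAPIKey: "YOUR_PUBLIC_API_KEY",
    CardNumber: 4242424242424242,
    FirstName: "Jane",
    LastName: "Doe",
    ExpirationMonth: 12,
    ExpirationYear: 2030,
    CVV: 123,
    recaptcha: grecaptcha.getResponse()     // token from the reCAPTCHA widget
};

$.ajax({
    url: "https://payments.subscriptionplatform.com/...",  // placeholder: your environment's transparent redirect URL
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(payload),
    success: function (response) {
        console.log("Payment method created:", response);
    },
    error: function (xhr) {
        console.error("Card capture failed:", xhr.responseText);
    }
});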


Sample Response
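The exact fields returned depend on your account configuration. A successful request returns the created payment method as JSON, along these purely illustrative lines (field names are assumptions, not the authoritative schema):

{
    "id": 123456,
    "customerId": 12345,
    "maskedCardNumber": "************4242",
    "expirationMonth": 12,
    "expirationYear": 2030
}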

Fusebill Payments

When using Fusebill Payments as your gateway processing account, some additional processing and data is required.

These are the ClientIP and a Risk token.

Additional information is available here.

Fusebill Test Gateways

Available here.

How to Debug the Execution of a Program in Linux

strace is a useful diagnostic, instructional, and debugging tool. System administrators, diagnosticians and troubleshooters will find it invaluable for solving problems with programs for which the source is not readily available, since programs do not need to be recompiled in order to trace them. Students, hackers and the overly-curious will find that a great deal can be learned about a system and its system calls by tracing even ordinary programs. And programmers will find that since system calls and signals are events that happen at the user/kernel interface, a close examination of this boundary is very useful for bug isolation, sanity checking and attempting to capture race conditions.

Trace the Execution

You can use the strace command to trace the execution of any executable. The following example traces the Linux uname command.
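A minimal invocation; each line of strace output shows a system call with its arguments and return value:

strace uname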

Counting number of syscalls

Run the ls command counting the number of times each system call was made and print totals showing the number and time spent in each call (useful for basic profiling or bottleneck isolation):
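The -c flag does the counting and prints the summary table on exit:

strace -c ls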

Save the Trace Execution to a File Using Option -o

The following example stores the strace output in the file output.txt.
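For example:

strace -o output.txt ls
cat output.txt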

Print Timestamp for Each Trace Output Line Using Option -t

To print the timestamp for each strace output line, use the option -t as shown below.
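For example (use -tt instead if you want microsecond resolution):

strace -t ls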

Tracing only network related system calls

Trace just the network-related system calls of the ping command:
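The -e trace=network filter selects the network system call class:

strace -e trace=network ping -c 1 example.com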

 

Viewing files opened by a process/daemon using tracefile

tracefile: output from the traced command on stdout can mess up the output from strace, so keep the two streams separate (strace writes its trace to stderr).
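A sketch of the idea: discard the command's own stdout so only strace's output remains, then filter for successful opens:

strace -f -e trace=open,openat ls 2>&1 >/dev/null | grep -v ENOENT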

Notes

It is a pity that so much tracing clutter is produced by systems employing shared libraries.

It is instructive to think about system call inputs and outputs as data-flow across the user/kernel boundary. Because user-space and kernel-space are separate and address-protected, it is sometimes possible to make deductive inferences about process behavior using inputs and outputs as propositions.

In some cases, a system call will differ from the documented behavior or have a different name. For example, on System V-derived systems the true time(2) system call does not take an argument and the stat function is called xstat and takes an extra leading argument. These discrepancies are normal but idiosyncratic characteristics of the system call interface and are accounted for by C library wrapper functions.

On some platforms a process that has a system call trace applied to it with the -p option will receive a SIGSTOP. This signal may interrupt a system call that is not restartable. This may have an unpredictable effect on the process if the process takes no action to restart the system call.

 

Network namespaces – part 1

Linux namespaces are a relatively new kernel feature which is essential for the implementation of containers. A namespace wraps a global system resource into an abstraction which is bound only to processes within the namespace, providing resource isolation. In this article I discuss the network namespace and show a practical example.

What is namespace?

A namespace is a way of scoping a particular set of identifiers. Using a namespace, you can use the same identifier multiple times in different namespaces. You can also restrict the set of identifiers visible to particular processes.

For example, Linux provides namespaces for networking and processes, among other things. If a process is running within a process namespace, it can only see and communicate with other processes in the same namespace. So, if a shell in a particular process namespace ran ps waux, it would only show the other processes in the same namespace.

Linux network namespaces

In a network namespace, the scoped ‘identifiers’ are network devices; so a given network device, such as eth0, exists in a particular namespace. Linux starts up with a default network namespace, so if your operating system does not do anything special, that is where all the network devices will be located. But it is also possible to create further non-default namespaces, and create new devices in those namespaces, or to move an existing device from one namespace to another.

Each network namespace also has its own routing table, and in fact this is the main reason for namespaces to exist. A routing table is keyed by destination IP address, so network namespaces are what you need if you want the same destination IP address to mean different things at different times – which is something that OpenStack Networking requires for its feature of providing overlapping IP addresses in different virtual networks.

Each network namespace also has its own set of iptables (for both IPv4 and IPv6). So, you can apply different security to flows with the same IP addressing in different namespaces, as well as different routing.

Any given Linux process runs in a particular network namespace. By default this is inherited from its parent process, but a process with the right capabilities can switch itself into a different namespace; in practice this is mostly done using the ip netns exec NETNS COMMAND… invocation, which starts COMMAND running in the namespace named NETNS. Suppose such a process sends out a message to IP address A.B.C.D; the effect of the namespace is that A.B.C.D will be looked up in that namespace's routing table, and that will determine the network device that the message is transmitted through.

Let's play with network namespaces

By convention a named network namespace is an object at /var/run/netns/NAME that can be opened. The file descriptor resulting from opening /var/run/netns/NAME refers to the specified network namespace.

create a namespace
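Using the standard iproute2 tooling (ns1 is an arbitrary name):

ip netns add ns1
ip netns list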

power up loopback device
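The new namespace starts with its loopback device down; bring it up:

ip netns exec ns1 ip link set dev lo up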

open up a namespace shell
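Start a shell inside the namespace:

ip netns exec ns1 bash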

Now we can use this shell like a normal user shell, except that it sees only the ns1 namespace.

 

In part 2, I will explain how to connect to the internet from the ns1 namespace and how to add custom routes.

Speed up Ansible

Update to the latest version. Ansible 2.0 is slower than Ansible 1.9 because it included an important change to the execution engine allowing any user to choose the execution algorithm. In the versions that followed, and mostly in 2.1, big optimizations were made to increase execution speed, so be sure to be running the latest possible version.

Profiling Tasks

The best way I’ve found to time the execution of Ansible playbooks is by enabling the profile_tasks callback. This callback is included with Ansible and all you need to do to enable it is add callback_whitelist = profile_tasks to the [defaults] section of your ansible.cfg:
# ansible.cfg
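[defaults]
callback_whitelist = profile_tasks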

 

Enable pipelining

You can enable pipelining by simply adding pipelining = True to the [ssh_connection] section of your ansible.cfg, or by using the ANSIBLE_PIPELINING and ANSIBLE_SSH_PIPELINING environment variables.
# ansible.cfg
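[ssh_connection]
pipelining = True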
You’ll also need to make sure that requiretty is disabled in /etc/sudoers on the remote host, or become won’t work with pipelining enabled.

Enable Mitogen for Ansible

Enabling Mitogen for Ansible is as simple as downloading and extracting the plugin, then adding 2 lines to the [defaults] section of your ansible.cfg:
# ansible.cfg
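A sketch of those two lines; the plugin path is a placeholder for wherever you extracted Mitogen:

[defaults]
strategy_plugins = /path/to/mitogen-for-ansible/ansible_mitogen/plugins/strategy
strategy = mitogen_linear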

SSH multiplexing

The first thing to check is whether SSH multiplexing is enabled and used. This gives a tremendous speed boost because Ansible can reuse opened SSH sessions instead of negotiating a new one (actually more than one) for every task. Ansible has this setting turned on by default. It can be set in the configuration file as follows:
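These are Ansible's default ssh_args values:

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s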

But be careful when overriding ssh_args: if you don't set ControlMaster and ControlPersist in the override, Ansible will "forget" to use them.

To check whether SSH multiplexing is used, start Ansible with  -vvvv  option:
ansible test -vvvv -m ping

UseDNS

UseDNS is an SSH server setting (in the /etc/ssh/sshd_config file) which forces the server to check the client's PTR record upon connection. It may cause connection delays, especially with slow DNS servers on the server side. In modern Linux distributions this setting is turned off by default, which is correct.
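To disable it explicitly:

# /etc/ssh/sshd_config on the remote host
UseDNS no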

PreferredAuthentications

It is an SSH-client setting which informs server about preferred authentication methods. By default Ansible uses:
-o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
So if GSSAPI Authentication is enabled on the server (at the time of writing this it is turned on in RHEL EC2 AMI) it will be tried as the first option, forcing the client and server to make PTR-record lookups. But in most cases, we want to use only public key auth. We can force Ansible to do so by changing ansible.cfg:
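For example, keeping the multiplexing options from the previous section so they are not lost when ssh_args is overridden:

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey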

 

Facts Gathering

At the start of playbook execution, Ansible collects facts about the remote system (this is the default behaviour for ansible-playbook but not relevant to ansible ad-hoc commands). It is similar to calling the "setup" module and thus requires another ssh communication step. If you don't need any facts in your playbook (e.g. our test playbook), you can disable fact gathering:
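# playbook.yml
- hosts: all
  gather_facts: no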

Fork

Until this moment we discussed how to speed up playbook execution on a given remote host. But if you run a playbook against tens or hundreds of hosts, Ansible's internal performance becomes a bottleneck. For example, there is a preconfigured number of forks: the number of hosts that can be interacted with simultaneously. You can change this value in the ansible.cfg file:
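The value 20 below is just an example to tune against your resources:

[defaults]
forks = 20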

 

The default value is 5, which is quite conservative. You can experiment with this setting depending on your local CPU and network bandwidth resources.
Another thing about forks is that if you have a lot of servers to work with and a low number of available forks, your master ssh sessions may expire between tasks. Ansible uses the linear strategy by default, which executes one task for every host and then proceeds to the next task. If the time between task execution on the first server and on the last one is greater than ControlPersist, the master socket will expire by the time Ansible starts the following task on the first server, and a new ssh connection will be required.

Poll Interval

When a module is executed on the remote host, Ansible starts to poll for its result. The lower the interval between poll attempts, the higher the CPU load on the Ansible control host. But we want to have CPU available for a greater number of forks (see above). You can tweak the poll interval in ansible.cfg:
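[defaults]
internal_poll_interval = 0.001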

 

If you run "slow" jobs (like backups) on multiple hosts, you may want to increase the interval to 0.05 to use less CPU.
Hope this helps you speed up your setup. It seems there are no more items on the environment checklist, and further speed gains are only possible by optimizing your playbook code.

Asynchronous Actions and Polling

By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout.
To avoid blocking or timeout issues, you can use asynchronous mode to run all of your tasks at once and then poll until they are done.
The behaviour of asynchronous mode depends on the value of poll.

Avoid connection timeouts: poll > 0

When poll is a positive value, the playbook will still block on the task until it either completes, fails or times out.
In this case, however, async explicitly sets the timeout you wish to apply to this task rather than being limited by the connection method timeout.
To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status. The default poll value is 15 seconds if you do not specify a value for poll:
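For example (the sleep stands in for any long-running operation; async is the maximum runtime in seconds, poll the polling interval):

- hosts: all
  tasks:
    - name: simulate long running op (15 sec), wait up to 45 sec, poll every 5 sec
      command: /bin/sleep 15
      async: 45
      poll: 5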

 

Concurrent tasks: poll = 0

When poll is 0, Ansible will start the task and immediately move on to the next one without waiting for a result.
From the point of view of sequencing this is asynchronous programming: tasks may now run concurrently.
The playbook run will end without checking back on async tasks.
The async tasks will run until they either complete, fail or timeout according to their async value.
If you need a synchronization point with a task, register it to obtain its job ID and use the async_status module to observe it.
You may run a task asynchronously by specifying a poll value of 0:
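A sketch of fire-and-forget plus a later synchronization point via async_status:

- hosts: all
  tasks:
    - name: simulate long running op, allow to run for 45 sec, fire and forget
      command: /bin/sleep 15
      async: 45
      poll: 0
      register: sleeper

    - name: check on the fire-and-forget task later
      async_status:
        jid: "{{ sleeper.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 30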

 

Enable fact_caching

By enabling this value we're telling Ansible to keep the facts it gathers in a local file; you can also point it at a redis cache (see the documentation for details).
Fact caching is what happens when Ansible says "Gathering facts" about your target hosts. If your targets' hardware (or virtual hardware) does not change very often, this can be very helpful.
If you still need some of the fact groups but the gathering process is too slow for you, try fact caching. Caching enables Ansible to store the facts for a given host in a backend such as jsonfile, redis or memcached. More information on the caching plugins can be found in the Ansible documentation.
This is an example configuration of fact caching in json files, added to your ansible.cfg:
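The cache path and 24-hour timeout below are examples:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400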

References:

1. https://dzone.com/articles/speed-up-ansible
2. https://habr.com/en/post/453446/
3. https://www.toptechskills.com/ansible-tutorials-courses/speed-up-ansible-playbooks-pipelining-mitogen/
4. https://www.youtube.com/watch?v=NZUYAbGs-ec

How to configure a Django app using gunicorn?

Django

Django is a Python web framework used for developing web applications. It is fast, secure and scalable. Let us see how to configure a Django app using gunicorn.

Before proceeding to actual configuration, let us see some intro on the Gunicorn.

Gunicorn

Gunicorn (Green Unicorn) is a WSGI (Web Server Gateway Interface) server implementation commonly used to run Python web applications. It implements the PEP 3333 server standard, so it can run any web application that implements the application interface, as applications written in Django, Flask or Bottle do.

Installation

Gunicorn coupled with Nginx or any web server works as a bridge between the web server and web framework. Web server (Nginx or Apache) can be used to serve static files and Gunicorn to handle requests to and responses from Django application. I will try to write another blog in detail on how to set up a django application with Nginx and Gunicorn.
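Gunicorn itself installs with pip:

pip install gunicorn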

Prerequisites

Please make sure you have the below packages installed on your system; a basic understanding of Python, Django and Gunicorn is recommended.

  • Python > 3.5
  • Gunicorn > 15.0
  • Django > 1.11

Configure Django App Using Gunicorn

There are different ways to configure Gunicorn; I am going to demonstrate running the Django app using a gunicorn configuration file.

First, let us start by creating the Django project; you can do so as follows.
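Here mysite is a placeholder project name, used throughout the sketches below:

django-admin startproject mysite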

After starting the Django project, the directory structure looks like this.
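For a project named mysite it looks like:

mysite/
├── manage.py
└── mysite/
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py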

The simplest way to run your django app using gunicorn is the following command; you must run it from the folder containing manage.py.
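gunicorn mysite.wsgi:application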

This will run your Django project on port 8000 locally.

Configuration

Now let's see how to configure the django app using a gunicorn configuration file. A simple Gunicorn configuration with worker class sync will look like this:
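A sketch of such a file; all paths and values are assumptions to adapt to your project, and the numbered notes below refer to it:

# gunicorn.conf.py
import multiprocessing
import os
import sys

# 1. append the base directory to the system path so the app imports
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(BASE_DIR)

# 2. bind the application to an address (or a unix socket)
bind = "127.0.0.1:8000"

# 3. maximum number of pending connections
backlog = 2048

# 4. number of workers, based on the machine's CPU count
workers = multiprocessing.cpu_count() * 2 + 1

# 5. worker class; sync is the default
worker_class = "sync"

# log file referred to later in this post
errorlog = os.path.join(BASE_DIR, "gunicorn.error.log")
loglevel = "info"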

Let us see a few important details in the above configuration file.

  1. Append the base directory path to your system's path.
  2. You can bind the application to an address or socket using bind.
  3. backlog: the maximum number of pending connections.
  4. workers: the number of workers to handle requests. This is based on your machine's CPU count and can be varied based on your application's workload.
  5. worker_class: there are different types of worker classes; you can refer here for the details. sync is the default and should handle normal types of loads.

You can read more about the available Gunicorn settings here.

Running Django with gunicorn as a daemon process

Here is a sample systemd unit file:
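A sketch; the user, group, paths and project name mysite are assumptions:

# /etc/systemd/system/gunicorn.service
[Unit]
Description=gunicorn daemon for a Django app
After=network.target

[Service]
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/mysite
ExecStart=/home/ubuntu/venv/bin/gunicorn -c /home/ubuntu/mysite/gunicorn.conf.py mysite.wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target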

After adding the file under /etc/systemd/system/, execute the following command so systemd picks up the new unit:
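sudo systemctl daemon-reload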

NOTE: make sure to install the required packages; gunicorn fails to start if any are missing. You can find more info in the error logfile mentioned in the configuration file.

Start, Stop and Status of Application using systemctl

Now you can simply execute the following commands for your application.

To start your application

To stop your application.

To check the status of your application.
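Assuming the unit file above was saved as gunicorn.service:

sudo systemctl start gunicorn     # start your application
sudo systemctl stop gunicorn      # stop your application
sudo systemctl status gunicorn    # check the status of your application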

Please refer to the short, complete video tutorial below on configuring the Django app.

Using WebSockets in Javascript

WebSockets is a next-generation bidirectional communication technology for web applications which operates over a single socket and is exposed via a JavaScript interface in HTML 5 compliant browsers.

Using websockets in javascript

Once you get a WebSocket connection to the web server, you can send data from browser to server by calling the send() method, and receive data from server to browser via the onmessage event handler.

Following is the API which creates a new WebSocket object.
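var Socket = new WebSocket(url, [protocol]);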

Here the first argument, url, specifies the URL to connect to. The second argument, protocol, is optional; if present, it specifies a sub-protocol that the server must support for the connection to be successful.

A simple example

To open a websocket connection, we need to create new WebSocket using the special protocol ws in the url:
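For example (the URL is a placeholder):

let socket = new WebSocket("ws://example.com/socketserver");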

There's also the encrypted wss:// protocol. It's like HTTPS for websockets.

Always prefer wss://

The wss:// protocol is not only encrypted, but also more reliable.

That's because ws:// data is not encrypted and is visible to any intermediary. Old proxy servers that do not know about WebSocket may see "strange" headers and abort the connection.

On the other hand, wss:// is WebSocket over TLS (just as HTTPS is HTTP over TLS): the transport security layer encrypts the data at the sender and decrypts it at the receiver, so it passes through proxies encrypted. They can't see what's inside and let it through.

WebSocket Attributes

Following are the attributes of the WebSocket object. Assuming we created the Socket object as mentioned above:

1. Socket.readyState

The read-only attribute readyState represents the state of the connection. It can have the following values:

  • A value of 0 indicates that the connection has not yet been established.
  • A value of 1 indicates that the connection is established and communication is possible.
  • A value of 2 indicates that the connection is going through the closing handshake.
  • A value of 3 indicates that the connection has been closed or could not be opened.

2. Socket.bufferedAmount

The read-only attribute bufferedAmount represents the number of bytes of UTF-8 text that have been queued using the send() method.

WebSocket Events

Following are the events associated with the WebSocket object. Assuming we created the Socket object as mentioned above:

  • open (Socket.onopen): occurs when the socket connection is established.
  • message (Socket.onmessage): occurs when the client receives data from the server.
  • error (Socket.onerror): occurs when there is any error in communication.
  • close (Socket.onclose): occurs when the connection is closed.

WebSocket Methods

These are the methods associated with the WebSocket object. Assuming we created the Socket object as mentioned above:

1. Socket.send()

The send(data) method transmits data using the connection.

2. Socket.close()

The close() method is used to terminate any existing connection.

WebSocket Example

WebSocket is a standard bidirectional TCP socket between the client and the server. The socket starts out as an HTTP connection and then "upgrades" to a TCP socket after an HTTP handshake. After the handshake, either side can send data.

Client Side HTML & JavaScript Code

At the time of writing this tutorial, only a few web browsers support the WebSocket() interface. You can try the following example with the latest versions of Chrome, Mozilla Firefox, Opera and Safari.
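Below is a minimal client sketch. The server URL ws://localhost:9998/echo is an assumption; point it at your own WebSocket server.

<!DOCTYPE html>
<html>
  <head>
    <script type="text/javascript">
      function WebSocketTest() {
        if ("WebSocket" in window) {
          // open a web socket; the echo endpoint is a placeholder
          var ws = new WebSocket("ws://localhost:9998/echo");

          ws.onopen = function () {
            // web socket is connected; send data using send()
            ws.send("Message to send");
            alert("Message is sent...");
          };

          ws.onmessage = function (evt) {
            alert("Message is received: " + evt.data);
          };

          ws.onclose = function () {
            alert("Connection is closed...");
          };
        } else {
          alert("WebSocket NOT supported by your Browser!");
        }
      }
    </script>
  </head>
  <body>
    <a href="javascript:WebSocketTest()">Run WebSocket</a>
  </body>
</html>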

 

5 Ways to Speed Up SSH Connections in Linux

SSH is the most popular and secure method for managing Linux servers remotely. One of the challenges with remote server management is connection speeds, especially when it comes to session creation between the remote and local machines.

There are several bottlenecks to this process. One scenario is when you are connecting to a remote server for the first time; it normally takes a few seconds to establish a session. However, when you try to start multiple connections in succession, this causes overhead (a combination of excess or indirect computation time, memory, bandwidth, or other related resources needed to carry out the operation).

In this article, we will share five useful tips on how to speed up remote SSH connections in Linux.

1. Use Compression Option in SSH

The relevant option is -C, which requests compression of all data. From the ssh man page (type man ssh to see the whole thing): compression is desirable on modem lines and other slow connections, but will only slow things down on fast networks.
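For example:

ssh -C user@remote-server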

 

2. Force SSH Connection Over IPv4

OpenSSH supports both IPv4 and IPv6, but at times IPv6 connections tend to be slower. So you can consider forcing ssh connections over IPv4 only, using the syntax below:
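ssh -4 user@remote-server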

Alternatively, use the AddressFamily directive (it specifies the address family to use when connecting) in your /etc/ssh/ssh_config (global configuration) or ~/.ssh/config (user-specific file).

The accepted values are “any”, “inet” for IPv4 only, or “inet6”.

AddressFamily inet

3. Reuse SSH Connection

An ssh client program is used to establish connections to an sshd daemon accepting remote connections. You can reuse an already-established connection when creating a new ssh session and this can significantly speed up subsequent sessions.

You can enable this in your ~/.ssh/config file.

ControlMaster auto
ControlPath /home/akhil/.ssh/sockets/ssh_mux_%x_%p_%r
ControlPersist yes
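Note that the ControlPath directory must exist before the first connection; create it once:

mkdir -p ~/.ssh/sockets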

Stock openssh doesn't support %x (the IP address) in control paths; use my repo instead:

https://github.com/akhilin/openssh-portable.git

or use %h to use the hostname instead of the IP address.

Using the IP address is recommended so that the same socket is reused even if you connect using different hostnames (very useful when using ansible or pdsh).

4. Use Specific SSH Authentication Method

Another way of speeding up ssh connections is to use a given authentication method for all ssh connections, and here we recommend configuring ssh passwordless login using ssh keygen in 5 easy steps.

Once that is done, use the PreferredAuthentications directive within the ssh_config files (global or user-specific), as above. This directive defines the order in which the client should try authentication methods (you can specify a comma-separated list to use more than one method).

PreferredAuthentications=publickey

If you prefer password authentication (which is considered insecure), set PreferredAuthentications=password instead.

5. Disable DNS Lookup On Remote Machine

By default, the sshd daemon looks up the remote host name and checks that the resolved host name for the remote IP address maps back to the very same IP address. This can result in delays in connection establishment or session creation.

The UseDNS directive controls the above functionality; to disable it, search and uncomment it in the /etc/ssh/sshd_config file. If it’s not set, add it with the value no.

UseDNS=no

Configure Celery with SQS and Django on Elastic Beanstalk

 Introduction

Have your users complained about loading issues on the web app you developed? That might be because of a long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website we need to send a confirmation email; normally the email is sent before the 200 OK response to the signup POST is returned. However, we could send the email later, after sending the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The first thought that comes to mind is Python's threading module. However, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL) a Python process only runs one thread at a time. Threads also make code hard to manage, maintain and scale.

Prerequisite

This blog assumes you have knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery comes to the rescue. It can help when you have a time-consuming task (heavy compute or I/O-bound) in the request-response cycle. Celery is an open source asynchronous task queue, or job queue, based on distributed message passing. In this post I will walk you through the celery setup procedure with django and SQS on elastic beanstalk.

Why Celery?

Celery is very easy to integrate with an existing code base: just write a decorator above the definition of a function to declare it a celery task, then call that function with its .delay method.
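For example, a minimal sketch (the task name and app layout are assumptions):

# myapp/tasks.py
from celery import shared_task

@shared_task
def send_confirmation_email(user_id):
    # ... look up the user and send the email here ...
    print("sending confirmation email to user %s" % user_id)

# calling it from a view queues it on the broker instead of running inline:
# send_confirmation_email.delay(user.id)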

Broker

To work with celery, we need a message broker. As of writing this blog, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info check here. One reason to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new SQS queue by clicking the Create New Queue button.

Depending upon the requirements we can select any type of queue. We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described here. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install celery.

pip install celery[sqs]

Configuration

Celery has built-in support for django. It will pick up its settings parameters from django's settings.py, where they are prefixed with CELERY_ (the 'CELERY' namespace needs to be declared while initializing the celery app). So put the below settings in settings.py:
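A minimal sketch; the region and queue prefix are assumptions matching the dev-celery queue created above:

# settings.py
CELERY_BROKER_URL = "sqs://"
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "us-east-1",
    "queue_name_prefix": "dev-",   # so the queue dev-celery is used
    "polling_interval": 10,
    "visibility_timeout": 3600,
}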

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

Now let's configure the celery app within the django code. Create a celery.py file beside django's settings.py:
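This follows the standard layout from celery's django guide; "proj" is a placeholder for your project name:

# proj/celery.py
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings")

app = Celery("proj")

# read CELERY_-prefixed settings from django's settings.py
app.config_from_object("django.conf:settings", namespace="CELERY")

# auto-discover tasks.py modules in installed apps
app.autodiscover_tasks()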

Now put the below code in the project's __init__.py:
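# proj/__init__.py
from __future__ import absolute_import, unicode_literals

# ensure the celery app is loaded when django starts,
# so that shared_task uses it
from .celery import app as celery_app

__all__ = ("celery_app",)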

Testing

Now let's test the configuration. Open a terminal and start celery:

Terminal 1
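Assuming the project is named proj, as in the celery.py sketch above:

celery -A proj worker --loglevel=INFO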

 

All the tasks registered to use celery via the celery decorators appear here when starting celery. If your task does not appear in the list, make sure the module containing the task is imported on startup.

Now open a django shell in another terminal:

Terminal 2
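The task module and argument below are illustrative:

python manage.py shell
>>> from myapp.tasks import send_confirmation_email
>>> send_confirmation_email.delay(1)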

After executing the task function with the delay method, the task should run in the worker process listening for events in the other terminal: celery sends a message to SQS with the details of the task, the worker process listening to SQS receives it, and the task is executed in the worker process. Below is what you should see in terminal 1.

Terminal 1

Deploy celery worker process on AWS elastic beanstalk

Celery provides the "multi" sub-command to run workers in daemon mode, but this is not recommended for production. Celery instead recommends various daemonization tools: http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

AWS elastic beanstalk already uses supervisord for managing the web server process, and celery can also be configured through supervisord. Celery's official documentation has a nice example of a supervisord config for celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the celery configuration, which is copied to /opt/python/etc/celery.conf, after which supervisord is restarted. Here is the main celery command:
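A rough sketch of the command that ends up in celery.conf (the project name is assumed; see the note on -P solo below):

celery worker -A proj --loglevel=INFO -P solo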

At the time of writing this blog celery had the issue https://github.com/celery/celery/issues/3759. As a workaround we add "-P solo", which runs tasks sequentially in a single worker process.

Now create the elastic beanstalk configuration file as below. Make sure you have pycurl and celery in requirements.txt. To install pycurl, libcurl-devel needs to be installed via the yum package manager.
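A rough sketch of such a config; the file name and commands are assumptions, not the exact files from this deployment:

# .ebextensions/99-celery.config
packages:
  yum:
    libcurl-devel: []

container_commands:
  01_configure_celery:
    command: "bash .ebextensions/celery.sh"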

Add these files to git and deploy to elastic beanstalk.

Below is the figure describing the architecture with django, celery and elastic beanstalk.

FCM – send push notifications using Python

What is FCM ?

FCM (Firebase Cloud Messaging) is a cross-platform (Android, iOS and Chrome) messaging solution that lets you reliably deliver messages at no cost. FCM is best suited if you want to send push notifications to an app which you built to run on Android and iOS. The advantage is that you don't have to deal separately with GCM (Google Cloud Messaging, now deprecated) and Apple's APNS: you hand your notification message over to FCM, and FCM takes care of communicating with Apple's APNS and the Android messaging servers to reliably deliver those messages.


Using FCM we can send a message to a single device or to multiple devices. There are two different types of messages: notification and data. Notification messages include JSON keys that are understood and interpreted by the phone's operating system; if you want to include customized app-specific JSON keys, use a data message. You can combine both notification and data JSON objects in a single message. You can also send messages with different priorities.

Note: you need to set priority to high if you want the phone to wake up and show the notification on the screen.

Sending message with Python

We can use PyFCM to send messages via FCM. PyFCM is good for synchronous (blocking) Python; we will discuss a non-blocking option in the next section.

Install PyFCM using the following command:
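pip install pyfcm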

The following code will send a push notification to a single device:
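A minimal sketch using PyFCM; the API key and registration id are placeholders:

from pyfcm import FCMNotification

push_service = FCMNotification(api_key="<fcm-server-key>")

registration_id = "<device registration id>"
result = push_service.notify_single_device(
    registration_id=registration_id,
    message_title="Hello",
    message_body="Hello from FCM + Python!",
)
print(result)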

So, the PyFCM API is pretty straightforward to use.

Sending FCM push notification using Twisted

PyFCM, discussed in the paragraph above, is good enough if you want to send messages in a blocking fashion. If you have to send a high number of concurrent messages, then using Twisted is a good option.

Twisted Matrix

Network operations performed using the twisted library don't block, which makes it a good choice when network concurrency is required by a program. We can use the txFCM library to send FCM messages using twisted.

Install txFCM using the following command:
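Assuming the package is published on PyPI as txfcm:

pip install txfcm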

The following code sends an FCM message using txFCM:
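A rough sketch; since txFCM mirrors PyFCM's API, the class and method names below follow PyFCM and should be verified against the txFCM README:

from twisted.internet import reactor
from txfcm import TXFCMNotification

push_service = TXFCMNotification(api_key="<fcm-server-key>")

# the notify call returns a Deferred instead of blocking
d = push_service.notify_single_device(
    registration_id="<device registration id>",
    message_title="Hello",
    message_body="Hello from txFCM!",
)

def on_result(result):
    print(result)
    reactor.stop()

d.addCallback(on_result)
reactor.run()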

txFCM is built on top of PyFCM, so all the API calls available in PyFCM are also available in txFCM.

Setting Up Ansible for AWS with Dynamic Inventory (EC2)

If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in Working with Inventory will not serve your needs. You may need to track hosts from multiple sources.

Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: Inventory Plugins and inventory scripts.

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the EC2 external inventory script.

You can use this script in one of two ways.

  1. The easiest is to use Ansible's -i command-line option and specify the path to the script after marking it executable (see the example after this list).
  2. The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.
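For example (the remote user and host pattern are illustrative):

ansible -i ec2.py -u ubuntu us-east-1d -m ping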

You can test the script by itself to make sure your config is correct:
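./ec2.py --list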


After a few moments, you should see your entire EC2 inventory across all regions in JSON.

If you use Boto profiles to manage multiple AWS accounts, you can pass --profile PROFILE name to the ec2.py script.

You can then run ec2.py --profile prod to get the inventory for the prod account, although this option is not supported by ansible-playbook. You can also use the AWS_PROFILE environment variable, for example:
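The playbook name is illustrative:

AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml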

 
