Tag Archives: django

How to set up Apache proxy for Django application

Apache HTTP Server can be configured in both a forward and reverse proxy (also known as gateway) mode.

forward proxy

An ordinary forward proxy is an intermediate server that sits between the client and the origin server. In order to get content from the origin server, the client sends a request to the proxy naming the origin server as the target. The proxy then requests the content from the origin server and returns it to the client. The client must be specially configured to use the forward proxy to access other sites.

A typical usage of a forward proxy is to provide Internet access to internal clients that are otherwise restricted by a firewall. The forward proxy can also use caching (as provided by mod_cache) to reduce network usage.

The forward proxy is activated using the ProxyRequests directive. Because forward proxies allow clients to access arbitrary sites through your server and to hide their true origin, it is essential that you secure your server so that only authorized clients can access the proxy before activating a forward proxy.
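For illustration, a minimal forward proxy configuration might look like the sketch below (the access restriction is essential, as noted above; the client network range is only an example):

ProxyRequests On
<Proxy "*">
    # Only allow clients from the internal network (example range)
    Require ip 10.0.0.0/8
</Proxy>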

reverse proxy

A reverse proxy (or gateway), by contrast, appears to the client just like an ordinary web server. No special configuration on the client is necessary. The client makes ordinary requests for content in the namespace of the reverse proxy. The reverse proxy then decides where to send those requests and returns the content as if it were itself the origin.

A typical usage of a reverse proxy is to provide Internet users access to a server that is behind a firewall. Reverse proxies can also be used to balance load among several back-end servers or to provide caching for a slower back-end server. In addition, reverse proxies can be used simply to bring several servers into the same URL space.

A reverse proxy is activated using the ProxyPass directive or the [P] flag to the RewriteRule directive. It is not necessary to turn ProxyRequests on in order to configure a reverse proxy.

django application

I am running my Gunicorn application on port 8090 using the following command.

/opt/venv/bin/python3.6 /opt/venv/bin/gunicorn --config /etc/controlpanel/gunicorn/controlpanel.py --pid /var/run/controlpanel.pid controlpanel.wsgi:application

Static files path is /opt/controlpanel/ui-ux/static/

apache config (/etc/apache2/sites-enabled/cp.conf)

  • enable the mod_proxy module in Apache (see the commands below)
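On Debian/Ubuntu, for example, the proxy modules can be enabled with a2enmod (mod_proxy_http is the usual companion module for HTTP proxying):

sudo a2enmod proxy proxy_http
sudo systemctl restart apache2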
<VirtualHost *:80>

ServerName devcontrol.lintel.com
ErrorLog /var/log/httpd/cp_error.log
CustomLog /var/log/httpd/cp_access.log combined

ProxyPreserveHost On
ProxyPass /static !
ProxyPass / http://127.0.0.1:8090/
ProxyPassReverse / http://127.0.0.1:8090/
ProxyTimeout 300

Alias /static/ /opt/controlpanel/ui-ux/static/
<Directory "/opt/controlpanel/ui-ux/static/">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>

</VirtualHost>

After deploying on Apache, you can use Let's Encrypt to install SSL certificates.
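For example, assuming certbot with its Apache plugin is installed, a certificate for the virtual host above can be requested like this:

sudo certbot --apache -d devcontrol.lintel.com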

How to configure a Django app using Gunicorn?

Django

Django is a python web framework used for developing web applications. It is fast, secure and scalable. Let us see how to configure the Django app using gunicorn.

Before proceeding to the actual configuration, let us see a short intro to Gunicorn.

Gunicorn

Gunicorn (Green Unicorn) is a WSGI (Web Server Gateway Interface) server implementation commonly used to run Python web applications. It implements the PEP 3333 server specification, so it can run any web application that implements the application interface, such as those written in Django, Flask, or Bottle.

Installation

pip3 install gunicorn

Gunicorn coupled with Nginx or any web server works as a bridge between the web server and web framework. Web server (Nginx or Apache) can be used to serve static files and Gunicorn to handle requests to and responses from Django application. I will try to write another blog in detail on how to set up a django application with Nginx and Gunicorn.

Prerequisites

Please make sure you have the packages below installed on your system; a basic understanding of Python, Django, and Gunicorn is recommended.

  • Python > 3.5
  • Gunicorn > 15.0
  • Django > 1.11

Configure Django App Using Gunicorn

There are different ways to configure Gunicorn; I am going to demonstrate running the Django app using the Gunicorn configuration file.

First, let us create the Django project; you can do so as follows.

django-admin startproject webapp

After starting the Django project, the directory structure looks like this.
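It is the standard layout generated by django-admin:

webapp/
├── manage.py
└── webapp
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py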

The simplest way to run your Django app using Gunicorn is with the following command; you must run it from the directory containing manage.py.

gunicorn webapp.wsgi

This will run your Django project locally on port 8000.

Configuration

Now let’s see how to configure the Django app using a Gunicorn configuration file. A simple Gunicorn configuration with the worker class `sync` will look like this.

import multiprocessing
import sys

BASE_DIR = "/path/to/base/dir/"
sys.path.append(BASE_DIR)


bind = '127.0.0.1:8000'
backlog = 2048


workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'sync'
worker_connections = 1000
timeout = 300
keepalive = 2

#
#   spew - Install a trace function that spews every line of Python
#       that is executed when running the server. This is the
#       nuclear option.
#
#       True or False
#

spew = False


#errorlog = '-'

accesslog = '/var/log/webapp_access_log'
loglevel = 'debug'
errorlog = '/var/log/webapp_error_log'


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)


def pre_fork(server, worker):
    pass


def pre_exec(server):
    server.log.info("Forked child, re-executing.")


def when_ready(server):
    server.log.info("Server is ready. Spawning workers")


def worker_int(worker):
    worker.log.info("worker received INT or QUIT signal")

    ## get traceback info
    import threading, sys, traceback
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId,""),
            threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename,
                lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    worker.log.debug("\n".join(code))


def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")

Let us look at a few important details in the above configuration file.

  1. Append the base directory path to your system's path.
  2. `bind`: the address (or socket) the application is bound to.
  3. `backlog`: the maximum number of pending connections.
  4. `workers`: the number of workers handling requests, based on your machine's CPU count; vary this according to your application's workload.
  5. `worker_class`: there are different types of worker classes; you can refer here for the options. `sync` is the default and should handle normal types of loads.

You can read more about the available Gunicorn settings here.
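To run the app with this configuration file, point Gunicorn at it (the paths here are placeholders, matching the systemd unit shown below):

gunicorn --config /path/to/gunicorn/config.py webapp.wsgi:application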

Running Django with Gunicorn as a daemon process

Here is a sample systemd unit file:

[Unit]
Description=webapp daemon
After=network.target

[Service]
PIDFile=/var/run/webapp.pid
WorkingDirectory=/path/to/base/dir/
ExecStart=/usr/local/bin/gunicorn --config /path/to/gunicorn/config.py --pid /var/run/webapp.pid webapp.wsgi:application
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Add the file to /etc/systemd/system/ (for example, as webapp.service, to match the commands below). To load the new unit file, execute the following command.

systemctl daemon-reload
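To have the application start automatically at boot, also enable the unit:

systemctl enable webapp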

Note: make sure the required packages are installed; Gunicorn fails to start if any are missing. You can find more information in the error log file mentioned in the configuration file.

Start, Stop and Status of Application using systemctl

Now you can simply execute the following commands for your application.

To start your application

systemctl start webapp

To stop your application.

systemctl stop webapp

To check the status of your application.

systemctl status webapp

A short, complete video tutorial on configuring the Django app is available below.

Configure Celery with SQS and Django on Elastic Beanstalk

Introduction

Have your users complained about slow loading in the web app you developed? That might be because of some long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website, we need to send a confirmation email; in the normal case the email is sent and then a 200 OK response is returned for the signup POST. However, we could send the email later, after sending the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The first thought that comes to mind is the Python threading module. Well, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL), a Python process only runs one thread at a time. Besides, threads are hard to manage, keep maintainable, and scale.

Prerequisite

This blog assumes the reader has knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery comes to the rescue. It can help when you have a time-consuming task (a heavy compute or I/O-bound task) in the request-response cycle. Celery is an open source asynchronous task queue, or job queue, based on distributed message passing. In this post I will walk you through the Celery setup procedure with Django and SQS on Elastic Beanstalk.

Why Celery?

Celery is very easy to integrate with an existing code base: write a decorator above a function's definition to declare it a Celery task, then call that function through its .delay method.

from celery import Celery

app = Celery('hello', broker='amqp://guest@localhost//')

@app.task
def hello():
    return 'hello world'
# Calling a celery task
hello.delay()

Broker

To work with Celery, we need a message broker. As of writing this blog, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info, check here. One reason to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new queue by clicking the Create New Queue button.

Depending upon the requirement, we can select either type of queue. We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described here. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install Celery.

pip install celery[sqs]

Configuration

Celery has built-in support for Django. It picks up its settings from Django's settings.py, reading the parameters prefixed with CELERY_ (the 'CELERY' word needs to be defined as the namespace while initializing the Celery app). So put the setting parameters below in settings.py.

# Amazon credentials will be taken from environment variable.
CELERY_BROKER_URL = 'sqs://'

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

CELERY_BROKER_TRANSPORT_OPTIONS = {'region': 'us-west-2',
                                   'visibility_timeout': 3600,
                                   'polling_interval': 10,
                                   'queue_name_prefix': '%s-' % {True: 'dev',
                                                                 False: 'production'}[DEBUG],
                                   'CELERYD_PREFETCH_MULTIPLIER': 0,
                                  }


Now let’s configure the Celery app within the Django code. Create a celery.py file beside Django’s settings.py.

from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

Now put the code below in the project's __init__.py.

from __future__ import absolute_import, unicode_literals

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ('celery_app',)
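With this in place, any app in INSTALLED_APPS can define tasks in its own tasks.py and they will be autodiscovered. A minimal, hypothetical example (the app name and task are made up for illustration):

# myapp/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    # Executed by a worker when called as add.delay(2, 3)
    return x + y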

Testing

Now let’s test the configuration. Open a terminal and start Celery.

Terminal 1

$ celery worker --app=proj --loglevel=INFO
-------------- celery@lintel v4.1.0 (latentcall)
---- **** -----
--- * ***  * -- Linux-4.15.0-24-generic-x86_64-with-Ubuntu-18.04-bionic 2018-07-04 11:18:57
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         enq_web:0x7f0ba29fa3d0
- ** ---------- .> transport:   sqs://localhost//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
               .> celery           exchange=celery(direct) key=celery
[tasks]
 . enq_web._celery.debug_task

 

All the tasks registered with Celery decorators appear here when Celery starts. If your task does not appear here, make sure the module containing the task is imported on startup.

Now open the Django shell in another terminal.

Terminal 2

$ python manage.py shell

In [1]: from proj import celery
In [2]: celery.debug_task()        # called directly, not through Celery
In [3]: celery.debug_task.delay()  # this goes through Celery

After executing the task function with the delay method, the task should run in the worker process listening for events in the other terminal. Celery sent a message to SQS with the details of the task; the worker process listening to SQS received it and executed the task. Below is what you should see in terminal 1.

Terminal 1

Request: <Context: {'origin': 'gen14099@lintel', u'args': [], 'chain': None, 'root_id': '041be6c3-419d-4aa0-822f-d50da1b340a0', 'expires': None, u'is_eager': False, u'correlation_id': '041be6c3-419d-4aa0-822f-d50da1b340a0', 'chord': None, u'reply_to': 'd2e76b9b-094b-33b4-a873-db5d2ace8881', 'id': '041be6c3-419d-4aa0-822f-d50da1b340a0', 'kwargsrepr': '{}', 'lang': 'py', 'retries': 0, 'task': 'proj.celery.debug_task', 'group': None, 'timelimit': [None, None], u'delivery_info': {u'priority': 0, u'redelivered': None, u'routing_key': 'celery', u'exchange': u''}, u'hostname': u'celery@lintel', 'called_directly': False, 'parent_id': None, 'argsrepr': '()', 'errbacks': None, 'callbacks': None, u'kwargs': {}, 'eta': None, '_protected': 1}>

Deploy the Celery worker process on AWS Elastic Beanstalk

Celery provides the “multi” subcommand to run processes in daemon mode, but this should not be used in production. Celery recommends various daemonization tools: http://docs.celeryproject.org/en/latest/userguide/daemonizing.html

AWS Elastic Beanstalk already uses supervisord for managing the web server process, and Celery can also be managed with supervisord. Celery's official documentation has a nice example of a supervisord config for Celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write quite a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the Celery configuration, which is copied to the /opt/python/etc/celery.conf file, after which supervisord is restarted. The main Celery command is:

celery worker -A PROJECT_NAME -P solo --loglevel=INFO -n worker.%%h

At the time of writing this blog, Celery had the issue https://github.com/celery/celery/issues/3759. As a workaround, we add “-P solo”, which runs tasks sequentially in a single worker process.

#!/usr/bin/env bash

# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
celeryenv=${celeryenv%?}

# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A PROJECT_NAME -P solo --loglevel=INFO -n worker.%%h

directory=/opt/python/current/app/enq_web
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv
"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf

# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# Reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread

# Update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

# Start/Restart celeryd through supervisord
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker

Now create the Elastic Beanstalk configuration file as below. Make sure you have pycurl and celery in requirements.txt. To install pycurl, libcurl-devel needs to be installed via the yum package manager.

packages:
  yum:
    libcurl-devel: []

container_commands:
    01_mkdir_for_log_and_pid:
        command: "mkdir -p /var/log/celery/ /var/run/celery/"
    02_celery_configure:
        command: "cp .ebextensions/celery-worker.sh /opt/elasticbeanstalk/hooks/appdeploy/post/ && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/celery-worker.sh"
        cwd: "/opt/python/ondeck/app"
    03_celery_run:
        command: "/opt/elasticbeanstalk/hooks/appdeploy/post/celery-worker.sh"

Add these files to git and deploy to Elastic Beanstalk.

Below is a figure describing the architecture with Django, Celery, and Elastic Beanstalk.

Implementing a Webhook Handler in Python

What is a Webhook?

A webhook is an asynchronous HTTP callback fired on an event occurrence. It is a simple server-to-server communication for reporting a specific event that occurred on a server. The server on which the event occurred fires an HTTP POST request to another server, at a URL provided by the receiving server.

For example, whenever your colleague pushes code commits to GitHub, an event has occurred on GitHub's server. Now if a webhook URL is provided in the GitHub settings, a webhook will be fired to that URL. This webhook will be an HTTP POST request with commit details inside the body in a specified format. More details on GitHub webhooks can be found here.

In this post, I will share my experience of implementing a webhook handler in Python. Basic knowledge of implementing web applications in Python is recommended.

Webhook Handler

A webhook can be handled by simply providing a URL endpoint in a web application. Following is an example using Django. Add the webhook URL in urls.py:

from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^webhook', views.webhook, name='webhook'),
]

Now create the view function in views.py, which will parse the data and process it. In most cases, webhook data is sent in JSON format, so let's load the webhook data and send it to the process_webhook function.

Most web applications accept a POST request only after verifying its CSRF token, but here we need to exempt the webhook from this check. So put the @csrf_exempt decorator above the view function. Also put the @require_POST decorator on it to ensure the request is only POST.

import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

@csrf_exempt
@require_POST
def webhook(request):

    # Load the event data from JSON
    data = json.loads(request.body)
    # And process it
    process_webhook(data)

    return HttpResponse('Processed.', status=200)
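To simulate a webhook locally, you can POST a sample event to the endpoint; a sketch, with a made-up event name and payload:

import json
import requests

payload = {'name': 'event.one', 'data': {'id': 42}}
resp = requests.post('http://localhost:8000/webhook',
                     data=json.dumps(payload),
                     headers={'Content-Type': 'application/json'})
print(resp.status_code, resp.text)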

The URL endpoint implementation above will differ for other Python web frameworks like Flask, Tornado, or Twisted, but the process_webhook function implementation below will remain the same irrespective of the framework.

Processing event

There may be different types of events we need to handle. So, before implementing the process_webhook function, let's create a Python module named webhook_events.py, which will contain a single function for each type of event, holding the logic for that particular event. In other words, we are going to map each event name to the function that handles the logic for that particular type of webhook event.

def event_one(event):
    # do something for event 'event.one'
    pass


def event_two(event):
    # do something for event 'event.two'
    pass

There are many ways to implement the process_webhook function and to map a webhook event to its function. We are going to discuss different implementations of process_webhook based on extensibility. The most basic version is below.

import webhook_events

def process_webhook(event):
    event_name = event['name']

    if event_name == 'event.one':
        webhook_events.event_one(event)

    elif event_name == 'event.two':
        webhook_events.event_two(event)

    # and so on

A Better way

Now suppose there are tens of webhooks to be served. We certainly don't want to write repetitive code, so below is a better way of implementing process_webhook. Here we just replace the dots in the event name with underscores, which gives us the name of the function written in webhook_events.py for that event. If the function is not found, the event is not registered (not being served). This way, no matter how many webhooks are to be served, you just write the function that handles each one in webhook_events.py.

import webhook_events

def process_webhook(event):
    event_name = event['name']

    function_name = event_name.replace('.', '_')
    function = getattr(webhook_events, function_name, None)

    if function:
        function(event)
    else:
        print('Event %s is not registered.' % event_name)

Decorators

A more robust and Pythonic way of implementing process_webhook is by using decorators. Let's define a decorator in webhook_events.py which maps the event name to its function. Here EVENT_MAP is a dictionary inside the settings module, containing the event name as key and the event function as its value.

from django.conf import settings

def register(event_name):

    def wrapper(event_function):

        # Initialize settings.EVENT_MAP if not already present
        event_map = getattr(settings, 'EVENT_MAP', None)
        if not event_map:
            settings.EVENT_MAP = dict()

        # Map the event name to its function
        settings.EVENT_MAP[event_name] = event_function

        return event_function

    return wrapper


@register('event.one')
def event_one(event):
    # do something for event 'event.one'
    pass


@register('event.two')
def event_two(event):
    # do something for event 'event.two'
    pass

In this case, process_webhook will look like this:

def process_webhook(event):
    event_name = event['name']
    function = settings.EVENT_MAP.get(event_name, None)

    if function:
        function(event)
    else:
        print('Event %s is not registered.' % event_name)

This is the way I prefer to implement webhook handlers in Python. How would you prefer to do it? Please feel free to comment below.

How to integrate Celery into Django project

What is Celery?

Celery is a distributed task queue that allows us to execute jobs in the background. This article explains how to set up Celery with Django to perform a background task.

Advantages:

  • Large or small, Celery makes scheduling periodic tasks easy.
  • You never want end users to have to wait unnecessarily for pages to load or actions to complete. If a long process is part of your application’s workflow, you can use Celery to execute that process in the background, as resources become available, so that your application can continue to respond to client requests.

Celery uses brokers to pass messages between a Django Project and the Celery workers. We will use Redis as the message broker.

Installation

Before diving into Celery, complete the setup steps below.

Create a new virtualenv ‘venv’ using the following command:

$ virtualenv venv

To activate the environment, use the command:

$ source ./venv/bin/activate

Install Django and create a Django project ‘myproject’. Make sure to activate the virtualenv, create a requirements.txt file, and run the migrations. Then fire up the server and navigate to http://localhost:8000/ in your browser. You should see the familiar “Congratulations on your first Django-powered page” text. When done, kill the server.
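For reference, those steps amount to something like this (exact commands and output vary by Django version):

$ pip install django
$ django-admin startproject myproject
$ cd myproject
$ pip freeze > requirements.txt
$ python manage.py migrate
$ python manage.py runserver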

Let’s install Celery:

$ pip install celery

$ pip freeze > requirements.txt

Now we will integrate Celery into our Django project with the following steps:

Step 1:

Inside the myproject directory i.e beside your settings.py create a new file called celery.py and add the following code in that:

from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
app = Celery('myproject')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

Let’s break down what happens in this module. First, we import absolute_import from __future__, so that our celery.py module will not clash with the library:

from __future__ import absolute_import

Then we set the default DJANGO_SETTINGS_MODULE for the celery command-line program:

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

Specifying the settings here means the celery command line program will know where your Django project is. This statement must always appear before the app instance is created, which is what we do next:

app = Celery('myproject')

This is your instance of the library; you can have many instances, but there's probably no reason for that when using Django.

We also add the Django settings module as a configuration source for Celery. This means that you don’t have to use multiple configuration files, and instead configure Celery directly from the Django settings.

You can pass the object directly here, but using a string is better since then the worker doesn’t have to serialize the object when using Windows or execv:

app.config_from_object('django.conf:settings')

Next, a common practice for reusable apps is to define all tasks in a separate tasks.py module, and Celery does have a way to autodiscover these modules:

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

Step 2:

To ensure that the Celery app is loaded when Django starts, add the following code into the __init__.py file that sits next to your settings.py file:

from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.

from .celery import app as celery_app

Project layout should look like:

├── manage.py
├── myproject
│   ├── __init__.py
│   ├── celery.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── requirements.txt

Step 3:

As mentioned earlier, Celery uses “brokers” to pass messages between a Django project and the Celery workers; in this article we use Redis as the message broker.

First, install Redis from the official download page and then turn to your terminal, in a new terminal window, fire up the server:

$ redis-server

You can test that Redis is working properly by typing this into your terminal:

$ redis-cli ping

Redis should reply with PONG – try it!

Once Redis is up, add the following code to your settings.py file:

# CELERY STUFF
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'

You also need to add Redis as a dependency in the Django Project:

$ pip install redis==2.10.3
$ pip freeze > requirements.txt

Test that the Celery worker is ready to receive tasks:

$ celery -A myproject worker -l info

Kill the process with CTRL-C. Now, test that the Celery task scheduler is ready for action:

$ celery -A myproject beat -l info
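With beat running, periodic tasks can be declared in settings.py. A hypothetical schedule, using the old-style setting names that match config_from_object without a namespace (the task path assumes a tasks.py in an app called myapp):

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'myapp.tasks.add',
        'schedule': timedelta(seconds=30),
        'args': (16, 16),
    },
}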

That’s it! You can now use Celery with Django. For more information on setting up Celery with Django, please check out the official Celery documentation.

How to implement PayPal payment gateway

The PayPal REST APIs are supported in two environments. Use the Sandbox environment for testing purposes, then move to the live environment for production processing.

The following endpoint addresses correspond to the two environments:

Sandbox (for testing): https://api.sandbox.paypal.com
Live (production): https://api.paypal.com

A complete REST operation is formed by combining an HTTP method with the full URI to the resource you’re addressing. For example, here is the operation to create a new payment:

POST https://api.paypal.com/v1/payments/payment

The PayPal REST SDK can be obtained through pip:

pip install paypalrestsdk

OAuth Request / Response

import paypalrestsdk
api = paypalrestsdk.set_config(
       mode="sandbox", # sandbox or live
       client_id="CLIENT_ID",
       client_secret="CLIENT_SECRET")
api.get_access_token()

The client ID and client secret can be obtained from the application created in your PayPal account.

For each of the following API calls, you'll need to set request headers, including the access token.

Creating a WebProfile:

from paypalrestsdk import WebProfile

web_profile = WebProfile({
        "name": "Web_Profile_Name",
        "presentation": {
            "brand_name": "BusinessName",
            "logo_image": "https://example.com/logo.png",  # URL to logo image
            "locale_code": "US"
            },
        "input_fields": {
            "allow_note": 1,
            "no_shipping": 1,
            "address_override": 1
            },
        "flow_config": {
            "landing_page_type": "Login"
            }
    })
web_profile.create()  # Returns True or False

name:

The name of the web experience profile which should be unique among the profiles for a given merchant.

presentation:

It contains the parameters for style and presentation.

input_fields:

Parameters for input fields customization:

  1. allow_note : It enables the buyer to enter a note to the merchant on the paypal page during checkout.

  2. no_shipping : Determines whether or not PayPal displays shipping address fields on the experience pages.

    • 0 – PayPal displays the shipping address on the PayPal pages.
    • 1 – PayPal does not display shipping address fields whatsoever.
    • 2 – If you do not pass the shipping address, PayPal obtains it from the buyer’s account profile.
  3. address_override : Determines if the PayPal pages should display the shipping address supplied in this call, rather than the shipping address on file with PayPal for this buyer.

    • 0 – PayPal pages should display the address on file
    • 1 – PayPal pages should display the addresses supplied in this call instead of the address from the buyer’s PayPal account.
flow_config:

Parameters for flow configuration

  1. landing_page_type : Type of PayPal page to be displayed when a user lands on the PayPal site for checkout.

    • Billing – The Non-PayPal account landing page is used
    • Login – The paypal account login page is used.

Creating a Payment:

from paypalrestsdk import Payment

payment = Payment({
    "intent": "sale",
    "experience_profile_id": web_profile.id,
    "payer": {
        "payment_method": "paypal",
        "status": "VERIFIED"},
    "redirect_urls": {
        "return_url": RETURN_URL,
        "cancel_url": CANCEL_URL},
    "transactions": [{
        "amount": {
            "total": amount,
            "currency": "USD"},
        "description": "description"}]})
payment.create()  # Returns True or False

 

intent:

Payment intent. Allowed values are:

  • “sale” – For immediate payment
  • “authorize” – To authorize a payment for capture later
  • “order” – To create an order
experience_profile_id:

Id that will be obtained from the response of web profile request

payer:

Source of the funds for this payment represented by a PayPal account or a credit card.

  • payment_method : Payment method used. Must be either credit_card or paypal.
  • funding_instruments : A list of funding instruments for the current payment
  • payer_info : Information related to the payer
  • status : Status of the payer’s PayPal account. VERIFIED or UNVERIFIED
transactions:

Transactional details including the amount and item details.

redirect_urls:

Set of redirect URLs you provide only for PayPal-based payments.

  • return_url : The payer is redirected to this URL after approving the payment.
  • cancel_url : The payer is redirected to this URL after canceling the payment.
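Between creating the payment and executing it, the payer must approve it. After a successful create, redirect the payer to the approval URL found in the payment's links; a minimal sketch:

if payment.create():
    # Find the link the payer must visit to approve the payment
    for link in payment.links:
        if link.rel == "approval_url":
            approval_url = str(link.href)
            # Redirect the payer's browser to approval_url
            break
else:
    print(payment.error)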

Execute an approved PayPal payment:

Use this call to complete a PayPal payment that has been approved by the payer.

payment = Payment.find("PAY-23456676765ASCASFE45")
payment.execute({ "payer_id": "7D4FDSFWEF5" })

The payment_id and payer_id are passed in the return_url. Once the payment is executed, the updated payment object is returned in the response.
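For example, a hypothetical Django view for the return_url could execute the approved payment using the paymentId and PayerID query parameters PayPal appends on redirect:

from django.http import HttpResponse
from paypalrestsdk import Payment

def paypal_return(request):
    payment = Payment.find(request.GET['paymentId'])
    if payment.execute({"payer_id": request.GET['PayerID']}):
        return HttpResponse('Payment executed successfully.')
    return HttpResponse(str(payment.error), status=400)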

In the response, the state of the payment is one of: created, approved, failed, canceled, expired, or pending.

The transaction details in the response contain the state of the sale, which is one of: pending, completed, refunded, or partially_refunded.

If the payment state is approved and the sale state is completed, the payment is successfully executed.

How to implement social login for a Django app

In this article we will see how to log in to your Django app using social logins like Facebook and Google.

Start a simple Django project

$ django-admin.py startproject thirdauth
$ tree thirdauth/
thirdauth/
├── manage.py
└── thirdauth
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py

 

Running ./manage.py syncdb and then ./manage.py runserver and navigating to localhost:8000 will show the familiar “It worked!” Django page. Let’s put some custom application code in place, so that we can tell whether the current user is authenticated or anonymous.

Show current user’s authentication status

Now, the very small customizations we’ll add are:

  • Add ‘thirdauth’ to INSTALLED_APPS
  • Create the template for the home page
  • Add a view for the home page
  • Add a URL pointing to the home page view

Relevant portion of settings.py:

INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
...
'thirdauth',
)

 

Template: thirdauth/base.html:

<!DOCTYPE html>
<html lang="en">
 <head>
   <meta charset="utf-8">
   <meta http-equiv="X-UA-Compatible" content="IE=edge">
   <meta name="viewport" content="width=device-width, initial-scale=1">
   <title>{% block title %}Third-party Authentication Tutorial{% endblock %}</title>

   <!-- Bootstrap -->
   <link href="/static/css/bootstrap.min.css" rel="stylesheet">
   <link href="/static/css/bootstrap-theme.min.css" rel="stylesheet">
   <link href="/static/css/fbposter.css" rel="stylesheet">

   <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
   <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
   <!--[if lt IE 9]>
     <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
     <script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
   <![endif]-->
 </head>
 <body>
   {% block main %}{% endblock %}
   <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
   <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
   <!-- Include all compiled plugins (below), or include individual files as needed -->
   <script src="/static/js/bootstrap.min.js"></script>
 </body>
</html>

Template: thirdauth/home.html:

{% extends 'thirdauth/base.html' %}

    {% block main %}
    <div>
    <h1>Third-party authentication demo</h1>

    <p>
    {% if user and not user.is_anonymous %}
     Hello {{ user.get_full_name|default:user.username }}!
    {% else %}
     I don’t think we’ve met before.
    {% endif %}
    </p>
    </div>
{% endblock %}

File views.py:

from django.shortcuts import render_to_response
from django.template.context import RequestContext

def home(request):
   context = RequestContext(request,
                           {'user': request.user})
   return render_to_response('thirdauth/home.html',
                             context_instance=context)

 

File urls.py:

from django.conf.urls import patterns, include, url

from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
   url(r'^$', 'thirdauth.views.home', name='home'),
   url(r'^admin/', include(admin.site.urls)),
)

 

Install Python Social Auth

 pip install python-social-auth

Second, let’s make some modifications to our settings.py to include python-social-auth in our project:

INSTALLED_APPS = (
...
'social.apps.django_app.default',
...
)

TEMPLATE_CONTEXT_PROCESSORS = (
   'django.contrib.auth.context_processors.auth',
   'django.core.context_processors.debug',
   'django.core.context_processors.i18n',
   'django.core.context_processors.media',
   'django.core.context_processors.static',
   'django.core.context_processors.tz',
   'django.contrib.messages.context_processors.messages',
   'social.apps.django_app.context_processors.backends',
   'social.apps.django_app.context_processors.login_redirect',
)

AUTHENTICATION_BACKENDS = (
   'social.backends.facebook.FacebookOAuth2',
   'social.backends.google.GoogleOAuth2',
   'social.backends.twitter.TwitterOAuth',
   'django.contrib.auth.backends.ModelBackend',
)

 

Let’s update the urls module to include the new group of URLs:

urlpatterns = patterns('',
...
url('', include('social.apps.django_app.urls', namespace='social')),
...
)

 

And finally, let’s update the database models:

./manage.py syncdb

Add links for logging in and logging out.

Since we’ll be logging in and out multiple times, let’s include django.contrib.auth URLs into our URLs configuration:

urlpatterns = patterns('',
   ...
   url('', include('django.contrib.auth.urls', namespace='auth')),
   ...
)

 

Let’s modify our Home page template like this:

{% extends 'thirdauth/base.html' %}

{% block main %}
 <div>
 <h1>Third-party authentication demo</h1>

 <p>
   <ul>
   {% if user and not user.is_anonymous %}
     <li>
       <a>Hello {{ user.get_full_name|default:user.username }}!</a>
     </li>
     <li>
       <a href="{% url 'auth:logout' %}?next={{ request.path }}">Logout</a>
     </li>
   {% else %}
     <li>
       <a href="{% url 'social:begin' 'facebook' %}?next={{ request.path }}">Login with Facebook</a>
     </li>
     <li>
       <a href="{% url 'social:begin' 'google-oauth2' %}?next={{ request.path }}">Login with Google</a>
     </li>
     <li>
       <a href="{% url 'social:begin' 'twitter' %}?next={{ request.path }}">Login with Twitter</a>
     </li>
   {% endif %}
   </ul>
 </p>
 </div>
{% endblock %}

For the login and logout links in this template to work correctly, we need to modify a few things. First, let's take care of logout, since it's easier: just add ‘request’ to the context object that we pass into the template-rendering code.

Updated views.py:

from django.shortcuts import render_to_response
from django.template.context import RequestContext

def home(request):
   context = RequestContext(request,
                           {'request': request,
                            'user': request.user})
   return render_to_response('thirdauth/home.html',
                             context_instance=context)

 

For login to work, let’s first add a LOGIN_REDIRECT_URL parameter to settings (to prevent the default /account/profile from raising a 404):

LOGIN_REDIRECT_URL = '/'

Get Client IDs for the social sites.

For all the social networks we are using in this demo, the process of obtaining an OAuth2 client ID (also known as an application ID) is pretty similar. All of them will require that your application has a “real” URL – that is, not http://127.0.0.1 or http://localhost. You can add an entry in your /etc/hosts file that maps 127.0.0.1 to something like “test1.com”, so the URL of your application becomes http://test1.com:8000 – that is good enough for testing. You can change it in the social app settings when it goes into production.
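The /etc/hosts entry looks like this:

127.0.0.1    test1.com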

Facebook:

Go to https://developers.facebook.com/apps/?action=create and click the green “Create New App” button.

In the settings of the newly-created application, click “Add Platform”. From the options provided, choose Web, and fill in the URL of the site (http://test1.com:8000 in our example).

Copy the App ID and App Secret, and place them into settings.py file:

SOCIAL_AUTH_FACEBOOK_KEY = …
SOCIAL_AUTH_FACEBOOK_SECRET = …

This should be enough to get your app to log in with Facebook! Try logging in and out – you should get redirected between your app and the FB OAuth2 service, and a new record in the User social auths table will get created, along with a new User record pointing to it.

Google:

Go to https://console.developers.google.com/ and create a new application.

Under APIs and Auth > Credentials, create a new Client ID.

Make sure to specify the right callback URL: http://test1.com:8000/complete/google-oauth2/

Copy the values into settings file:

SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = …
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = …