How to run OpenVPN tunnel inside a network namespace
Linux network namespaces can be used to control which processes should be tunneled by OpenVPN.
First create an --up and --down script for OpenVPN. This script will create the VPN tunnel interface inside a network namespace called vpn, instead of the default namespace.
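A minimal sketch of such a script, assuming OpenVPN passes the device name as $1, the MTU as $2 and the local address as $4, and sets the script_type and ifconfig_netmask environment variables (default-route and DNS handling that a real setup needs is omitted):

Shell
#!/bin/sh
# Combined --up/--down script: moves the tun device into the
# "vpn" network namespace and configures it there.
case "$script_type" in
  up)
    ip netns add vpn
    ip link set dev "$1" up netns vpn mtu "$2"
    ip netns exec vpn ip addr add dev "$1" "$4/${ifconfig_netmask:-30}"
    ;;
  down)
    ip netns delete vpn
    ;;
esac

You would typically also start OpenVPN with --ifconfig-noexec and --route-noexec so that it does not try to configure the device in the default namespace itself.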
strace is a useful diagnostic, instructional, and debugging tool. System administrators, diagnosticians and trouble-shooters will find it invaluable for solving problems with programs for which the source is not readily available since they do not need to be recompiled in order to trace them. Students, hackers and the overly-curious will find that a great deal can be learned about a system and its system calls by tracing even ordinary programs. And programmers will find that since system calls and signals are events that happen at the user/kernel interface, a close examination of this boundary is very useful for bug isolation, sanity checking and attempting to capture race conditions.
Trace the Execution
You can use the strace command to trace the execution of any executable. The following example shows the output of strace for the Linux uname command.
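A minimal invocation (the output, a long list of system calls, is omitted here):

Shell
strace uname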
Counting number of syscalls
Run the ls command counting the number of times each system call was made and print totals showing the number and time spent in each call (useful for basic profiling or bottleneck isolation):
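Using the standard -c flag of strace for call counting and timing:

Shell
strace -c ls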
Save the Trace Execution to a File Using Option -o
The following example stores the strace output in the file output.txt.
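For example, tracing ls and writing the trace to output.txt:

Shell
strace -o output.txt ls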
Print Timestamp for Each Trace Output Line Using Option -t
To print the timestamp for each strace output line, use the option -t as shown below.
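For example, adding -t to an ls trace:

Shell
strace -t ls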
Tracing only network related system calls
Trace just the network-related system calls of the ping command:
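Using strace's network trace class (pinging localhost here is just an example):

Shell
strace -e trace=network ping -c 1 localhost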
Viewing files opened by a process/daemon using tracefile
tracefile is a wrapper around strace for viewing the files a command opens. Note that output from the command on stdout can mess up the output from strace, so the trace should be written somewhere else.
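A minimal sketch of such a wrapper (the open/openat filter and the quoted-path extraction are simplifications):

Shell
#!/bin/sh
# tracefile sketch: list the files opened by a command.
# The trace goes to a temp file so it cannot mix with the
# command's own stdout.
out=$(mktemp)
strace -f -e trace=open,openat -o "$out" "$@" >/dev/null 2>&1
grep -oE '"[^"]+"' "$out" | sort -u
rm -f "$out"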
It is a pity that so much tracing clutter is produced by systems employing shared libraries.
It is instructive to think about system call inputs and outputs as data-flow across the user/kernel boundary. Because user-space and kernel-space are separate and address-protected, it is sometimes possible to make deductive inferences about process behavior using inputs and outputs as propositions.
In some cases, a system call will differ from the documented behavior or have a different name. For example, on System V-derived systems the true time(2) system call does not take an argument and the stat function is called xstat and takes an extra leading argument. These discrepancies are normal but idiosyncratic characteristics of the system call interface and are accounted for by C library wrapper functions.
On some platforms a process that has a system call trace applied to it with the -p option will receive a SIGSTOP. This signal may interrupt a system call that is not restartable. This may have an unpredictable effect on the process if the process takes no action to restart the system call.
Linux namespaces are a relatively new kernel feature which is essential for the implementation of containers. A namespace wraps a global system resource in an abstraction which is bound only to processes within the namespace, providing resource isolation. In this article I discuss network namespaces and show a practical example.
What is a namespace?
A namespace is a way of scoping a particular set of identifiers. Using a namespace, you can use the same identifier multiple times in different namespaces. You can also restrict the set of identifiers visible to particular processes.
For example, Linux provides namespaces for networking and processes, among other things. If a process is running within a process namespace, it can only see and communicate with other processes in the same namespace. So, if a shell in a particular process namespace ran ps waux, it would only show the other processes in the same namespace.
Linux network namespaces
In a network namespace, the scoped ‘identifiers’ are network devices; so a given network device, such as eth0, exists in a particular namespace. Linux starts up with a default network namespace, so if your operating system does not do anything special, that is where all the network devices will be located. But it is also possible to create further non-default namespaces, and create new devices in those namespaces, or to move an existing device from one namespace to another.
Each network namespace also has its own routing table, and in fact this is the main reason for namespaces to exist. A routing table is keyed by destination IP address, so network namespaces are what you need if you want the same destination IP address to mean different things at different times – which is something that OpenStack Networking requires for its feature of providing overlapping IP addresses in different virtual networks.
Each network namespace also has its own set of iptables (for both IPv4 and IPv6). So, you can apply different security to flows with the same IP addressing in different namespaces, as well as different routing.
Any given Linux process runs in a particular network namespace. By default this is inherited from its parent process, but a process with the right capabilities can switch itself into a different namespace; in practice this is mostly done using the ip netns exec NETNS COMMAND… invocation, which starts COMMAND running in the namespace named NETNS. If such a process sends out a message to IP address A.B.C.D, the effect of the namespace is that A.B.C.D will be looked up in that namespace’s routing table, and that will determine the network device through which the message is transmitted.
Let’s play with ip namespaces
By convention a named network namespace is an object at /var/run/netns/NAME that can be opened. The file descriptor resulting from opening /var/run/netns/NAME refers to the specified network namespace.
create a namespace
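Using the ip tool (ns1 is the namespace name used in the rest of this example):

Shell
ip netns add ns1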
power up loopback device
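Bring up the loopback interface inside ns1:

Shell
ip netns exec ns1 ip link set dev lo up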
open up a namespace shell
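Start a shell that runs entirely inside the namespace:

Shell
ip netns exec ns1 bash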
Now we can use this shell like a normal user shell, except that it uses only the ns1 namespace.
In part 2, I will explain how to connect to the internet from the ns1 namespace and how to add custom routes.
Update to the latest version. Ansible 2.0 is slower than Ansible 1.9 because it included an important change to the execution engine to allow any user to choose the execution algorithm to be used. In the versions that followed, and mostly in 2.1, big optimizations have been made to increase execution speed, so be sure to be running the latest possible version.
Profiling Tasks
The best way I’ve found to time the execution of Ansible playbooks is by enabling the profile_tasks callback. This callback is included with Ansible and all you need to do to enable it is add callback_whitelist = profile_tasks to the [defaults] section of your ansible.cfg:
# ansible.cfg
[defaults]
callback_whitelist = profile_tasks
Enable pipelining
You can enable pipelining by simply adding pipelining = True to the [ssh_connection] section of your ansible.cfg, or by using the ANSIBLE_PIPELINING and ANSIBLE_SSH_PIPELINING environment variables.
# ansible.cfg
[ssh_connection]
pipelining = True
You’ll also need to make sure that requiretty is disabled in /etc/sudoers on the remote host, or become won’t work with pipelining enabled.
Enable Mitogen for Ansible
Enabling Mitogen for Ansible is as simple as downloading and extracting the plugin, then adding 2 lines to the [defaults] section of your ansible.cfg:
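The two lines, per the Mitogen documentation (the extraction path below is a placeholder for wherever you unpacked the plugin):

# ansible.cfg
[defaults]
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear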
The first thing to check is whether SSH multiplexing is enabled and used. This gives a tremendous speed boost because Ansible can reuse opened SSH sessions instead of negotiating a new one (actually more than one) for every task. Ansible has this setting turned on by default. It can be set in the configuration file as follows:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
But be careful when overriding ssh_args: if you don’t set ControlMaster and ControlPersist in the override, Ansible will “forget” to use them.
To check whether SSH multiplexing is used, start Ansible with -vvvv option:
ansible test -vvvv -m ping
UseDNS
UseDNS is an SSH server setting (in the /etc/ssh/sshd_config file) which forces the server to check the client’s PTR record upon connection. It may cause connection delays, especially with slow DNS servers on the server side. In modern Linux distributions this setting is turned off by default, which is correct.
PreferredAuthentications
It is an SSH client setting which informs the server about preferred authentication methods. By default, Ansible’s SSH arguments list GSSAPI methods before public key. So if GSSAPI authentication is enabled on the server (at the time of writing it is turned on in the RHEL EC2 AMI), it will be tried as the first option, forcing the client and server to make PTR-record lookups. But in most cases we want to use only public-key auth. We can force Ansible to do so by changing ansible.cfg:
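A minimal sketch that keeps the multiplexing options from above (see the ssh_args warning) while restricting authentication to public key:

# ansible.cfg
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o PreferredAuthentications=publickey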
At the start of playbook execution, Ansible collects facts about the remote system (this is the default behaviour for ansible-playbook but does not apply to ansible ad-hoc commands). It is similar to calling the “setup” module and thus requires another SSH communication step. If you don’t need any facts in your playbook (e.g. our test playbook), you can disable fact gathering:
gather_facts: no
Forks
Until this moment we discussed how to speed up playbook execution on a given remote host. But if you run a playbook against tens or hundreds of hosts, Ansible’s internal performance becomes a bottleneck. For example, there’s a preconfigured number of forks: the number of hosts that can be interacted with simultaneously. You can change this value in the ansible.cfg file:
[defaults]
forks = 20
The default value is 5, which is quite conservative. You can experiment with this setting depending on your local CPU and network bandwidth resources.
Another thing about forks is that if you have a lot of servers to work with and a low number of available forks, your master SSH sessions may expire between tasks. Ansible uses the linear strategy by default, which executes one task for every host and then proceeds to the next task. This way, if the time between task execution on the first server and on the last one is greater than ControlPersist, the master socket will have expired by the time Ansible starts execution of the following task on the first server, and a new SSH connection will be required.
Poll Interval
When a module is executed on a remote host, Ansible starts to poll for its result. The lower the interval between poll attempts, the higher the CPU load on the Ansible control host. But we want to have CPU available for a greater number of forks (see above). You can tweak the poll interval in ansible.cfg:
[defaults]
internal_poll_interval = 0.001
If you run “slow” jobs (like backups) on multiple hosts, you may want to increase the interval to 0.05 to use less CPU.
Hope this helps you speed up your setup. It seems there are no more items on the environment checklist, and further speed gains are only possible by optimizing your playbook code.
Asynchronous Actions and Polling
By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout.
To avoid blocking or timeout issues, you can use asynchronous mode to run all of your tasks at once and then poll until they are done.
The behaviour of asynchronous mode depends on the value of poll.
Avoid connection timeouts: poll > 0
When poll is a positive value, the playbook will still block on the task until it either completes, fails or times out.
In this case, however, async explicitly sets the timeout you wish to apply to this task rather than being limited by the connection method timeout.
To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status. The default poll value is 15 seconds if you do not specify a value for poll:
YAML
---
- hosts: all
  remote_user: root

  tasks:
  - name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
    command: /bin/sleep 15
    async: 45
    poll: 5
Concurrent tasks: poll = 0
When poll is 0, Ansible will start the task and immediately move on to the next one without waiting for a result.
From the point of view of sequencing this is asynchronous programming: tasks may now run concurrently.
The playbook run will end without checking back on async tasks.
The async tasks will run until they either complete, fail or timeout according to their async value.
If you need a synchronization point with a task, register it to obtain its job ID and use the async_status module to observe it.
You may run a task asynchronously by specifying a poll value of 0:
YAML
---
- hosts: all
  remote_user: root

  tasks:
  - name: simulate long running op, allow to run for 45 sec, fire and forget
    command: /bin/sleep 15
    async: 45
    poll: 0
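If you need the synchronization point mentioned above, here is a sketch using register together with the async_status module (task and variable names are illustrative):

YAML
- name: fire and forget
  command: /bin/sleep 15
  async: 45
  poll: 0
  register: sleeper

- name: wait for the async task to finish
  async_status:
    jid: "{{ sleeper.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 2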
Enable fact_caching
By enabling this value we’re telling Ansible to keep the facts it gathers in a local file. You can also set this to a redis cache. See the documentation for details.
Fact_caching is what happens when Ansible says “Gathering facts” about your target hosts. If we don’t change our targets’ hardware (or virtual hardware) very often, this can be very helpful. Enable it by adding this to your ansible.cfg file:
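A sketch using the jsonfile backend (the cache path and timeout are placeholders to adapt):

# ansible.cfg
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /path/to/facts_cache
fact_caching_timeout = 86400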
Enable the fact caching mechanism
If you still need some of the fact groups, but at the same time the gathering process is still slow for you, you could try using fact caching.
Caching enables Ansible to cache the facts for a given host in some kind of backend.
Currently the caching plugin supports the following cache backends:
memcache
redis
yaml
json
mongodb
memory
More information on the caching plugin can be found here:
Django is a Python web framework used for developing web applications. It is fast, secure and scalable. Let us see how to configure a Django app using Gunicorn.
Before proceeding to the actual configuration, let us go over a short intro to Gunicorn.
Gunicorn
Gunicorn (Green Unicorn) is a WSGI (Web Server Gateway Interface) server implementation commonly used to run Python web applications. It implements the PEP 3333 server standard, so it can run any web application that implements the application interface, such as those written in Django, Flask or Bottle.
Installation
Shell
pip3 install gunicorn
Gunicorn coupled with Nginx or any web server works as a bridge between the web server and the web framework. The web server (Nginx or Apache) can be used to serve static files, with Gunicorn handling requests to and responses from the Django application. I will try to write another blog post in detail on how to set up a Django application with Nginx and Gunicorn.
Prerequisites
Please make sure you have the packages below installed on your system; a basic understanding of Python, Django and Gunicorn is recommended.
Python > 3.5
Gunicorn > 15.0
Django > 1.11
Configure Django App Using Gunicorn
There are different ways to configure Gunicorn; I am going to demonstrate running the Django app using a Gunicorn configuration file.
First, let us start by creating the Django project. You can do so as follows.
Start a Django project
Shell
django-admin startproject webapp
After starting the Django project, the directory structure looks like this.
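For a fresh project it should look roughly like this (newer Django versions also add an asgi.py):

webapp/
├── manage.py
└── webapp/
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py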
The simplest way to run your Django app using Gunicorn is with the following command; you must run it from the folder containing manage.py.
Shell
gunicorn webapp.wsgi
This will run your Django project on port 8000 locally.
Configuration
Now let’s see how to configure the Django app using a Gunicorn configuration file. A simple Gunicorn configuration with the sync worker class will look like this.
Simple Gunicorn Configuration File
Python
import sys
import multiprocessing

# Make the project importable by Gunicorn.
BASE_DIR = "/path/to/base/dir/"
sys.path.append(BASE_DIR)

bind = '127.0.0.1:8000'
backlog = 2048

workers = multiprocessing.cpu_count() * 2 + 1
worker_class = 'sync'
worker_connections = 1000
timeout = 300
keepalive = 2

# spew - Install a trace function that spews every line of Python
# that is executed when running the server. This is the
# nuclear option.
spew = False

# Error logfile (referred to in the note below).
errorlog = '/path/to/gunicorn-error.log'

def worker_int(worker):
    # Log a traceback for every thread when a worker receives INT or QUIT.
    worker.log.info("worker received INT or QUIT signal")
    import threading, traceback
    id2name = {th.ident: th.name for th in threading.enumerate()}
    code = []
    for thread_id, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(thread_id, ""), thread_id))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    worker.log.debug("\n".join(code))

def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
Let us see a few important details in the above configuration file.
Append the base directory to your Python path (sys.path) so the project can be imported.
You can bind the application to a socket using bind.
backlog is the maximum number of pending connections.
workers is the number of workers to handle requests. This is based on your machine’s CPU count and can be varied based on your application workload.
worker_class: there are different types of worker classes; you can refer here for the available types. sync is the default and should handle normal loads.
You can read more about the available Gunicorn settings here.
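To run the app under systemd you need a unit file, e.g. /etc/systemd/system/webapp.service. A minimal sketch, assuming the project lives at /path/to/webapp and the configuration above is saved as gunicorn.conf.py (all paths are placeholders):

[Unit]
Description=gunicorn daemon for the webapp Django project
After=network.target

[Service]
WorkingDirectory=/path/to/webapp
ExecStart=/usr/local/bin/gunicorn -c /path/to/webapp/gunicorn.conf.py webapp.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target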
After adding the unit file to /etc/systemd/system/, execute the following command so systemd picks up the new changes.
systemd reload
systemctl daemon-reload
Note: make sure to install the required packages; Gunicorn fails to start if there are any missing packages. You can find more info in the error logfile mentioned in the configuration file.
Start, Stop and Status of Application using systemctl
Now you can simply execute the following commands for your application.
To start your application
Shell
systemctl start webapp
To stop your application.
Shell
systemctl stop webapp
To check the status of your application.
Shell
systemctl status webapp