All posts by Akhil Jalagam

How to run a Twisted script as a daemon without the twistd command

Warning: this article assumes you are familiar with Twisted.
We often have scenarios where different services must run on different ports
to create a multi-service architecture.
In such cases we cannot keep the services running from the command line;
we have to run them as daemons, i.e. background processes, much like tasks scheduled with cron on Linux.

– Create a simple Twisted echo server with a protocol and a factory, and save it as test.py (or any name you like).

#!/usr/bin/env python2.7

from twisted.internet import protocol, reactor

class echo(protocol.Protocol):
    def dataReceived(self, data):
        # echo the received data back in upper case
        self.transport.write(data.upper())

class echoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        print 'called'
        return echo()

reactor.listenTCP(8000, echoFactory())
reactor.run()

This is a normal echo server script, which can be run with:

./test.py # assuming the script is saved in test.py
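
To confirm the server echoes data back in upper case, you can connect to it from another terminal with netcat (assuming nc is installed; port 8000 is the one used in the script above):

# connect to the echo server started above; type a line and it should
# come back in upper case
nc 127.0.0.1 8000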

But a script run this way is not a daemon.
A daemon is a task that runs in the background and logs to some specific location rather than to stdout. When we run the script as shown above, it blocks the current terminal (if you are running it from a terminal), which is not how a daemon behaves.
To make this script run as a daemon, we need to use twistd inside the script,
so that running it creates a process in the background.

# Replace the reactor.listenTCP()/reactor.run() lines above with the code below.
from twisted.scripts import twistd
from twisted.application import service, internet
import sys

# Build twistd's command line: each option must be its own argv element.
sys.argv.append('-y')
sys.argv.append('dummy')
sys.argv.append('--pidfile=/tmp/echo.pid')
sys.argv.append('--logfile=/dev/null')

application = service.Application('echo_daemon')
tcp_service = internet.TCPServer(port=8000, factory=echoFactory(), interface='127.0.0.1')
tcp_service.setServiceParent(application)

class ApplicationRunner(twistd._SomeApplicationRunner):
    def createOrGetApplication(self):
        # hand twistd our Application object instead of loading the dummy .tac file
        return application

    def run(self):
        self.preApplication()
        self.application = self.createOrGetApplication()
        self.postApplication()


twistd._SomeApplicationRunner = ApplicationRunner
twistd.run()

$ python2.7 test.py

How do we check whether the service is running?
Use the following command on a Linux system (in a shell/terminal):

netstat -ntulp

This lists all open TCP and UDP ports on the system along with the processes listening on them; port 8000 should show up in the output.
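
A minimal sketch for testing and stopping the daemonized server, assuming the pidfile path /tmp/echo.pid used in the script above:

# confirm something is listening on port 8000
netstat -ntlp | grep :8000

# send a test line; it should come back in upper case
nc 127.0.0.1 8000

# stop the daemon using the pidfile written by twistd
kill "$(cat /tmp/echo.pid)"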

How to edit a file with sed

This tutorial shows how to replace a pattern with the sed command in the same file you are reading from.

What is sed?
Sed stands for stream editor, as per the Linux man page.
But sed is more than that. With a single line of sed, we can write much more powerful bash scripts. It can fetch a specific line or a range of lines, replace characters that match a pattern, and much more; basically, it manipulates a stream of characters with whatever logic your project needs.

Let’s get started.
We understand things better with a task at hand, so our task for this tutorial is to replace some characters with others.
Suppose we have a file with the content below:

  304  PULSEaudio -k
  306  PULSEaudio --cleanup-shm 
  310  PULSEaudio --check 
  311  PULSEaudio --start 
  312  PULSEaudio --kill 
  322  PULSEaudio -k
  323  PULSEaudio --check 
  324  killall PULSEaudio
  325  PULSEaudio --check 
  331  ls * | grep -e PULSE
  332  cd PULSE/
  340  PULSEaudio -k
  344  PULSEaudio -D
  345  PULSEaudio -d
  346  service PULSEaudio status
  348  ps -eo "user args" | grep PULSE
  350  ps -eo "user args" | grep PULSE
  351  PULSEaudio -k
  352  killall PULSEaudio 

We are supposed to replace the characters PULSE with pulse.
We could open this file in Vim and type a command like

 %s/PULSE/pulse/gc

If you are familiar with Vim, you’ll know what I am talking about.
But if you need this output inside a bash script for some reason, you have to do it with a single command.

Here comes our savior sed.

sed s/pattern/replace_char/ <file_name>

This command does the job, but it writes the result to stdout instead of modifying the file.

Common error: we usually try to redirect that output back to the file we are editing.
If we are editing the file named replace.txt, the command becomes
sed s/PULSE/pulse/ replace.txt > replace.txt

But this does not work, and it is not sed’s fault; the problem lies in the order in which the shell sets up the file descriptors.

This is a common mistake: we want to modify a file using a tool that reads from the file and writes the result to stdout, so we redirect stdout to the very file we want to modify. The problem is that redirections are set up by the shell before the command is actually executed.
So before sed even starts, standard output has already been redirected, and because we used >, the file gets truncated as a side effect. By the time sed starts reading the file, it is already empty.
(If you are not familiar with redirection, read https://wiki.bash-hackers.org/howto/redirection_tutorial.)
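
Before -i existed, the usual workaround was to write to a temporary file and then move it over the original, for example:

# write the result to a temporary file, then replace the original
sed 's/PULSE/pulse/' replace.txt > replace.txt.tmp && mv replace.txt.tmp replace.txt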

sed has a feature that handles this for us: the -i option edits the file in place (internally it writes to a temporary file and then replaces the original).

The final command will be

sed -i 's/PULSE/pulse/' replace.txt
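
GNU sed’s -i also accepts an optional backup suffix, so you can keep a copy of the original file while editing in place; for example:

# edit in place, keeping the original as replace.txt.bak
sed -i.bak 's/PULSE/pulse/' replace.txt

# confirm only the intended change was made
diff replace.txt.bak replace.txt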

How To Import and Export Databases in MySQL

MySQL is an open-source relational database management system. Its name is a combination of “My”, the name of co-founder Michael Widenius’s daughter, and “SQL”, the abbreviation for Structured Query Language.

A relational database organizes data into one or more data tables in which data types may be related to each other; these relations help structure the data. SQL is a language programmers use to create, modify and extract data from the relational database, as well as control user access to the database. In addition to relational databases and SQL, an RDBMS like MySQL works with an operating system to implement a relational database in a computer’s storage system, manages users, allows for network access and facilitates testing database integrity and creation of backups.

What is mysql?

mysql is a simple SQL shell (with GNU readline capabilities). It supports interactive and
non-interactive use. When used interactively, query results are presented in an ASCII-table format.
When used non-interactively (for example, as a filter), the result is presented in tab-separated
format. The output format can be changed using command options.

What is mysqldump?

The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump
a database or a collection of databases for backup or transfer to another SQL server (not necessarily
a MariaDB server). The dump typically contains SQL statements to create the table, populate it, or
both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML
format.

Export a MySQL Database

Use mysqldump to export your database:

mysqldump -u username -p database_name > database_name-dump.sql

You can compress the dump on the fly by piping it through gzip.

mysqldump -u username -p database_name | gzip > database_name-dump.sql.gz

* Using gzip will save a lot of disk space for huge databases.

Import a MySQL Database

Use mysql to import your database:

Create the database first.

mysql> CREATE DATABASE database_name;

Import the database now.

mysql -u username -p database_name < database_name-dump.sql

If the file is compressed with gzip, use zcat to decompress it on the fly:

zcat database_name-dump.sql.gz | mysql -u username -p database_name
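
For large compressed dumps it helps to see progress during the import; a sketch using pv (the pipe viewer, which has to be installed separately):

# pv shows a progress bar based on the size of the dump file
pv database_name-dump.sql.gz | zcat | mysql -u username -p database_name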

A handy script for admins who take backups daily:

bkpmysqlgz() {
    set -x
    DEST=${1:-/opt/backups}   # first argument: destination directory (default /opt/backups)
    shift                     # remaining arguments go to mysqldump; the last one is the db name
    sudo mkdir -p "$DEST"
    DATE=$(date +%F)
    if command -v pv > /dev/null 2>&1; then
        sudo mysqldump "$@" | pv | gzip > "$DEST/${@: -1}-$DATE.sql.gz"
    else
        sudo mysqldump "$@" | gzip > "$DEST/${@: -1}-$DATE.sql.gz"
    fi
    ls -lh "$DEST/${@: -1}-$DATE.sql.gz"
    set +x
}

Usage:

$ bkpmysqlgz /opt/backups -u root -psecret dbname
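
A matching restore helper could be written in the same spirit; the function below is only a sketch, and its name and argument layout are illustrative rather than part of the original script:

restoremysqlgz() {
    set -x
    DUMP=$1     # path to a .sql.gz file produced by bkpmysqlgz
    shift       # remaining arguments are passed straight to mysql
    zcat "$DUMP" | sudo mysql "$@"
    set +x
}

# usage
# $ restoremysqlgz /opt/backups/dbname-2021-01-01.sql.gz -u root -p dbname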

How to whitelist Google IP address ranges in firewall using iptables

As an administrator, when you need the IP address ranges used by the default domains for Google APIs and services, there are two published sources of information you can refer to.

The IP address ranges for those default domains fit within the difference between the two sources: subtract the customer-usable ranges from the complete list of Google ranges.

Once you have the IP address ranges, use the xargs command to feed them to iptables.

google-ips-whitelist.sh

echo "8.8.4.0/24
8.8.8.0/24
8.34.208.0/20
8.35.192.0/20
23.236.48.0/20
23.251.128.0/19
34.64.0.0/10
34.128.0.0/10
35.184.0.0/13
35.192.0.0/14
35.196.0.0/15
35.198.0.0/16
35.199.0.0/17
35.199.128.0/18
35.200.0.0/13
35.208.0.0/12
35.224.0.0/12
35.240.0.0/13
64.15.112.0/20
64.233.160.0/19
66.102.0.0/20
66.249.64.0/19
70.32.128.0/19
72.14.192.0/18
74.114.24.0/21
74.125.0.0/16
104.154.0.0/15
104.196.0.0/14
104.237.160.0/19
107.167.160.0/19
107.178.192.0/18
108.59.80.0/20
108.170.192.0/18
108.177.0.0/17
130.211.0.0/16
136.112.0.0/12
142.250.0.0/15
146.148.0.0/17
162.216.148.0/22
162.222.176.0/21
172.110.32.0/21
172.217.0.0/16
172.253.0.0/16
173.194.0.0/16
173.255.112.0/20
192.158.28.0/22
192.178.0.0/15
193.186.4.0/24
199.36.154.0/23
199.36.156.0/24
199.192.112.0/22
199.223.232.0/21
207.223.160.0/20
208.65.152.0/22
208.68.108.0/22
208.81.188.0/22
208.117.224.0/19
209.85.128.0/17
216.58.192.0/19
216.73.80.0/20
216.239.32.0/19" | xargs -I% iptables -I INPUT -p tcp -s % -j ACCEPT
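
After running the script, you can confirm that the rules were inserted (run as root, or with sudo), for example:

# count the ACCEPT rules now present in the INPUT chain
iptables -S INPUT | grep -c -- '-j ACCEPT'

# spot-check one of the whitelisted ranges
iptables -S INPUT | grep '66.249.64.0/19'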

How to set up an Apache proxy for a Django application

Apache HTTP Server can be configured in both a forward and reverse proxy (also known as gateway) mode.

forward proxy

An ordinary forward proxy is an intermediate server that sits between the client and the origin server. In order to get content from the origin server, the client sends a request to the proxy naming the origin server as the target. The proxy then requests the content from the origin server and returns it to the client. The client must be specially configured to use the forward proxy to access other sites.

A typical usage of a forward proxy is to provide Internet access to internal clients that are otherwise restricted by a firewall. The forward proxy can also use caching (as provided by mod_cache) to reduce network usage.

The forward proxy is activated using the ProxyRequests directive. Because forward proxies allow clients to access arbitrary sites through your server and to hide their true origin, it is essential that you secure your server so that only authorized clients can access the proxy before activating a forward proxy.

reverse proxy

reverse proxy (or gateway), by contrast, appears to the client just like an ordinary web server. No special configuration on the client is necessary. The client makes ordinary requests for content in the namespace of the reverse proxy. The reverse proxy then decides where to send those requests and returns the content as if it were itself the origin.

A typical usage of a reverse proxy is to provide Internet users access to a server that is behind a firewall. Reverse proxies can also be used to balance load among several back-end servers or to provide caching for a slower back-end server. In addition, reverse proxies can be used simply to bring several servers into the same URL space.

A reverse proxy is activated using the ProxyPass directive or the [P] flag to the RewriteRule directive. It is not necessary to turn ProxyRequests on in order to configure a reverse proxy.

django application

I am running my Gunicorn application on port 8090 using the following command:

/opt/venv/bin/python3.6 /opt/venv/bin/gunicorn --config /etc/controlpanel/gunicorn/controlpanel.py --pid /var/run/controlpanel.pid controlpanel.wsgi:application

The static files path is /opt/controlpanel/ui-ux/static/

apache config (/etc/apache2/sites-enabled/cp.conf)

  • Enable the mod_proxy module in Apache (see the commands after the config below)
<VirtualHost *:80>

ServerName devcontrol.lintel.com
ErrorLog /var/log/httpd/cp_error.log
CustomLog /var/log/httpd/cp_access.log combined

ProxyPreserveHost On
ProxyPass /static !
ProxyPass / http://127.0.0.1:8090/
ProxyPassReverse / http://127.0.0.1:8090/
ProxyTimeout 300

Alias /static/ /opt/controlpanel/ui-ux/static/
<Directory "/opt/controlpanel/ui-ux/static/">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>

</VirtualHost>
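
On a Debian/Ubuntu-style Apache layout (which the /etc/apache2/sites-enabled path above suggests), enabling the proxy modules and reloading would look roughly like this; mod_proxy_http is assumed to be needed alongside mod_proxy for ProxyPass to an HTTP backend:

# enable the proxy modules and check the configuration
sudo a2enmod proxy proxy_http
sudo apachectl configtest

# reload Apache to pick up the new virtual host
sudo systemctl reload apache2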

After deploying behind Apache, you can use Let’s Encrypt to install SSL certificates.

How to install & configure the Nvidia driver on Arch Linux

Nvidia is a graphics processing chip manufacturer that currently generates most of its revenue from the sales of graphics processing units (GPUs), which are used for competitive gaming, professional visualization, and cryptocurrency mining.

1. Install nvidia driver using pacman command

sudo pacman -S nvidia

Note: add a pacman hook so the initramfs is rebuilt when the Nvidia driver or the kernel is upgraded

/etc/pacman.d/hooks/nvidia.hook
[Trigger]
Operation=Install
Operation=Upgrade
Operation=Remove
Type=Package
Target=nvidia
Target=linux
# Change the linux part above and in the Exec line if a different kernel is used

[Action]
Description=Update Nvidia module in initcpio
Depends=mkinitcpio
When=PostTransaction
NeedsTargets
Exec=/bin/sh -c 'while read -r trg; do case $trg in linux) exit 0; esac; done; /usr/bin/mkinitcpio -P'

2. Blacklist nouveau driver

sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"

3. Add graphics card configuration in Xorg server

/etc/X11/xorg.conf.d/20-nvidia.conf

Section "OutputClass"
Identifier "intel"
MatchDriver "i915"
Driver "modesetting"
EndSection

Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
Option "AllowEmptyInitialConfiguration"
Option "PrimaryGPU" "yes"
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection

4. Load Nvidia modules on boot and rebuild the initramfs

/etc/mkinitcpio.conf

MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)

sudo mkinitcpio -P linux

5. Finally, update ~/.xinitrc

Use this command to list the providers, and update the xinitrc file accordingly:

xrandr --listproviders

~/.xinitrc

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec i3 &>> "/var/log/i3.log"

Now test the GPU processes, using nvidia-smi or nvtop.
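
A few quick checks, assuming nvtop is available in the Arch repositories:

# confirm the kernel modules are loaded
lsmod | grep nvidia

# show the driver version, GPU utilisation and processes using the GPU
nvidia-smi

# install and run nvtop for a live, top-like view of GPU processes
sudo pacman -S nvtop
nvtop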

How to manage users with Ansible

If you have multiple servers to manage, manually adding a new user, changing a password, or locking an old account on each of them is a real pain and a huge waste of time.

Using the Ansible user module, you can manage users and SSH keys in a single playbook run.

Create users

The user’s home directory is also created by default; you can choose its location by setting the home parameter.

The following playbook is for Red Hat/CentOS.

On Debian-based systems you need to change the user group (for example, sudo instead of wheel).

authorize_users.yml

---
- hosts: tag_group_{{ env }}_webserver
  ignore_unreachable: true
  strategy: free
  gather_facts: False

  vars_files:
  - group_vars/all.yml

  vars:
    users:
      - tony
      - thor
      - hulk

  tasks:
  - name: Make sure we have a 'wheel' group
    group:
      name: wheel
      state: present
  - name: Allow 'wheel' group to have passwordless sudo
    lineinfile:
      dest: /etc/sudoers
      state: present
      regexp: '^%wheel'
      line: '%wheel ALL=(ALL) NOPASSWD: ALL'
      validate: visudo -cf %s
  - name: "Create user accounts and add users to groups"
    user:
      name: "{{ item }}"
      groups: "wheel"
      shell: /bin/bash
    loop: "{{ users }}"
  - name: Add sudoers users to wheel group
    user:
      name: "{{ item }}"
      groups: wheel
      append: yes
    loop: "{{ users }}"
  - name: "Add authorized keys"
    authorized_key:
      user: "{{ item }}"
      key: "{{ lookup('file', '~/.ssh/'+ item + '.pub') }}"
      state: present
    with_items: "{{ users }}"

Running:

$ ENV=prod; ansible-playbook -i inventories/$ENV --extra-vars "env=$ENV" authorize_users.yml
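
You can then verify the accounts with an ad-hoc command against the same host pattern; a quick sketch (tony is one of the users from the playbook above):

# check that the user exists on every webserver in the environment
ENV=prod; ansible -i inventories/$ENV "tag_group_${ENV}_webserver" -m command -a "id tony"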

Remove Users

Removing an existing user is easy: just set the state parameter to absent. Ansible runs the userdel command behind the scenes.

deauthorize_users.yml

---
- hosts: tag_group_{{ env }}_webserver
  ignore_unreachable: true
  strategy: free
  gather_facts: False

  vars_files:
  - group_vars/all.yml

  vars:
    users:
      - frodo
      - sam
      - gollum

  tasks:
  - name: "Remove from authorized keys"
    authorized_key:
      user: "{{ item }}"
      key: "{{ lookup('file', '~/.ssh/lintel/'+ item + '.pub') }}"
      state: absent
    with_items: "{{ users }}"

  - name: "Remove from authorized keys from root"
    authorized_key:
      user: root
      key: "{{ lookup('file', '~/.ssh/lintel/'+ item + '.pub') }}"
      state: absent
    with_items: "{{ users }}"

  - name: Remove users
    user:
      name: "{{ item }}"
      remove: yes
      state: absent
    loop: "{{ users }}"


Running:

$ ENV=prod; ansible-playbook -i inventories/$ENV --extra-vars "env=$ENV" deauthorize_users.yml

How to manage AirPods on Linux

This article guides you through managing AirPods and AirPods Pro on Linux.

It uses PulseAudio and the oFono telephony service for the A2DP and HSP/HFP profiles.

Let’s start…

1. Dependencies

sudo add-apt-repository ppa:smoser/bluetooth  
sudo apt-get install ofono-phonesim ofono  
git clone https://github.com/rilmodem/ofono.git /opt/ofono

2. Download the script

wget https://raw.githubusercontent.com/AkhilJalagam/pulseaudio-airpods/master/pulseaudio-airpods

3. Tweak the script for the first run

Replace the MAC address and card name in the script:

AIRPODS_MAC='4C:6B:E8:80:46:84' # it should be somewhere in blueman-manager  
AIRPODS_NAME='bluez_card.4C_6B_E8_80_46_84' # you can find this using 'pactl list cards' command
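
If you are not sure which values to use, the MAC address and PulseAudio card name can be looked up as follows (assuming the AirPods are already paired):

# list known Bluetooth devices with their MAC addresses
bluetoothctl devices | grep -i airpods

# list PulseAudio cards; the bluez_card.* name uses the MAC with underscores
pactl list cards short | grep bluez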

4. Usage

pulseaudio-airpods connect/toggle_profile/disconnect

Note

You should first pair your AirPods using blueman-manager and trust them before using this script.

References

https://github.com/AkhilJalagam/pulseaudio-airpods

https://github.com/AkhilJalagam/i3blocks-airpods

Speed up SSH with multiplexing

SSH multiplexing is the ability to carry multiple SSH sessions over a single TCP connection.

OpenSSH can reuse an existing TCP connection for multiple concurrent SSH sessions, which reduces the overhead of creating new TCP connections.

The advantage of SSH multiplexing is that it speeds up certain operations that rely on or occur over SSH. For example, say you are using SSH to regularly execute a command on a remote host. Without multiplexing, every time that command is executed your SSH client must establish a new TCP connection and a new SSH session with the remote host. With multiplexing, you can configure SSH to establish a single TCP connection that is kept alive for a specific period of time, and subsequent SSH sessions are established over that connection.

You can see the difference below.

Without multiplexing, we see the normal connection time:

$ time ssh lintel-blog

real    0m0.658s
user    0m0.016s
sys     0m0.008s

Then we do the same thing again, but with a multiplexed connection to see a faster result:

$ time ssh lintel-blog

real    0m0.029s
user    0m0.004s
sys     0m0.004s

Configure Multiplexing

The OpenSSH client has supported multiplexing its outgoing connections since version 3.9, using the ControlMaster, ControlPath and ControlPersist configuration directives defined in ssh_config. The client configuration file usually defaults to ~/.ssh/config.

ControlMaster determines whether ssh will listen for control connections and what to do about them. ControlPath sets the location of the control socket used by the multiplexed sessions. These can be set either globally or per-host in ssh_config, or specified at run time. Control sockets are removed automatically when the master connection ends. ControlPersist can be used in conjunction with ControlMaster: if it is set to ‘yes’, the master connection is left open in the background to accept new connections until it is either killed explicitly, closed with -O exit, or a pre-defined timeout is reached. If ControlPersist is set to a time, the master connection is left open for the designated time or until the last multiplexed session is closed, whichever is longer.

Here is a sample excerpt from ssh_config applicable for starting a multiplexed session to server1.example.org via the shortcut server1.

Host server1
  HostName server1.example.org
  ControlPath ~/.ssh/controlmasters/%r@%h:%p
  ControlMaster auto
  ControlPersist 10m
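
The directory used by ControlPath must exist, and the master connection can be inspected or closed explicitly with -O; for example:

# create the directory referenced by ControlPath
mkdir -p ~/.ssh/controlmasters

# check whether a master connection for server1 is active
ssh -O check server1

# close the master connection explicitly
ssh -O exit server1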


How to install jitsi meet on CentOS 7

Jitsi is a set of Open Source projects that allows you to easily build and deploy secure videoconferencing solutions.

Jitsi Meet is a fully encrypted, 100% Open Source video conferencing solution that you can use all day, every day, for free — with no account needed.

1. Architecture

A Jitsi Meet installation can be broken down into the following components:

  • A web interface
  • An XMPP server
  • A conference focus component
  • A video router (could be more than one)
  • A SIP gateway for audio calls
  • A Broadcasting Infrastructure for recording or streaming a conference.

A typical deployment runs all of these components in containers on a single Docker host. The docker-jitsi-meet project separates each of the components above into interlinked containers, and several container images are provided for this purpose.

2. Ports

The following external ports must be opened on a firewall:

  • 80/tcp for Web UI HTTP (really just to redirect, after uncommenting ENABLE_HTTP_REDIRECT=1 in .env)
  • 443/tcp for Web UI HTTPS
  • 4443/tcp for RTP media over TCP
  • 10000/udp for RTP media over UDP

Also 20000-20050/udp for jigasi, in case you choose to deploy that to facilitate SIP access.

E.g. on a CentOS server this would be done like this (without SIP access):

    $ sudo firewall-cmd --permanent --add-port=80/tcp
    $ sudo firewall-cmd --permanent --add-port=443/tcp
    $ sudo firewall-cmd --permanent --add-port=4443/tcp
    $ sudo firewall-cmd --permanent --add-port=10000/udp
    $ sudo firewall-cmd --reload


3. Configuration

The configuration is performed via environment variables contained in a .env file. You can copy the provided env.example file as a reference.

a. Jibri Module Setup

Before running Jibri, you need to set up an ALSA loopback device on the host. This will not work on a non-Linux host.

For CentOS 7, the module is already compiled with the kernel, so just run:

# configure 5 capture/playback interfaces
echo "options snd-aloop enable=1,1,1,1,1 index=0,1,2,3,4" > /etc/modprobe.d/alsa-loopback.conf
# setup autoload the module
echo "snd_aloop" > /etc/modules-load.d/snd_aloop.conf
# load the module
modprobe snd-aloop
# check that the module is loaded
lsmod | grep snd_aloop

b. Installation

  • clone the repository:

git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet

  • Create a .env file by copying and adjusting env.example
    • cp env.example .env
  • Set strong passwords in the security section options of .env file by running the following bash script
    • ./gen-passwords.sh
  • Create required CONFIG directories
    • mkdir -p ~/.jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
  • Run docker-compose up -d
  • Access the web UI at https://domain.com (or a different port, in case you edited the compose file).


If you want to use jigasi too, first configure your env file with SIP credentials and then run Docker Compose as follows: docker-compose -f docker-compose.yml -f jigasi.yml up

If you want to enable document sharing via Etherpad, configure it and run Docker Compose as follows: docker-compose -f docker-compose.yml -f etherpad.yml up

If you want to use jibri too, first configure a host as described in JItsi BRoadcasting Infrastructure configuration section and then run Docker Compose as follows: docker-compose -f docker-compose.yml -f jibri.yml up -d or to use jigasi too: docker-compose -f docker-compose.yml -f jigasi.yml -f jibri.yml up -d

Running behind NAT or on a LAN environment
If running in a LAN environment (as well as on the public Internet, via NAT) is a requirement, the DOCKER_HOST_ADDRESS should be set. This way, the Videobridge will advertise the IP address of the host running Docker instead of the internal IP address that Docker assigned it, thus making ICE succeed. If your users are coming in over the Internet (and not over LAN), this will likely be your public IP address. If this is not set up correctly, calls will crash when more than two users join a meeting.

The public IP address is discovered via STUN. STUN servers can be specified with the JVB_STUN_SERVERS option.
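
A minimal sketch of the relevant .env entries for a host behind NAT; the address and STUN server below are only placeholders:

# .env excerpt (placeholder values)
# public IP address of the host running Docker
DOCKER_HOST_ADDRESS=203.0.113.10
# comma-separated list of STUN servers used by the videobridge
JVB_STUN_SERVERS=stun.l.google.com:19302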