All posts by Akhil Jalagam


How To Import and Export Databases in MySQL

MySQL is an open-source relational database management system. Its name is a combination of “My”, the name of co-founder Michael Widenius’s daughter, and “SQL”, the abbreviation for Structured Query Language.

A relational database organizes data into one or more data tables in which data types may be related to each other; these relations help structure the data. SQL is a language programmers use to create, modify and extract data from the relational database, as well as control user access to the database. In addition to relational databases and SQL, an RDBMS like MySQL works with an operating system to implement a relational database in a computer’s storage system, manages users, allows for network access and facilitates testing database integrity and creation of backups.

What is mysql?

mysql is a simple SQL shell (with GNU readline capabilities). It supports interactive and non-interactive use. When used interactively, query results are presented in an ASCII-table format. When used non-interactively (for example, as a filter), the result is presented in tab-separated format. The output format can be changed using command options.

What is mysqldump?

The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MariaDB server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.

Export a MySQL Database

Use mysqldump to export your database:
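
A minimal example, assuming a database named mydatabase and a MySQL user named user (both placeholders):

mysqldump -u user -p mydatabase > mydatabase.sql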

You can compress the data on the fly using a pipe and gzip.
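
For example, with the same placeholder names:

mysqldump -u user -p mydatabase | gzip > mydatabase.sql.gz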

Note: using gzip will save a lot of disk space for huge databases.

Import a MySQL Database

Use mysql to import your database:

Create the database first.
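
For example, using the same placeholder names as above:

mysql -u user -p -e "CREATE DATABASE mydatabase;"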

Import the database now.
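
For example:

mysql -u user -p mydatabase < mydatabase.sql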

If the file is compressed with gzip, use zcat to decompress it on the fly.
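
For example:

zcat mydatabase.sql.gz | mysql -u user -p mydatabase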

Handy scripts for admins who do backups daily

Using a script:
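
A minimal sketch of such a daily backup script, suitable for running from cron; the credentials, database names, and backup path are placeholders:

#!/bin/bash
# mysql-backup.sh (sketch): dump and compress each database, keep 7 days of backups
BACKUP_DIR=/var/backups/mysql        # placeholder backup location
DATE=$(date +%F)
mkdir -p "$BACKUP_DIR"
for DB in mydatabase anotherdb; do   # placeholder database names
    mysqldump -u backupuser -p'secret' "$DB" | gzip > "$BACKUP_DIR/${DB}-${DATE}.sql.gz"
done
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete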

How to whitelist Google IP address ranges in firewall using iptables

As an administrator, when you need to obtain a range of IP addresses for Google APIs and services’ default domains, you can refer to the following sources of information.

The default domains’ IP address ranges for Google APIs and services fit within the list of ranges between these 2 sources. (Subtract the usable ranges from the complete list.)

Once you get the IP address ranges, use the xargs command to update iptables.

google-ips-whitelist.sh
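
The script itself is not reproduced on this page; a minimal sketch, assuming the computed ranges are saved one CIDR per line in a file called google-ip-ranges.txt (the file name, chain, and target are placeholders), could be:

#!/bin/bash
# google-ips-whitelist.sh (sketch): add an ACCEPT rule for each Google range
cat google-ip-ranges.txt | xargs -I {} iptables -A INPUT -s {} -j ACCEPT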

how to set up apache proxy for django application

Apache HTTP Server can be configured in both a forward and reverse proxy (also known as gateway) mode.

forward proxy

An ordinary forward proxy is an intermediate server that sits between the client and the origin server. In order to get content from the origin server, the client sends a request to the proxy naming the origin server as the target. The proxy then requests the content from the origin server and returns it to the client. The client must be specially configured to use the forward proxy to access other sites.

A typical usage of a forward proxy is to provide Internet access to internal clients that are otherwise restricted by a firewall. The forward proxy can also use caching (as provided by mod_cache) to reduce network usage.

The forward proxy is activated using the ProxyRequests directive. Because forward proxies allow clients to access arbitrary sites through your server and to hide their true origin, it is essential that you secure your server so that only authorized clients can access the proxy before activating a forward proxy.

reverse proxy

A reverse proxy (or gateway), by contrast, appears to the client just like an ordinary web server. No special configuration on the client is necessary. The client makes ordinary requests for content in the namespace of the reverse proxy. The reverse proxy then decides where to send those requests and returns the content as if it were itself the origin.

A typical usage of a reverse proxy is to provide Internet users access to a server that is behind a firewall. Reverse proxies can also be used to balance load among several back-end servers or to provide caching for a slower back-end server. In addition, reverse proxies can be used simply to bring several servers into the same URL space.

A reverse proxy is activated using the ProxyPass directive or the [P] flag to the RewriteRule directive. It is not necessary to turn ProxyRequests on in order to configure a reverse proxy.

django application

I am running my gunicorn application on port 8090 using the following command.

/opt/venv/bin/python3.6 /opt/venv/bin/gunicorn --config /etc/controlpanel/gunicorn/controlpanel.py --pid /var/run/controlpanel.pid controlpanel.wsgi:application

The static files path is /opt/controlpanel/ui-ux/static/.

apache config (/etc/apache2/sites-enabled/cp.conf)
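
The original configuration is not shown on this page; a minimal sketch of the virtual host, assuming the placeholder domain cp.example.com (the port and static path come from above), could look like this:

<VirtualHost *:80>
    ServerName cp.example.com

    # Serve Django static files directly from Apache
    Alias /static/ /opt/controlpanel/ui-ux/static/
    <Directory /opt/controlpanel/ui-ux/static/>
        Require all granted
    </Directory>

    # Proxy everything else to gunicorn on port 8090
    ProxyPreserveHost On
    ProxyPass /static/ !
    ProxyPass / http://127.0.0.1:8090/
    ProxyPassReverse / http://127.0.0.1:8090/
</VirtualHost>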

  • Enable the mod_proxy module in Apache (see the command below).
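
On a Debian-based system this is typically done with:

sudo a2enmod proxy proxy_http
sudo systemctl restart apache2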

After deploying on Apache, you can use Let's Encrypt to install SSL certificates.

How to install & configure nvidia driver on arch linux

Nvidia is a graphics processing chip manufacturer that currently generates most of its revenue from the sales of graphics processing units (GPUs), which are used for competitive gaming, professional visualization, and cryptocurrency mining.

1. Install nvidia driver using pacman command

sudo pacman -S nvidia

Note: add a pacman hook so that the initramfs is rebuilt when the driver or kernel is upgraded (a sketch follows).
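
One possible hook, placed in /etc/pacman.d/hooks/nvidia.hook (the targets and path are assumptions for the stock nvidia package and linux kernel; the Arch Wiki documents a more complete version):

[Trigger]
Operation=Install
Operation=Upgrade
Operation=Remove
Type=Package
Target=nvidia
Target=linux

[Action]
Description=Rebuilding initramfs after an NVIDIA driver or kernel change
Depends=mkinitcpio
When=PostTransaction
Exec=/usr/bin/mkinitcpio -P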

2. Blacklist nouveau driver

sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"

3. Add graphics card configuration in Xorg server

/etc/X11/xorg.conf.d/20-nvidia.conf
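
A minimal example of what this file could contain; the BusID is machine-specific (check it with lspci) and is an assumption here:

Section "Device"
    Identifier "NVIDIA Card"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:1:0:0"
EndSection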

4. Load nvidia modules on boot – update the initramfs

/etc/mkinitcpio.conf

MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)

sudo mkinitcpio -P linux

5. Finally, update ~/.xinitrc

Use this command to list the providers, then update your ~/.xinitrc file accordingly:

xrandr --listproviders

~/.xinitrc
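
For example, on a machine where xrandr --listproviders reports a modesetting provider and an NVIDIA-0 provider (both names are assumptions; use the names from your own output), the file could start with:

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec i3    # or whichever window manager/desktop you start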

Test the GPU processes now

Using nvidia-smi

Using nvtop
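
For example:

nvidia-smi    # one-off snapshot of GPU utilization and processes
nvtop         # interactive, htop-like GPU monitor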

how to manage users with ansible

If you have multiple servers to manage, manually adding a new user, changing a password, or locking an old account on each of them is a real pain and a huge waste of time.

Using the Ansible user module, you can manage users and SSH keys in a single playbook run.

Create users

The home directory for the user is also created by default; you can choose a different location by setting the home parameter.

The following playbook is for Red Hat/CentOS.

You need to change the user group for Debian-based systems.

authorize_users.yml
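
The original playbook is not included on this page; a minimal sketch using the user and authorized_key modules (the user names, group, and key file paths are placeholders) might look like this:

---
- hosts: all
  become: yes
  vars:
    users:
      - akhil              # placeholder user names
      - deploy
  tasks:
    - name: Create users with a home directory
      user:
        name: "{{ item }}"
        groups: wheel      # use 'sudo' on Debian-based systems
        shell: /bin/bash
        state: present
      with_items: "{{ users }}"

    - name: Authorize SSH keys
      authorized_key:
        user: "{{ item }}"
        key: "{{ lookup('file', 'files/' + item + '.pub') }}"
      with_items: "{{ users }}"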

Running:

$ ENV=prod; ansible-playbook   -i inventories/$ENV --extra-vars "env=$ENV" authorize_users.yml

Remove Users

Removing an existing user is easy. You just have to set the ‘state’ parameter to ‘absent’. It executes the ‘userdel’ command in the background.

deauthorize_users.yml
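
Again, a minimal sketch with placeholder user names:

---
- hosts: all
  become: yes
  vars:
    users:
      - olduser            # placeholder
  tasks:
    - name: Remove users and their home directories
      user:
        name: "{{ item }}"
        state: absent
        remove: yes
      with_items: "{{ users }}"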

 

Running:

$ ENV=prod; ansible-playbook -i inventories/$ENV --extra-vars "env=$ENV" deauthorize_users.yml

how to manage airpods on linux

This article shows how to manage AirPods and AirPods Pro on Linux.

It uses PulseAudio and the oFono telephony service for the A2DP and HSP/HFP profiles.

Let's start…

1. Dependencies
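
The exact dependency list lives in the referenced repository; on a Debian-based system, the pieces mentioned above could be installed roughly like this (the package names are assumptions):

sudo apt install pulseaudio pulseaudio-module-bluetooth ofono blueman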

2. Download the script
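
Presumably the script from the repository listed in the references below:

git clone https://github.com/AkhilJalagam/pulseaudio-airpods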

3. Tweak the script for the first time

Replace the MAC address and card name in the script.

4. Usage

Note

You should first pair your AirPods using blueman-manager and trust them before using this script.

References

https://github.com/AkhilJalagam/pulseaudio-airpods

https://github.com/AkhilJalagam/i3blocks-airpods

Speed up SSH with multiplexing

SSH multiplexing is the ability to carry multiple SSH sessions over a single TCP connection.

OpenSSH can reuse an existing TCP connection for multiple concurrent SSH sessions, which reduces the overhead of creating new TCP connections.

The advantage of SSH multiplexing is that it speeds up certain operations that rely on, or occur over, SSH. For example, let's say that you're using SSH to regularly execute a command on a remote host. Without multiplexing, every time that command is executed your SSH client must establish a new TCP connection and a new SSH session with the remote host. With multiplexing, you can configure SSH to establish a single TCP connection that is kept alive for a specific period of time, and SSH sessions are established over that connection.

You can see the difference below.

Without multiplexing, we see the normal connection time:

$ time ssh lintel-blog

Then we do the same thing again, but with a multiplexed connection to see a faster result:

$ time ssh lintel-blog

Configure Multiplexing

OpenSSH client supports multiplexing its outgoing connections, since version 3.9, using the ControlMaster, ControlPath and ControlPersist configuration directives which get defined in ssh_config. The client configuration file usually defaults to the location ~/.ssh/config.

ControlMaster determines whether ssh will listen for control connections and what to do about them. ControlPath sets the location for the control socket used by the multiplexed sessions. These can be set either globally or locally in ssh_config, or else specified at run time. Control sockets are removed automatically when the master connection has ended. ControlPersist can be used in conjunction with ControlMaster. If ControlPersist is set to ‘yes’, it will leave the master connection open in the background to accept new connections until it is either killed explicitly, closed with -O, or reaches a pre-defined timeout. If ControlPersist is set to a time, then it will leave the master connection open for the designated time or until the last multiplexed session is closed, whichever is longer.

Here is a sample excerpt from ssh_config applicable for starting a multiplexed session to server1.example.org via the shortcut server1.
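
For example (the socket path and the 10-minute timeout are assumptions):

Host server1
    HostName server1.example.org
    ControlMaster auto
    ControlPath ~/.ssh/cm_%r@%h:%p
    ControlPersist 10m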

 

How to install jitsi meet on CentOS 7

Jitsi is a set of Open Source projects that allows you to easily build and deploy secure videoconferencing solutions.

Jitsi Meet is a fully encrypted, 100% Open Source video conferencing solution that you can use all day, every day, for free — with no account needed.

1. Architecture

A Jitsi Meet installation can be broken down into the following components:

  • A web interface
  • An XMPP server
  • A conference focus component
  • A video router (could be more than one)
  • A SIP gateway for audio calls
  • A Broadcasting Infrastructure for recording or streaming a conference.

In a typical deployment, all of these components run on a single host using Docker. This project separates each of the components above into interlinked containers. To this end, several container images are provided.

2. Ports

The following external ports must be opened on a firewall:

  • 80/tcp for Web UI HTTP (really just to redirect, after uncommenting ENABLE_HTTP_REDIRECT=1 in .env)
  • 443/tcp for Web UI HTTPS
  • 4443/tcp for RTP media over TCP
  • 10000/udp for RTP media over UDP

Also 20000-20050/udp for jigasi, in case you choose to deploy that to facilitate SIP access.

E.g. on a CentOS server this would be done like this (without SIP access):
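
For example, using firewalld, the default firewall on CentOS 7:

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=4443/tcp
firewall-cmd --permanent --add-port=10000/udp
firewall-cmd --reload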

 

3. Configuration

The configuration is performed via environment variables contained in a .env file. You can copy the provided env.example file as a reference.

a. Jibri Module Setup

Before running Jibri, you need to set up an ALSA loopback device on the host. This will not work on a non-Linux host.

For CentOS 7, the module is already compiled with the kernel, so just run:
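
A minimal sketch (the upstream guide may pass additional module options):

modprobe snd-aloop
echo "snd_aloop" >> /etc/modules-load.d/snd_aloop.conf    # load it on boot as well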

b. Installation

  • Clone the repository:

git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet

  • Create a .env file by copying and adjusting env.example
    • cp env.example .env
  • Set strong passwords in the security section options of .env file by running the following bash script
    • ./gen-passwords.sh
  • Create required CONFIG directories
    • mkdir -p ~/.jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
  • Run docker-compose up -d
  • Access the web UI at https://domain.com (or a different port, in case you edited the compose file).

 

If you want to use jigasi too, first configure your env file with SIP credentials and then run Docker Compose as follows: docker-compose -f docker-compose.yml -f jigasi.yml up

If you want to enable document sharing via Etherpad, configure it and run Docker Compose as follows: docker-compose -f docker-compose.yml -f etherpad.yml up

If you want to use jibri too, first configure a host as described in JItsi BRoadcasting Infrastructure configuration section and then run Docker Compose as follows: docker-compose -f docker-compose.yml -f jibri.yml up -d or to use jigasi too: docker-compose -f docker-compose.yml -f jigasi.yml -f jibri.yml up -d

Running behind NAT or on a LAN environment
If running in a LAN environment (as well as on the public Internet, via NAT) is a requirement, the DOCKER_HOST_ADDRESS should be set. This way, the Videobridge will advertise the IP address of the host running Docker instead of the internal IP address that Docker assigned it, thus making ICE succeed. If your users are coming in over the Internet (and not over LAN), this will likely be your public IP address. If this is not set up correctly, calls will crash when more than two users join a meeting.

The public IP address is discovered via STUN. STUN servers can be specified with the JVB_STUN_SERVERS option.

 


Parallel command execution – Linux Cluster

The pdsh parallel shell tool lets you run a shell command across multiple nodes in a cluster.

pdsh is a high-performance, multithreaded remote shell client that executes commands on multiple remote hosts in parallel. A parallel shell lets you run the same command on many designated hosts or nodes at once, so you do not have to log in to each node individually.

The dshgroup module adds dsh-style (Dancer's shell) group support, reading host groups from files in the /etc/dsh/group directory.

What is pdsh?

pdsh is a variant of the rsh(1) command. Unlike rsh(1), which runs commands on a single remote host, pdsh can run multiple remote commands in parallel. pdsh uses a “sliding window” (or fanout) of threads to conserve resources on the initiating host while allowing some connections to time out.

When pdsh receives SIGINT (ctrl-C), it lists the status of current threads. A second SIGINT within one second terminates the program. Pending threads may be canceled by issuing ctrl-Z within one second of ctrl-C. Pending threads are those that have not yet been initiated, or are still in the process of connecting to the remote host.

If a remote command is not specified on the command line, pdsh runs interactively, prompting for commands and executing them when terminated with a carriage return. In interactive mode, target nodes that time out on the first command are not contacted for subsequent commands, and commands prefixed with an exclamation point will be executed on the local system.

The core functionality of pdsh may be supplemented by dynamically loadable modules. The modules may provide a new connection protocol (replacing the standard rcmd(3) protocol used by rsh(1)), filtering options (e.g. removing hosts that are “down” from the target list), and/or host selection options (e.g., -a selects all hosts from a configuration file.). By default, pdsh must have at least one “rcmd” module loaded. See the RCMD MODULES section for more information.

Installing pdsh

Debian based:

apt install pdsh

RHEL based:

yum install pdsh

Running

The following command installs telegraf on all 4 nodes in cluster02
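
For example, assuming the four hosts are defined in a dsh group named cluster02 and use apt (both assumptions):

pdsh -g cluster02 "sudo apt install -y telegraf"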

Running multiple commands
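
Multiple commands can be chained with semicolons inside the quoted string, for example:

pdsh -g cluster02 "uname -r; uptime; df -h /"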

Pipe redirection
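
pdsh output can be piped like any other command; for example, dshbak (shipped with pdsh) consolidates identical output per host:

pdsh -g cluster02 "grep -c ^processor /proc/cpuinfo" | dshbak -c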

 

Example

 

When using ssh for remote execution, expect the stderr of ssh to be folded in with that of the remote command. When invoked by pdsh, it is not possible for ssh to prompt for passwords if RSA/DSA keys are configured properly, etc. For ssh implementations that support a connect timeout option, pdsh attempts to use that option to enforce the timeout (e.g. -oConnectTimeout=T for OpenSSH); otherwise connect timeouts are not supported when using ssh. Finally, there is no reliable way for pdsh to ensure that remote commands are actually terminated when using a command timeout. Thus if -u is used with ssh, commands may be left running on remote hosts even after the timeout has killed the local ssh processes.

Output from multiple processes per node may be interspersed when using qshell or mqshell rcmd modules.

The number of nodes that pdsh can simultaneously execute remote jobs on is limited by the maximum number of threads that can be created concurrently, as well as the availability of reserved ports in the rsh and qshell rcmd modules. On systems that implement Posix threads, the limit is typically defined by the constant PTHREAD_THREADS_MAX.


How to fix missing foreign keys and/or indexes – AWS DMS

AWS Database Migration Service (DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

The Database Migration Service is a data mover. It creates only the structures required to migrate your data (mainly for performance reasons). Additionally, it doesn't migrate secondary indexes, default values, procedures, triggers, auto-increment columns, etc. These objects/modifications need to be made after migrating the data (and typically prior to switching the application over).

But this can be fixed by importing the schema manually.

Problem

missing foreign keys and/or indexes

Solution

To fix the issue of missing foreign keys and indexes, follow these steps:

  1. Import the database schema manually to RDS.
  2. Set Target table preparation mode to Truncate

Using JSON:
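
The relevant excerpt of the task settings JSON looks roughly like this (all other settings omitted):

{
  "FullLoadSettings": {
    "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD"
  }
}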


Using the DMS GUI, set Target table preparation mode to Truncate in the task settings.


Now run the task.

You will see all foreign keys and indexes in the target (RDS).