How to use the ipset command on Linux to block bulk IPs

ipset is a companion application for the iptables Linux firewall. It allows you to set up rules to quickly and easily block a set of IP addresses, among other things.

Installation

Debian based system

# apt install ipset

Redhat based system

# yum install ipset

Blocking a list of networks

Start by creating a new “set” of network addresses. This creates a new “hash” set of “net” network addresses named “myset”.
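For example:

# ipset create myset hash:net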

or
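# ipset -N myset nethash

(the older, equivalent ipset syntax)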

Add any IP address that you’d like to block to the set.
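For example, to add a couple of example networks:

# ipset add myset 14.144.0.0/12
# ipset add myset 27.8.0.0/13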

Finally, configure iptables to block any address in that set. This command will add a rule to the top of the “INPUT” chain to “-m” match the set named “myset” from ipset (–match-set) when it’s a “src” packet and “DROP”, or block, it.
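That is:

# iptables -I INPUT -m set --match-set myset src -j DROP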

Blocking a list of IP addresses

Start by creating a new “set” of ip addresses. This creates a new “hash” set of “ip” addresses named “myset-ip”.
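For example:

# ipset create myset-ip hash:ip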

or
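# ipset -N myset-ip iphash

(the older, equivalent ipset syntax)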

Add any IP address that you’d like to block to the set.
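For example (the addresses are placeholders):

# ipset add myset-ip 1.1.1.1
# ipset add myset-ip 2.2.2.2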

Finally, configure iptables to block any address in that set.
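# iptables -I INPUT -m set --match-set myset-ip src -j DROP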

Making ipset persistent

The ipset you have created is stored in memory and will be gone after a reboot. To make the ipset persistent you have to do the following:

First save the ipset to /etc/ipset.conf:
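# ipset save > /etc/ipset.conf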

Then enable ipset.service, which works similarly to iptables.service for restoring iptables rules.
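# systemctl enable ipset.service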

Other Commands

To view the sets:
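# ipset list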

or
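# ipset -L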

To delete a set named “myset”:
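# ipset destroy myset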

or
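# ipset -X myset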

To delete all sets:
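# ipset destroy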

How to configure IPsec/L2TP VPN Clients on Linux

After setting up your own VPN server, follow these steps to configure your devices. In case you are unable to connect, first, check to make sure the VPN credentials were entered correctly.

Commands must be run as root on your VPN client.

To set up the VPN client, first install the following packages:
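For example, on a Debian/Ubuntu client (use yum with the same package names on RHEL/CentOS):

# apt-get update
# apt-get install strongswan xl2tpd net-tools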

Create VPN variables (replace with actual values):
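The values below are placeholders:

VPN_SERVER_IP='your_vpn_server_ip'
VPN_IPSEC_PSK='your_ipsec_pre_shared_key'
VPN_USER='your_vpn_username'
VPN_PASSWORD='your_vpn_password'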

Configure strongSwan:
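A minimal sketch of the strongSwan configuration (the connection name "myvpn" is an example):

# cat > /etc/ipsec.conf <<EOF
conn myvpn
  auto=add
  keyexchange=ikev1
  authby=secret
  type=transport
  left=%defaultroute
  leftprotoport=17/1701
  rightprotoport=17/1701
  right=$VPN_SERVER_IP
EOF

# cat > /etc/ipsec.secrets <<EOF
: PSK "$VPN_IPSEC_PSK"
EOF

# chmod 600 /etc/ipsec.secrets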

Configure xl2tpd:
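Again a sketch, matching the "myvpn" connection name used above:

# cat > /etc/xl2tpd/xl2tpd.conf <<EOF
[lac myvpn]
lns = $VPN_SERVER_IP
ppp debug = yes
pppoptfile = /etc/ppp/options.l2tpd.client
length bit = yes
EOF

# cat > /etc/ppp/options.l2tpd.client <<EOF
ipcp-accept-local
ipcp-accept-remote
refuse-eap
require-chap
noccp
noauth
mtu 1280
mru 1280
noipdefault
defaultroute
usepeerdns
connect-delay 5000
name $VPN_USER
password $VPN_PASSWORD
EOF

# chmod 600 /etc/ppp/options.l2tpd.client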

The VPN client setup is now complete. Follow the steps below to connect.

Note: You must repeat all steps below every time you try to connect to the VPN.

Create xl2tpd control file:
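# mkdir -p /var/run/xl2tpd
# touch /var/run/xl2tpd/l2tp-control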

Restart services:
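Service names can vary by distro; typically something like:

# service strongswan restart
# service xl2tpd restart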

Start the IPsec connection:
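# ipsec up myvpn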

Start the L2TP connection:
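# echo "c myvpn" > /var/run/xl2tpd/l2tp-control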

Run ifconfig and check the output. You should now see a new interface ppp0.

Check your existing default route:
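# ip route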

Find this line in the output: default via X.X.X.X .... Write down this gateway IP for use in the two commands below.

Exclude your VPN server’s IP from the new default route (replace with actual value):
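YOUR_VPN_SERVER_IP and X.X.X.X (the gateway noted above) are placeholders:

# route add YOUR_VPN_SERVER_IP gw X.X.X.X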

If your VPN client is a remote server, you must also exclude your Local PC’s public IP from the new default route, to prevent your SSH session from being disconnected (replace with actual value):
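# route add YOUR_LOCAL_PC_PUBLIC_IP gw X.X.X.X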

Add a new default route to start routing traffic via the VPN server:
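# route add default dev ppp0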

The VPN connection is now complete. Verify that your traffic is being routed properly:
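For example, check your public IP:

# wget -qO- http://ipv4.icanhazip.com; echo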

The above command should return your VPN server's IP.

To stop routing traffic via the VPN server:
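# route del default dev ppp0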

To disconnect:
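# echo "d myvpn" > /var/run/xl2tpd/l2tp-control
# ipsec down myvpn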

How to specify the source address for all outbound connections

If you have multiple IP addresses assigned on your Linux machine, you may want some applications to use a different source IP than the default one. Updating IP routes every time isn't a good idea, and it's easy to mess up.

get bindhack.c

wget 'https://gist.githubusercontent.com/akhilin/f6660a2f93f64545ff8fcc0d6b23e42a/raw/7bf3f066b74a4b9e3d3768a8affee26da6a3ada6/bindhack.c' -P /tmp/

compile it

gcc -fPIC -static -shared -o /tmp/bindhack.so /tmp/bindhack.c -lc -ldl

Copy it to library folder

cp /tmp/bindhack.so /usr/lib/ && chmod +x /usr/lib/bindhack.so

Optional (skip this if you already have a nameserver configured):

echo 'nameserver 8.8.8.8' >> /etc/resolv.conf

using bindhack

BIND_ADDR=<source ip> LD_PRELOAD=/usr/lib/bindhack.so <command here>

Example
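The source IP and the command are just illustrations:

BIND_ADDR=192.168.1.100 LD_PRELOAD=/usr/lib/bindhack.so wget -qO- ifconfig.me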

 

you can add below function in your .bashrc to spin it at any time
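A small helper along these lines (the function name is arbitrary):

bindhack() {
    # usage: bindhack <source ip> <command...>
    BIND_ADDR=$1 LD_PRELOAD=/usr/lib/bindhack.so "${@:2}"
}

# example: bindhack 192.168.1.100 wget -qO- ifconfig.me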

 

 

take a look at bindhack.c

 

 

Speed up Ansible

Update to the latest version. Ansible 2.0 is slower than Ansible 1.9 because it included an important change to the execution engine to allow any user to choose the execution algorithm to be used. In the versions that followed, and mostly in 2.1, big optimizations have been done to increase execution speed, so be sure to be running the latest possible version.

Profiling Tasks

The best way I’ve found to time the execution of Ansible playbooks is by enabling the profile_tasks callback. This callback is included with Ansible and all you need to do to enable it is add callback_whitelist = profile_tasks to the [defaults] section of your ansible.cfg:
# ansible.cfg
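[defaults]
callback_whitelist = profile_tasks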

 

Enable pipelining

You can enable pipelining by simply adding pipelining = True to the [ssh_connection] section of your ansible.cfg, or by using the ANSIBLE_PIPELINING and ANSIBLE_SSH_PIPELINING environment variables.
# ansible.cfg
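[ssh_connection]
pipelining = True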
You’ll also need to make sure that requiretty is disabled in /etc/sudoers on the remote host, or become won’t work with pipelining enabled.

Enable Mitogen for Ansible

Enabling Mitogen for Ansible is as simple as downloading and extracting the plugin, then adding 2 lines to the [defaults] section of your ansible.cfg:
# ansible.cfg
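A sketch; adjust the path to wherever you extracted the Mitogen release:

[defaults]
strategy_plugins = /path/to/mitogen-0.2.x/ansible_mitogen/plugins/strategy
strategy = mitogen_linear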

SSH multiplexing

The first thing to check is whether SSH multiplexing is enabled and used. This gives a tremendous speed boost because Ansible can reuse opened SSH sessions instead of negotiating a new one (actually more than one) for every task. Ansible has this setting turned on by default. It can be set in the configuration file as follows:
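Ansible's default is roughly:

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s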

But be careful when overriding ssh_args — if you don't set ControlMaster and ControlPersist in your override, Ansible will "forget" to use them.

To check whether SSH multiplexing is used, start Ansible with  -vvvv  option:
ansible test -vvvv -m ping

UseDNS

UseDNS is an SSH-server setting (in the /etc/ssh/sshd_config file) which forces the server to check a client's PTR record upon connection. It may cause connection delays, especially with slow DNS servers on the server side. In modern Linux distributions, this setting is turned off by default, which is correct.

PreferredAuthentications

It is an SSH client setting which informs the server about preferred authentication methods. By default Ansible uses:
-o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
So if GSSAPI authentication is enabled on the server (at the time of writing it is turned on in the RHEL EC2 AMI), it will be tried as the first option, forcing the client and server to make PTR-record lookups. But in most cases we want to use only public key auth. We can force Ansible to do so by changing ansible.cfg:
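[ssh_connection]
ssh_args = -o PreferredAuthentications=publickey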

 

Facts Gathering

At the start of playbook execution, Ansible collects facts about the remote system (this is the default behaviour for ansible-playbook, but not relevant to ansible ad-hoc commands). It is similar to calling the "setup" module and thus requires another ssh communication step. If you don't need any facts in your playbook (e.g. our test playbook), you can disable fact gathering:
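For example, in the play header:

- hosts: all
  gather_facts: no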

Fork

Until now we discussed how to speed up playbook execution on a given remote host. But if you run playbooks against tens or hundreds of hosts, Ansible's internal performance becomes a bottleneck. For example, there is a preconfigured number of forks – the number of hosts that can be interacted with simultaneously. You can change this value in the ansible.cfg file:
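For example (20 is just an example value):

[defaults]
forks = 20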

 

The default value is 5, which is quite conservative. You can experiment with this setting depending on your local CPU and network bandwidth resources.
Another thing about forks: if you have a lot of servers to work with and a low number of available forks, your master ssh sessions may expire between tasks. Ansible uses the linear strategy by default, which executes one task for every host and then proceeds to the next task. This way, if the time between task execution on the first server and on the last one is greater than ControlPersist, the master socket will have expired by the time Ansible starts the following task on the first server, and a new ssh connection will be required.

Poll Interval

When a module is executed on a remote host, Ansible polls for its result. The lower the interval between poll attempts, the higher the CPU load on the Ansible control host. But we want CPU available for a greater number of forks (see above). You can tweak the poll interval in ansible.cfg:
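A sketch; internal_poll_interval is the relevant [defaults] setting (0.001 is the default):

[defaults]
internal_poll_interval = 0.001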

 

If you run "slow" jobs (like backups) on multiple hosts, you may want to increase the interval to 0.05 to use less CPU.
Hopefully this helps you speed up your setup. It seems there are no more items on the environment checklist, and further speed gains are only possible by optimizing your playbook code.

Asynchronous Actions and Polling

By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout.
To avoid blocking or timeout issues, you can use asynchronous mode to run all of your tasks at once and then poll until they are done.
The behaviour of asynchronous mode depends on the value of poll.

Avoid connection timeouts: poll > 0

When poll is a positive value, the playbook will still block on the task until it either completes, fails or times out.
In this case, however, async explicitly sets the timeout you wish to apply to this task rather than being limited by the connection method timeout.
To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status. The default poll value is 15 seconds if you do not specify a value for poll:
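For example (sleep stands in for a long-running operation):

- name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
  command: /bin/sleep 15
  async: 45
  poll: 5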

 

Concurrent tasks: poll = 0

When poll is 0, Ansible will start the task and immediately move on to the next one without waiting for a result.
From the point of view of sequencing this is asynchronous programming: tasks may now run concurrently.
The playbook run will end without checking back on async tasks.
The async tasks will run until they either complete, fail or timeout according to their async value.
If you need a synchronization point with a task, register it to obtain its job ID and use the async_status module to observe it.
You may run a task asynchronously by specifying a poll value of 0:
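- name: simulate long running op, allow to run for 45 sec, fire and forget
  command: /bin/sleep 15
  async: 45
  poll: 0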

 

Enable fact_caching

By enabling fact caching we're telling Ansible to keep the facts it gathers in a local file (you can also use a redis cache; see the documentation for details).
Fact gathering is what happens when Ansible says "Gathering facts" about your target hosts. If you don't change your targets' hardware (or virtual hardware) very often, caching can be very helpful.
If you still need some of the fact groups, but at the same time the gathering process is too slow for you, you can try fact caching.
Caching enables Ansible to store the facts for a given host in some kind of backend; the caching plugin supports backends such as jsonfile, redis, and memcached (see the cache-plugin documentation for the full list).
This is an example configuration of fact caching in JSON files, added to ansible.cfg:
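A sketch, assuming the jsonfile cache plugin and an arbitrary cache path:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400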

References:

1. https://dzone.com/articles/speed-up-ansible

2. https://habr.com/en/post/453446/

3. https://www.toptechskills.com/ansible-tutorials-courses/speed-up-ansible-playbooks-pipelining-mitogen/

4. https://www.youtube.com/watch?v=NZUYAbGs-ec

5 Ways to Speed Up SSH Connections in Linux

SSH is the most popular and secure method for managing Linux servers remotely. One of the challenges with remote server management is connection speeds, especially when it comes to session creation between the remote and local machines.

There are several bottlenecks to this process, one scenario is when you are connecting to a remote server for the first time; it normally takes a few seconds to establish a session. However, when you try to start multiple connections in succession, this causes an overhead (combination of excess or indirect computation time, memory, bandwidth, or other related resources to carry out the operation).

In this article, we will share five useful tips on how to speed up remote SSH connections in Linux.

1. Use Compression option in SSH

From the ssh man page (type man ssh to see the whole thing):
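The relevant option is -C (compression). For example, per connection:

ssh -C user@example.com

or permanently, in ~/.ssh/config:

Compression yes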

 

2. Force SSH Connection Over IPv4

OpenSSH supports both IPv4 and IPv6, but at times IPv6 connections tend to be slower. So you can consider forcing ssh connections over IPv4 only, using the syntax below:
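ssh -4 user@example.com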

Alternatively, use the AddressFamily directive (it specifies the address family to use when connecting) in your /etc/ssh/ssh_config file (global configuration) or ~/.ssh/config (user-specific file).

The accepted values are “any”, “inet” for IPv4 only, or “inet6”.

AddressFamily inet

3. Reuse SSH Connection

An ssh client program is used to establish connections to an sshd daemon accepting remote connections. You can reuse an already-established connection when creating a new ssh session and this can significantly speed up subsequent sessions.

You can enable this in your ~/.ssh/config file.

ControlMaster auto
ControlPath /home/akhil/.ssh/sockets/ssh_mux_%x_%p_%r
ControlPersist yes
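Note that the directory used in ControlPath has to exist, e.g.:

mkdir -p ~/.ssh/sockets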

openssh doesn't support %x (IP address) in control paths; use my repo instead:

https://github.com/akhilin/openssh-portable.git

or use %h to use hostname instead of ip address

Using the IP address is recommended so that even if you connect using different hostnames, the same socket is reused (very useful when using ansible, pdsh).

4. Use Specific SSH Authentication Method

Another way of speeding up ssh connections is to use a given authentication method for all ssh connections, and here we recommend configuring ssh passwordless login using ssh keygen in 5 easy steps.

Once that is done, use the PreferredAuthentications directive, within the ssh_config files (global or user specific) above. This directive defines the order in which the client should try authentication methods (you can specify a comma-separated list to use more than one method).

PreferredAuthentications=publickey

If you prefer password authentication, which is considered insecure, use this instead:
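PreferredAuthentications=password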

5. Disable DNS Lookup On Remote Machine

By default, the sshd daemon looks up the remote host name and also checks that the resolved host name for the remote IP address maps back to the very same IP address. This can result in delays in connection establishment or session creation.

The UseDNS directive controls the above functionality; to disable it, search and uncomment it in the /etc/ssh/sshd_config file. If it’s not set, add it with the value no.

UseDNS=no

Installing ELK Stack (Elasticsearch, Logstash, Kibana) on CentOS with Sentinl plugin

The ELK stack, also known as the Elastic Stack, consists of Elasticsearch, Logstash, and Kibana. It helps you store all of your logs in one place and analyze issues by correlating the events at a particular time.

This guide helps you to install ELK stack on CentOS 7 / RHEL 7.

Components

Logstash – It does the processing (Collect, enrich and send it to Elasticsearch) of incoming logs sent by beats (forwarder).

Elasticsearch – It stores incoming logs from Logstash and provides the ability to search the logs/data in real time.

Kibana – Provides visualization of logs.

Sentinl – Sentinl extends Siren Investigate and Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions. Think of it as a free and independent "Watcher" which also has scheduled "Reporting" capabilities (PNG/PDF snapshots).

SENTINL is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native App Interface, or by using native watcher tools in Kibana 6.x+.

 

Beats – Installed on client machines, send logs to Logstash through beats protocol.

Environment

To have a full-featured ELK stack, we would need two machines to test the collection of logs.

ELK Stack – server.lintel.local

Filebeat – client.lintel.local

Prerequisites

Install Java

Since Elasticsearch is based on Java, make sure you have either OpenJDK or Oracle JDK installed on your machine.

Here, I am using OpenJDK 1.8.

Verify the Java version.
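# java -version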

Output:

Configure ELK repository

Import the Elastic signing key.
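# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch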

Setup the Elasticsearch repository and install it.

Add the below content to the elk.repo file.
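For example, in /etc/yum.repos.d/elk.repo (the 6.x repository is assumed here):

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md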

Install Elasticsearch

Elasticsearch is an open-source search engine that offers real-time distributed search and analytics with a RESTful web interface. Elasticsearch stores all the data sent by Logstash and displays it through the web interface (Kibana) on the user's request.

Install Elasticsearch.
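# yum install -y elasticsearch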

Configure Elasticsearch to start during system startup.
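# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch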

Use CURL to check whether the Elasticsearch is responding to the queries or not.
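# curl -X GET http://localhost:9200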

Output:

Install Logstash

Logstash is an open-source tool for managing events and logs; it collects the logs, parses them, and stores them in Elasticsearch for searching. Over 160 plugins are available for Logstash, providing the capability to process different types of events with no extra work.

Install the Logstash package.
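# yum install -y logstash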

Create SSL certificate (Optional)

Filebeat (Logstash Forwarder) is normally installed on client servers, and it uses an SSL certificate to validate the identity of the Logstash server for secure communication.

Create SSL certificate either with the hostname or IP SAN.

(Hostname FQDN)

If you use the Logstash server hostname in the beats (forwarder) configuration, make sure you have an A record for the Logstash server, and also ensure that the client machine can resolve the hostname of the Logstash server.

Go to the OpenSSL directory.
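On CentOS/RHEL this is typically:

# cd /etc/pki/tls/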

Now, create the SSL certificate. Replace server.lintel.local with the hostname of your real Logstash server.
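Something like the following, run from /etc/pki/tls:

# openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout private/logstash-forwarder.key \
    -out certs/logstash-forwarder.crt \
    -subj /CN=server.lintel.local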

Configure Logstash

Logstash configuration can be found in /etc/logstash/conf.d/. A Logstash configuration file consists of three sections: input, filter, and output. All three sections can be placed either in a single file or in separate files ending in .conf.

I recommend you to use a single file for placing input, filter and output sections.

In the first section, we will put an entry for input configuration. The following configuration sets Logstash to listen on port 5044 for incoming logs from the beats (forwarder) that sit on client machines.

Also, add the SSL certificate details in the input section for secure communication – Optional.
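A sketch of the input section (the certificate paths match the files created above):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}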

In the filter section, we will use Grok to parse the logs before sending them to Elasticsearch. The following grok filter will look for logs labeled as "syslog" and try to parse them to make a structured index.
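A sketch using the stock syslog grok patterns:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}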

For more filter patterns, take a look at grokdebugger page.

In the output section, we will define the location where the logs to get stored; obviously, it should be Elasticsearch.
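For example (the index naming is one common choice):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}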

Now start and enable the Logstash service.
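# systemctl start logstash
# systemctl enable logstash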

You can troubleshoot any issues by looking at Logstash logs.
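The default log location is usually:

# cat /var/log/logstash/logstash-plain.log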

Install & Configure Kibana

Kibana provides visualization of logs stored on the Elasticsearch. Install the Kibana using the following command.
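# yum install -y kibana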

Edit the kibana.yml file.
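# vi /etc/kibana/kibana.yml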

By default, Kibana listens on localhost, which means you cannot access the Kibana interface from external machines. To allow it, edit the below line with your machine's IP.
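For example, to listen on all interfaces (or put your machine's IP instead):

server.host: "0.0.0.0"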

Uncomment the following line and update it with the Elasticsearch instance URL. In my case, it is localhost.
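For Kibana 6.x this is:

elasticsearch.url: "http://localhost:9200"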

Start and enable kibana on system startup.
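# systemctl start kibana
# systemctl enable kibana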

Install Sentinl plugin:
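A sketch; pick the Sentinl release that matches your Kibana version (the URL below is a placeholder):

# /usr/share/kibana/bin/kibana-plugin install https://github.com/sirensolutions/sentinl/releases/download/tag-<version>/sentinl-v<version>.zip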

Install and Configure Filebeat

There are four beats clients available

  1. Packetbeat – Analyze network packet data.
  2. Filebeat – Real-time insight into log data.
  3. Topbeat – Get insights from infrastructure data.
  4. Metricbeat – Ship metrics to Elasticsearch.

To analyze the system logs of the client machine (Ex. client.lintel.local), we need to install filebeat. Create beats.repo file.
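# vi /etc/yum.repos.d/beats.repo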

Add the below content to the above repo file.
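The Beats packages ship from the same Elastic 6.x repository, so the content is essentially the same as elk.repo:

[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md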

Now, install Filebeat using the following command.
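# yum install -y filebeat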

Set up a host entry on the client machine in case your environment does not have a DNS server.

Make a host entry like below on the client machine.
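The IP address is an example; use your Logstash server's real IP:

# echo "192.168.1.10 server.lintel.local" >> /etc/hosts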

Filebeat (beats) uses SSL certificate for validating Logstash server identity, so copy the logstash-forwarder.crt from the Logstash server to the client.
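For example:

# scp root@server.lintel.local:/etc/pki/tls/certs/logstash-forwarder.crt /etc/ssl/certs/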

Skip this step, in case you are not using SSL in Logstash.

Filebeat configuration file is in YAML format, which means indentation is very important. Make sure you use the same number of spaces used in the guide.

Open up the filebeat configuration file.
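# vi /etc/filebeat/filebeat.yml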

On top, you would see the prospectors section. Here, you need to specify which logs should be sent to Logstash and how they should be handled. Each prospector starts with a – character.

For testing purpose, we will configure filebeat to send /var/log/messages to Logstash server. To do that, modify the existing prospector under paths section.

Comment out the – /var/log/*.log to avoid sending all .log files present in that directory to Logstash.
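A sketch for Filebeat 6.x (the prospector/input syntax differs slightly across versions):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/messages
    #- /var/log/*.log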

Comment out the section output.elasticsearch: as we are not going to store logs directly to Elasticsearch.
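That is, leave it commented out:

#output.elasticsearch:
#  hosts: ["localhost:9200"]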

Now, find the line output.logstash and modify the entries like below. This section configures Filebeat to send logs to the Logstash server server.lintel.local on port 5044, and mentions the path where the copied SSL certificate is placed.

Replace server.lintel.local with IP address in case if you are using IP SAN.
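A sketch (the certificate path matches where we copied it above):

output.logstash:
  hosts: ["server.lintel.local:5044"]
  ssl.certificate_authorities: ["/etc/ssl/certs/logstash-forwarder.crt"]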

Restart the service.
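# systemctl restart filebeat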

Beats logs are typically found in the syslog file.

Access Kibana

Access the Kibana using the following URL.

http://your-ip-address:5601/

You would get the Kibana’s home page.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Kibana Starting Page

On your first login, you have to map the filebeat index. Go to Management >> Index Patterns.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Management

Type the following in the Index pattern box.
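The index pattern for Filebeat data is typically:

filebeat-*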

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Create Index Pattern

You should see at least one filebeat index something like above. Click Next step.

Select @timestamp and then click on Create.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Configure Timestamp

Verify your index patterns and its mappings.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Index Mappings

Now, click Discover to view the incoming logs and perform search queries.

Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS 7 – Discover Logs

You can see the Sentinl plugin in the Kibana sidebar.

That’s All.

 

Reference list:

https://github.com/sirensolutions/sentinl

https://www.itzgeek.com

InlineCallbacks

Twisted features a decorator named inlineCallbacks which allows you to work with deferreds without writing callback functions.

This is done by writing your code as generators, which yield deferreds instead of attaching callbacks.

Consider the following function written in the traditional deferred style:
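A sketch of the kind of code this refers to, assuming the txredisapi client (commonly imported as redis), whose Connection() returns a deferred:

import txredisapi as redis   # assumption: the txredisapi client

def get_value(key):
    d = redis.Connection()           # returns a Deferred that fires with a connection

    def connected(conn):
        return conn.get(key)         # get() also returns a Deferred

    d.addCallback(connected)
    return d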

using inlineCallbacks, we can write this as:

from twisted.internet.defer import inlineCallbacks
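# a sketch, continuing the txredisapi-based example above
@inlineCallbacks
def get_value(key):
    conn = yield redis.Connection()   # yield the Deferred instead of adding a callback
    value = yield conn.get(key)
    return value                      # on older Python/Twisted, use defer.returnValue(value)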

Instead of calling addCallback on the deferred returned by redis.Connection, we yield it. This causes Twisted to return the deferred's result to us.

Though the inlineCallbacks version looks like synchronous code, which blocks while waiting for the request to finish, each yield statement allows other code to run while the deferred being yielded waits to fire.

inlineCallbacks become even more powerful when dealing with complex control flow and error handling.

txRedis

txredis is a non-blocking client for the redis database, written in Python. It uses twisted for the asynchronous communication with redis.

Install
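pip install txredis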

Now, to check whether txredis is properly installed, go to the Python prompt and import txredis:
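$ python
>>> import txredis
>>>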

Example :

Using redis-cli, add a hash of users to the redis server with hmset:
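The hash name "users" and its fields are example data:

127.0.0.1:6379> hmset users abc 123 xyz 456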

hmset sets key to value within the hash name, for each corresponding key and value from the mapping dict.

Now, get the user's value using Python and Twisted:
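A sketch along the lines the article describes (the txredis import path and the users/abc hash are assumptions):

from twisted.internet import reactor, protocol
from txredis.client import RedisClient   # import path may vary across txredis versions


def onRedisConnect(redis):
    # hget returns a Deferred with the value of field 'abc' in hash 'users'
    d = redis.hget('users', 'abc')
    d.addCallback(onSuccess)
    d.addErrback(onFailure)

def errRedisConnect(reason):
    print('Cannot connect to redis:', reason.getErrorMessage())
    reactor.stop()

def onSuccess(value):
    print('Value:', value)
    reactor.stop()

def onFailure(reason):
    print('hget failed:', reason.getErrorMessage())
    reactor.stop()

rediscreator = protocol.ClientCreator(reactor, RedisClient)
d = rediscreator.connectTCP('localhost', 6379)
d.addCallback(onRedisConnect)
d.addErrback(errRedisConnect)

reactor.run()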

The above code makes the connection with the redis server.

Here, rediscreator.connectTCP returns a deferred, and the appropriate handlers are registered with addCallback and addErrback.

Here, we define the callback function onRedisConnect and the errback errRedisConnect.

Here, hget returns the value of the key within the hash name, and returns a deferred.

Here, we define the callback functions onSuccess and onFailure and log the appropriate result.

It gives the output 123, which corresponds to user abc.

How to Implement Multiservice in Twisted

The Multiservice module is a service collection provided by Twisted, which is useful for creating a new service that combines two or more existing services.

The major tool that manages a Twisted application is a command-line utility called twistd. twistd is cross-platform and is the recommended tool for running Twisted applications.

The core component of the Twisted application infrastructure is the twisted.application.service.Application object, which represents your application. The Application acts as a container for any "Services" that your application provides. This is done through Services.

Services manage parts of the application that can be started and stopped. An Application object can contain many services, or even hierarchies of Services, using "MultiService" or your own custom IServiceCollection implementations.

Multiservice Implementation:

To use MultiService, which implements IService, first import the internet and service modules:
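from twisted.application import internet, service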

 

Example :
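A sketch of such a .tac file (serving the current directory over HTTP is just an example workload):

# serviceexample.tac
from twisted.application import internet, service
from twisted.web import server, static

# the top-level Application object
application = service.Application("serviceexample")

# a MultiService that will hold our child services
agentservice = service.MultiService()

# two TCP web servers, added via addService
webservice1 = internet.TCPServer(8082, server.Site(static.File(".")))
agentservice.addService(webservice1)

webservice2 = internet.TCPServer(8083, server.Site(static.File(".")))
agentservice.addService(webservice2)

# attach the service collection to the application
agentservice.setServiceParent(application)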

To run it, save the above code in a file as serviceexample.tac. Here, a "tac" file is a regular Python file. In the Twisted application infrastructure, protocol implementations live in a module, services using those protocols are registered in a Twisted Application Configuration (TAC) file, and the reactor and configuration are managed by an external utility.

Here, I use the MultiService functionality from the service module. agentservice is the MultiService object. Services are then added to it using the addService method; a service can be a web server, an FTP server, an SSH client, and so on. After this, set the application name and attach the service collection to the application with the setServiceParent method.

Now, add a service on port 8082:
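webservice1 = internet.TCPServer(8082, server.Site(static.File(".")))
agentservice.addService(webservice1)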

Add another service, in the same way as above, on port 8083:
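webservice2 = internet.TCPServer(8083, server.Site(static.File(".")))
agentservice.addService(webservice2)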

To run the serviceexample.tac file using the twistd program, use the command twistd -y serviceexample.tac -n. After this, open a browser and enter the URLs localhost:8082 and localhost:8083. You can see the result on the web pages, and both TCP servers are active.

Asynchronous DB Operations in Twisted

Twisted is an asynchronous networking framework, but other database API implementations have blocking interfaces.

For this reason, twisted.enterprise.adbapi was created. It is a non-blocking interface, which allows you to access a number of different RDBMSes.

General Method to access DB API.

1 ) Create a Connection with db.

2) create a cursor.

3) do a query.
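For contrast, the classic blocking DB-API pattern looks something like this (MySQLdb and the agents table are just examples):

import MySQLdb

conn = MySQLdb.connect(user='root', passwd='123456', db='agentdata')
cursor = conn.cursor()
cursor.execute("SELECT * FROM agents")   # blocks until the query finishes
rows = cursor.fetchall()
conn.close()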

A cursor blocks while waiting for its response. Such delays are unacceptable when using an asynchronous framework such as Twisted.
To overcome the blocking interface, Twisted provides an asynchronous wrapper for DB modules: twisted.enterprise.adbapi.

Database Connection using adbapi API.

To use adbapi, we import the dependencies as below:
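from twisted.enterprise import adbapi
from twisted.internet import reactor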

1) Connect Database using adbapi.ConnectionPool
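For example (database name, user, and password are the ones used later in the demo):

dbpool = adbapi.ConnectionPool('MySQLdb', user='root', passwd='123456', db='agentdata')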

Here, we do not need to import the db module directly.
The arguments that would be passed to dbmodule.connect are given as extra arguments to adbapi.ConnectionPool's constructor.

2) Run Database Query
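A sketch (the agents table is hypothetical):

d = dbpool.runQuery("SELECT name FROM agents WHERE id = %s", (1,))

def onResult(rows):
    for row in rows:
        print(row)

d.addCallback(onResult)
d.addErrback(lambda failure: print('query failed:', failure.getErrorMessage()))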

Here, I used the '%s' paramstyle for MySQL. If you use another database module, you need to use its compatible paramstyle. For more, see the DB-API specification.

Twisted doesn’t attempt to offer any sort of magic parameter munging – runQuery(query,params,…) maps directly onto cursor.execute(query,params,…).

This query returns Deferred, which allows arbitrary callbacks to be called upon completion (or failure).

Demo : Select, Insert and Update query in Database.

Here, I have used the MySQLdb API, with agentdata as the database name, root as the user, and 123456 as the password.
Also, I have created select, insert, and update queries for the select, insert, and update operations respectively.
The runQuery method returns a deferred. For this, add a callback and an errback to handle success and failure respectively.
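A sketch of such a demo (the agents table and its columns are assumptions; runOperation, runQuery's companion, is used for the statements that return no rows):

from twisted.enterprise import adbapi
from twisted.internet import reactor

dbpool = adbapi.ConnectionPool('MySQLdb', user='root', passwd='123456', db='agentdata')

def onSuccess(result):
    print('Success:', result)

def onError(failure):
    print('Error:', failure.getErrorMessage())

# INSERT and UPDATE return no rows, so runOperation is used
d1 = dbpool.runOperation(
    "INSERT INTO agents (name, status) VALUES (%s, %s)", ('agent007', 'active'))
d1.addCallbacks(onSuccess, onError)

d2 = dbpool.runOperation(
    "UPDATE agents SET status = %s WHERE name = %s", ('inactive', 'agent007'))
d2.addCallbacks(onSuccess, onError)

# SELECT returns rows, so runQuery is used
d3 = dbpool.runQuery("SELECT name, status FROM agents")
d3.addCallbacks(onSuccess, onError)

reactor.callLater(3, reactor.stop)
reactor.run()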