How to use SSH as a VPN tunnel

SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections.

What is SSH Tunneling?

A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4.

SSH tunneling is a method of transporting arbitrary networking data over an encrypted SSH connection. It can be used to add encryption to legacy applications. … It also provides a way to secure the data traffic of any given application using port forwarding, basically tunneling any TCP/IP port over SSH.
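For instance, classic ssh port forwarding already gives you a small, ad-hoc tunnel (host names and ports below are illustrative):

# Forward local port 8080 to port 80 on a host reachable from the SSH server
ssh -L 8080:intranet.example.com:80 user@ssh-gateway

# Or open a local SOCKS proxy on port 1080 and point applications at it
ssh -D 1080 user@ssh-gateway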

sshuttle

sshuttle is not exactly a VPN, and not exactly port forwarding. It’s kind of both, and kind of neither.

It’s like a VPN, since it can forward every port on an entire network, not just ports you specify. Conveniently, it lets you use the “real” IP addresses of each host rather than faking port numbers on localhost.

On the other hand, the way it works is more like ssh port forwarding than a VPN. Normally, a VPN forwards your data one packet at a time, and doesn't care about individual connections; i.e., it's "stateless" with respect to the traffic. sshuttle is the opposite of stateless; it tracks every single connection.

Installation

 sudo pip install sshuttle
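On Debian/Ubuntu-based systems sshuttle is also available as a distribution package, so this should work as well:

 sudo apt install sshuttle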

Example

$ sshuttle --dns -v -r <remote-host> 0/0

[Image: ssh-tunnel]

* This will forward all connections, including DNS requests, through the encrypted tunnel to the remote host.

Usage
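As a sketch, two common invocations (user, host, and subnet values are illustrative):

# Tunnel only one remote subnet through the SSH host
sshuttle -r user@remote-host 192.168.10.0/24

# Tunnel all IPv4 traffic plus DNS lookups, as in the example above
sshuttle --dns -v -r user@remote-host 0/0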

Shell script wrapper function for sending messages through Pushover

Pushover makes it easy to get real-time notifications on your Android, iPhone, iPad, and Desktop (Android Wear and Apple Watch, too!)
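A minimal sketch of such a wrapper, assuming Pushover's standard /1/messages.json endpoint (the token, user key, and default title are placeholders):

pushover() {
    # Send a Pushover notification. Replace the token and user key with your own.
    local message="$1"
    local title="${2:-My script}"
    curl -s \
        --form-string "token=APP_TOKEN_HERE" \
        --form-string "user=USER_KEY_HERE" \
        --form-string "title=${title}" \
        --form-string "message=${message}" \
        https://api.pushover.net/1/messages.json > /dev/null
}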

You can use this shell function anywhere in your script.

Example:
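pushover "Nightly backup finished on $(hostname)"

(Any string works as the first argument; the optional second argument overrides the default title.)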

Note: you need to update the API token, user key, and title in the function above.

Locking your bash script against parallel execution

Sometimes there's a need to ensure only one copy of a script runs, i.e., to prevent two or more copies from running simultaneously. Imagine an important cronjob which will fail or corrupt data if two copies of the called program were to run at the same time. To prevent this, a form of mutual exclusion (mutex) lock is needed.

The basic procedure is simple: at startup, the script checks whether a specific condition (the lock) is present; if it is, the script is locked and doesn't start.

This article describes locking with common UNIX® tools.

Method 1

Set the noclobber shell option (set -C). This causes output redirection to fail if the file the redirection points to already exists, whichever open() variant is used under the hood.
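A minimal sketch of this approach (lock file path and messages are illustrative):

#!/bin/bash
lockfile=/tmp/myscript.lock

# With noclobber set, the redirection fails if the lock file already exists,
# so only one process can create it.
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
    trap 'rm -f "$lockfile"' EXIT   # release the lock when the script exits
    echo "Lock acquired (PID $$), doing work..."
    # ... critical section ...
else
    echo "Another instance is already running." >&2
    exit 1
fi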

 

Method 2

A simple way to get such an atomic "check and create" step is to create a lock directory with the mkdir command. It will:

* create a given directory only if it does not already exist, and set a successful exit code
* set an unsuccessful exit code if an error occurs – for example, if the specified directory already exists

With mkdir, then, we have our two steps in one simple operation. A (very!) simple locking code might look like this:
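(The snippet below is a sketch; the lock directory path and messages are illustrative.)

#!/bin/bash
lockdir=/tmp/myscript.lock

if mkdir "$lockdir" 2>/dev/null; then
    trap 'rmdir "$lockdir"' EXIT   # remove the lock directory when the script exits
    echo "Lock acquired, continuing."
else
    echo "Another instance is already running – exiting." >&2
    exit 1
fi

# ... the important work happens here ...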

In case mkdir reports an error, the script will exit at this point – the MUTEX did its job!

WSL vs WSL 2 – performance

WSL 2 is a new version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its primary goals are to increase file system performance, as well as to add full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer's hardware, but still provides the same user experience as in WSL 1 (the current widely available version). Individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, can be upgraded or downgraded at any time, and you can run WSL 1 and WSL 2 distros side by side. WSL 2 uses an entirely new architecture that uses a real Linux kernel.
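For example, you can check which architecture version each installed distro uses (first command) and convert one to WSL 2 (second command) from PowerShell or a command prompt; the distro name is just an example:

wsl --list --verbose
wsl --set-version Ubuntu-18.04 2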

It’s a major reworking of the original WSL concept, moving away from translating Linux system calls to Windows to shipping a complete Linux kernel that runs alongside Windows’ own kernel.

The reasons for doing this are many, but the main one is simple: it's impossible for an emulator that ships twice a year to keep up with the changes in the Linux kernel, changes that Linux binaries depend on. If Windows is to support developers building Linux apps for the cloud, then it needs to be more than consistent; it needs to be compatible.

 

Linux kernel in WSL 2

The Linux kernel in WSL 2 is built in-house from the latest stable branch, based on the source available at kernel.org. This kernel has been specially tuned for WSL 2. It has been optimized for size and performance to give an amazing Linux experience on Windows, and it will be serviced through Windows updates, which means you will get the latest security fixes and kernel improvements without needing to manage it yourself.

Increased file IO performance

File-intensive operations like git clone, npm install, apt update, apt upgrade, and more will all be noticeably faster. The actual speed increase will depend on which app you're running and how it interacts with the file system. Initial versions of WSL 2 run up to 20x faster compared to WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects.

Sockets performance benchmarks

WSL 1

[Image: WSL 1 sockets benchmark results]

WSL 2

[Image: WSL 2 sockets benchmark results]

The Ubuntu 18.04 LTS WSL instance with its default packages was used for testing. In addition to comparing the WSL 1 and WSL 2 performance of Ubuntu 18.04, Ubuntu 18.04.2 LTS was also tested bare metal on the same system, to show the raw performance of Ubuntu on the Intel desktop used for testing.

Full System Call Compatibility

Linux binaries use system calls to perform many functions such as accessing files, requesting memory, creating processes, and more. In WSL 1 we created a translation layer that interprets many of these system calls and allows them to work on the Windows NT kernel. However, it’s challenging to implement all of these system calls, resulting in some apps being unable to run in WSL 1. Now that WSL 2 includes its own Linux kernel it has full system call compatibility. This introduces a whole new set of apps that you can run inside of WSL. Some exciting examples are the Linux version of Docker, as well as FUSE!

Using WSL 2 means you can also get the most recent improvements to the Linux kernel much faster than in WSL 1, as we can simply update the WSL 2 kernel rather than needing to reimplement the changes ourselves.

WSL 2 will be a much more powerful platform for you to run your Linux apps on and will empower you to do more with a Linux environment on Windows.

 

Openvas installation in CentOS 7

What is Openvas?

OpenVAS (Open Vulnerability Assessment System, originally known as GNessUs) is a software framework of several services and tools offering vulnerability scanning and vulnerability management.

All OpenVAS products are free software, and most components are licensed under the GNU General Public License (GPL). Plugins for OpenVAS are written in the Nessus Attack Scripting Language, NASL.

The primary reason to use this type of scan is to perform comprehensive security testing of an IP address. It will initially perform a port scan of the IP address to find open services. Once listening services are discovered, they are tested for known vulnerabilities and misconfigurations using a large database (more than 53,000 NVT checks). The results are then compiled into a report with detailed information about each vulnerability and the notable issues discovered.

Once you receive the results of the tests, you will need to check each finding for relevance and possible false positives. Any confirmed vulnerabilities should be remediated to ensure your systems are not at risk.

Vulnerability scans performed from externally hosted servers give you the same perspective as an attacker. This has the advantage of understanding exactly what is exposed on external-facing services.

Step 1: Disable SELinux

sed -i 's/=enforcing/=disabled/' /etc/selinux/config

and reboot the machine.
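If you want SELinux out of the way for the current session too (without waiting for the reboot), you can additionally switch it to permissive mode:

setenforce 0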

Step 2:  Install dependencies

yum -y install wget rsync curl net-tools

Step 3: Install OpenVAS repository

Install the official repository so that OpenVAS works properly when analyzing vulnerabilities:

wget -q -O - http://www.atomicorp.com/installers/atomic | sh

Step 4: Install OpenVAS

yum -y install openvas

Step 5: Run OpenVAS

Once OpenVAS is installed, set it up and start it by executing the following command:

openvas-setup

Once the vulnerability feeds have been downloaded, you will be asked to configure the listening IP address for GSAD (the Greenbone Security Assistant), the web interface used to manage system scans.

Step 6: Configure OpenVAS Connectivity

We go to our browser and enter the IP address of the CentOS 7 server where we have installed OpenVAS, and the OpenVAS dashboard is displayed:

[Image: OpenVAS dashboard]
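If nothing loads on the bare IP address, keep in mind that GSAD is often configured to listen on its default HTTPS port 9392 rather than 443 (this depends on the answers given during openvas-setup), for example:

https://<server-ip>:9392/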

 

Automatic NVT Updates With Cron
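These feed-synchronization jobs belong in root's crontab (edit it with crontab -e as root); the times below are simply staggered so the syncs don't all run at once: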

35 1 * * * /usr/sbin/greenbone-nvt-sync > /dev/null
5 0 * * * /usr/sbin/greenbone-scapdata-sync > /dev/null
5 1 * * * /usr/sbin/greenbone-certdata-sync > /dev/null

 

How to create letsencrypt wildcard certificates

What’s Certbot?

Certbot is a free, open-source software tool for automatically using Let's Encrypt certificates on manually administered websites to enable HTTPS.

Wildcard certificates

Let's Encrypt supports wildcard certificates via ACMEv2 using the DNS-01 challenge.

It is necessary to add a TXT record specified by Certbot to the DNS server.

Caution: Let's Encrypt certificates must be renewed every 90 days, and with this manual method a new TXT record is required at every renewal.

 

Step 1: Run command
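A typical invocation for a wildcard certificate using the manual DNS-01 challenge looks like this (example.com is a placeholder; the --server URL is Let's Encrypt's production ACMEv2 endpoint):

certbot certonly --manual --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  -d example.com -d '*.example.com'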

 

Step 2: Update DNS TXT record
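Certbot prints a validation string; create a TXT record for it at _acme-challenge.<your-domain> and wait for it to propagate before continuing. The record looks roughly like this (the value is illustrative):

_acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM"

You can check propagation with, for example: dig -t txt _acme-challenge.example.com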

 

After a successful verification, Certbot saves the certificate chain and private key under /etc/letsencrypt/live/<your-domain>/.

Setting Up Ansible for AWS with Dynamic Inventory (EC2)

If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in Working with Inventory will not serve your needs. You may also need to track hosts from multiple sources.

Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: Inventory Plugins and inventory scripts.

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the EC2 external inventory script.

You can use this script in one of two ways.

  1. The easiest is to use Ansible's -i command-line option and specify the path to the script after marking it executable; a minimal example follows this list.
  2. The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.
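A minimal sketch of the first option (the script location is illustrative, and the ping module is just used to confirm connectivity):

chmod +x ec2.py
ansible -i ec2.py all -m ping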

You can test the script by itself to make sure your config is correct:
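./ec2.py --list

(This assumes the script sits in the current directory and that your AWS credentials are available to Boto, for example via environment variables or ~/.aws/credentials.)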


After a few moments, you should see your entire EC2 inventory across all regions in JSON.

If you use Boto profiles to manage multiple AWS accounts, you can pass the --profile option with a profile name to the ec2.py script.

You can then run ec2.py --profile prod to get the inventory for the prod account, although this option is not supported by ansible-playbook. You can also use the AWS_PROFILE variable – for example:
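AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml

(Here prod is the Boto profile name and myplaybook.yml is a placeholder for your own playbook.)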

 

The ec2.py script and its ec2.ini configuration file ship with Ansible; in the Ansible source tree they live under contrib/inventory/.