Python Matplotlib Library with Examples

What Is Python Matplotlib?

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+.

Pyplot is a Matplotlib module which provides a MATLAB-like interface. Matplotlib is designed to be as usable as MATLAB, with the ability to use Python and the advantage of being free and open-source. matplotlib.pyplot is a plotting library used for 2D graphics in the Python programming language. It can be used in Python scripts, the Python shell, web application servers, and other graphical user interface toolkits.

There are several toolkits available that extend Matplotlib's functionality.

  • Basemap: It is a map plotting toolkit with various map projections, coastlines, and political boundaries.
  • Cartopy: It is a mapping library featuring object-oriented map projection definitions, and arbitrary point, line, polygon and image transformation capabilities.
  • Excel tools: Matplotlib provides utilities for exchanging data with Microsoft Excel.
  • Natgrid: It is an interface to the “natgrid” library for gridding irregularly spaced data.
  • GTK tools: mpl_toolkits.gtktools provides some utilities for working with GTK. This toolkit ships with matplotlib, but requires pygtk.
  • Qt interface
  • Mplot3d: The mplot3d toolkit adds simple 3D plotting capabilities to matplotlib by supplying an axes object that can create a 2D projection of a 3D scene.
  • matplotlib2tikz: export to Pgfplots for smooth integration into LaTeX documents.

Types of Plots
There are various plots which can be created using python Matplotlib. Some of them are listed below:

  • Bar Graph
  • Histogram
  • Scatter Plot
  • Line Plot
  • 3D plot
  • Area Plot
  • Pie Plot
  • Image Plot

We will demonstrate some of them in detail.

But before that, let me show you an elementary piece of python matplotlib code to generate a simple graph.
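A minimal sketch (the data points are illustrative):

from matplotlib import pyplot as plt
plt.plot([1, 2, 3], [4, 9, 6])   # x values, y values
plt.show()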

So, with three lines of code, you can generate a basic graph using python matplotlib.

Let us see how we can add a title and labels to our graph created by the python matplotlib library to bring in more meaning to it. Consider the below example:
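A sketch along those lines (the values are illustrative):

from matplotlib import pyplot as plt

x = [5, 8, 10]
y = [12, 16, 6]
plt.plot(x, y)
plt.title('Info')       # graph title
plt.ylabel('Y axis')    # label for the y-axis
plt.xlabel('X axis')    # label for the x-axis
plt.show()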

You can even try many styling techniques to create a better graph. What if you want to change the width or color of a particular line, or add some grid lines? That is where styling comes in!

The style package adds support for easy-to-switch plotting “styles” with the same parameters as a matplotlibrc file.

There are a number of pre-defined styles provided by matplotlib. For example, there’s a pre-defined style called “ggplot”, which emulates the aesthetics of ggplot (a popular plotting package for R). To use this style, just add:
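plt.style.use('ggplot')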

To list all available styles, use:
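print(plt.style.available)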

So, let me show you how to add style to a graph using python matplotlib. First, you need to import the style package from the python matplotlib library and then use styling functions as shown in the below code:
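A sketch of a styled plot (the data, colors and labels are illustrative):

from matplotlib import pyplot as plt
from matplotlib import style

style.use('ggplot')
x = [5, 8, 10]
y = [12, 16, 6]
x2 = [6, 9, 11]
y2 = [6, 15, 7]
plt.plot(x, y, 'g', label='line one', linewidth=5)
plt.plot(x2, y2, 'c', label='line two', linewidth=5)
plt.title('Epic Info')
plt.ylabel('Y axis')
plt.xlabel('X axis')
plt.legend()
plt.grid(True, color='k')   # draw grid lines
plt.show()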

Now, we will understand the different kinds of plots. Let’s start with the bar graph!

Matplotlib: Bar Graph
A bar graph uses bars to compare data among different categories. It is well suited when you want to measure changes over a period of time. It can be plotted vertically or horizontally. The vital thing to keep in mind is that the longer the bar, the greater the value. Now, let us practically implement it using python matplotlib.
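A sketch of such a comparison (the distances are illustrative):

from matplotlib import pyplot as plt

days = [0.25, 1.25, 2.25, 3.25, 4.25]
bmw_distance = [50, 40, 70, 80, 20]
audi_distance = [80, 20, 20, 50, 60]

plt.bar(days, bmw_distance, label='BMW', width=0.5)
plt.bar([d + 0.5 for d in days], audi_distance, label='Audi', color='r', width=0.5)
plt.legend()
plt.xlabel('Days')
plt.ylabel('Distance (kms)')
plt.title('Information')
plt.show()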

When I run this code, it generates a figure like below:


In the above plot, I have displayed a comparison between the distance covered by two cars, BMW and Audi, over a period of 5 days. Next, let us move on to another kind of plot using python matplotlib – the histogram.

Matplotlib – Histogram
Let me first tell you the difference between a bar graph and a histogram. Histograms are used to show a graphical representation of the distribution of numerical data whereas a bar chart is used to compare different entities.

It is an estimate of the probability distribution of a continuous variable (quantitative variable) and was first introduced by Karl Pearson. It is a kind of bar graph.

To construct a histogram, the first step is to “bin” the range of values — that is, divide the entire range of values into a series of intervals — and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent and are often (but are not required to be) of equal size.

Basically, histograms are used to represent data given in the form of groups – for example, when you have arrays or a very long list. The X-axis shows the bin ranges while the Y-axis shows the frequency. So, if you want to represent the age-wise population in the form of a graph, a histogram suits well, as it tells you how many values fall in each group range, or bin in histogram terms.

In the below code, I have created the bins in intervals of 10, which means the first bin contains elements from 0 to 9, the next from 10 to 19, and so on.
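A sketch of the histogram code (the ages are illustrative):

import matplotlib.pyplot as plt

population_age = [22, 55, 62, 45, 21, 22, 34, 42, 42, 4, 2, 102, 95, 85, 55, 110,
                  120, 70, 65, 55, 111, 115, 80, 75, 65, 54, 44, 43, 42, 48]
bins = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
plt.hist(population_age, bins, histtype='bar', rwidth=0.8)
plt.xlabel('age groups')
plt.ylabel('Number of people')
plt.title('Histogram')
plt.show()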

When I run this code, it generates a figure like below:

As you can see in the above plot, the X-axis shows the age bins and the Y-axis shows how many people fall into each bin. Our biggest age group is between 40 and 50.

Matplotlib: Scatter Plot
A scatter plot is a type of plot that shows the data as a collection of points. The position of a point depends on its two-dimensional value, where each value is a position on either the horizontal or vertical dimension. Usually, we need scatter plots to compare variables, for example, to see how much one variable is affected by another and build a relation out of it.
Consider the below example:
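A sketch of such a scatter plot (the data points and labels are illustrative):

import matplotlib.pyplot as plt

x = [1, 1.5, 2, 2.5, 3, 3.5, 3.6]
y = [7.5, 8, 8.5, 9, 9.5, 10, 10.5]
x1 = [8, 8.5, 9, 9.5, 10, 10.5, 11]
y1 = [3, 3.5, 3.7, 4, 4.5, 5, 5.2]

plt.scatter(x, y, label='high income low salary', color='r')
plt.scatter(x1, y1, label='low income high salary', color='b')
plt.xlabel('salary * 1000')
plt.ylabel('income * 1000')
plt.title('Scatter Plot')
plt.legend()
plt.show()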

As you can see in the above graph, I have plotted two scatter plots based on the inputs specified in the above code. The data is displayed as two collections of points, ‘high income, low salary’ and ‘low income, high salary.’

Scatter plot with groups
Data can be classified into several groups. The code below demonstrates:
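A sketch using three randomly generated groups (the group names and data are illustrative):

import numpy as np
import matplotlib.pyplot as plt

# create three groups of random data
N = 60
g1 = (0.6 + 0.6 * np.random.rand(N), np.random.rand(N))
g2 = (0.4 + 0.3 * np.random.rand(N), 0.5 * np.random.rand(N))
g3 = (0.3 * np.random.rand(N), 0.3 * np.random.rand(N))

data = (g1, g2, g3)
colors = ('red', 'green', 'blue')
groups = ('coffee', 'tea', 'water')

fig = plt.figure()
ax = fig.add_subplot(111)          # 1x1 grid, first subplot
for (x, y), color, group in zip(data, colors, groups):
    ax.scatter(x, y, c=color, label=group)
plt.title('Scatter plot with groups')
plt.legend()
plt.show()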

The purpose of using “plt.figure()” is to create a figure object. It is the top-level container for all plot elements.

The whole figure is regarded as the figure object. It is necessary to explicitly use “plt.figure()” when we want to tweak the size of the figure or add multiple Axes objects to a single figure.

fig.add_subplot() is used to add an Axes to the figure at a given position in a grid of subplots.
For example, “111” means “1×1 grid, first subplot” and “234” means “2×3 grid, 4th subplot”.

You can easily understand this from the following example:
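A small sketch of the grid numbering:

import matplotlib.pyplot as plt

fig = plt.figure()
fig.add_subplot(231)   # 2x3 grid, 1st subplot (top-left)
fig.add_subplot(234)   # 2x3 grid, 4th subplot (bottom-left)
fig.add_subplot(236)   # 2x3 grid, 6th subplot (bottom-right)
plt.show()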

Next, let us understand the area plot or you can also say Stack plot using python matplotlib.

Matplotlib: Area Plot
Area plots are pretty much similar to the line plot. They are also known as stack plots. These plots can be used to display the evolution of the value of several groups on the same graphic. The values of each group are displayed on top of each other. It allows checking on the same figure the evolution of both the total of a numeric variable and the importance of each group.

A line chart forms the basis of an area plot, where the region between the axis and the line is represented by colors.
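A sketch of a stack plot for the bikes scenario discussed below (the bike names and distances are illustrative):

import matplotlib.pyplot as plt

days = [1, 2, 3, 4, 5]
bike1 = [50, 40, 70, 80, 20]
bike2 = [80, 20, 20, 50, 60]
bike3 = [70, 20, 60, 40, 60]
bike4 = [80, 25, 20, 50, 60]

plt.stackplot(days, bike1, bike2, bike3, bike4,
              labels=['Bike 1', 'Bike 2', 'Bike 3', 'Bike 4'])
plt.xlabel('Days')
plt.ylabel('Distance (kms)')
plt.title('Stack Plot')
plt.legend()
plt.show()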

The above-represented graph shows how an area plot can be plotted for the present scenario. Each shaded area in the graph shows a particular bike with the frequency variations denoting the distance covered by the bike on different days. Next, let us move to our last yet most frequently used plot – Pie chart.

Matplotlib: Pie Chart
In a pie plot, statistical data is represented in a circular graph where the circle is divided into portions, i.e. slices of pie, each denoting a particular category; each portion is proportional to the corresponding value in the data. This sort of plot is mainly used in mass media and business.
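A sketch of such a pie chart (the bike names and values are illustrative; they sum to 24 to match the description below):

import matplotlib.pyplot as plt

bikes = ['Honda', 'Yamaha', 'Suzuki', 'KTM']   # illustrative bike names
hours = [7, 2, 2, 13]                          # values sum to 24

plt.pie(hours, labels=bikes, autopct='%1.1f%%', startangle=90)
plt.title('Pie Plot')
plt.show()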

In the above-represented pie plot, the bikes scenario is illustrated. I have divided the circle into 4 sectors; each slice represents a particular bike and the percentage of distance traveled by it. Now, if you have noticed, these slices add up to 24 hrs, but the calculation of the pie slices is done automatically for you. In this way, pie charts are really useful, as you don't have to be the one who calculates the percentage of each slice of the pie.

Matplotlib: 3D Plot
Plotting data along the x, y, and z axes to enhance its display is what 3-dimensional plotting is about. 3D plotting is an advanced plotting technique that gives us a better view of the data along the three axes of the graph.

Line Plot 3D

In the above-represented 3D graph, a line graph is illustrated in a 3-dimensional manner. We make use of a special toolkit (mplot3d) to plot 3D graphs, as shown below.
Syntax for plotting 3D graphs:
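A sketch (the helix data is illustrative):

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')   # 3D axes via the projection keyword

z = np.linspace(0, 15, 1000)
x = np.sin(z)
y = np.cos(z)
ax.plot3D(x, y, z, 'green')                  # 3D line plot
plt.show()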

The Axes3D import is mainly used to create an axis by making use of the projection='3d' keyword. This enables a 3-dimensional view of any data that is plotted with the above-mentioned code.

Surface Plot 3D

By default, the surface will be colored in shades of a solid color, but it also supports color mapping by supplying the cmap argument.

The rstride and cstride kwargs set the stride used to sample the input data to generate the graph. If 1k by 1k arrays are passed in, the default values for the strides will result in a 100×100 grid being plotted. They default to 10. A ValueError is raised if both the stride and count kwargs are provided.
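A sketch of a surface plot using these arguments (the surface function is illustrative):

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

x = np.arange(-5, 5, 0.25)
y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))

# rstride/cstride control how densely the input arrays are sampled;
# cmap switches from a single solid color to a color map
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis')
plt.show()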

Matplotlib: Image Plot
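An image plot displays a 2D array of values as an image with plt.imshow(). A minimal sketch (the array is random, purely illustrative):

import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(20, 20)    # illustrative 20x20 array of values
plt.imshow(img, cmap='gray')    # display the array as an image
plt.colorbar()
plt.show()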

Matplotlib: Working With Multiple Plots
I have discussed multiple types of plots in python matplotlib such as bar plot, scatter plot, pie plot, area plot, etc. Now, let me show you how to handle multiple plots.
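A sketch using plt.subplot() to place two plots in one figure (the function being plotted is illustrative):

import numpy as np
import matplotlib.pyplot as plt

def f(t):
    return np.exp(-t) * np.cos(2 * np.pi * t)

t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)

plt.subplot(211)                       # 2 rows, 1 column, 1st plot
plt.plot(t1, f(t1), 'bo', t2, f(t2))
plt.subplot(212)                       # 2 rows, 1 column, 2nd plot
plt.plot(t2, np.cos(2 * np.pi * t2))
plt.show()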

Data Analysis with Pandas & Python

What is Data Analysis?
Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. In today’s business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.
Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python.
In this article, I have used Pandas to demonstrate data analysis.
Pandas mainly has three data structures: Series, DataFrame, and Panel.

Installation
The easiest way to install pandas is to use pip:
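pip install pandas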

or, Download it from here.

  • pandas Series

A pandas Series can be used for a one-dimensional labeled array.
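For example (the values and labels are illustrative):

import pandas as pd

a = pd.Series([1, 2, 3, 4], index=['test1', 'test2', 'test3', 'test4'])
print(a)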

Labels can be accessed using the index attribute:
print(a.index)

You can use array indexing or labels to access data in the series.
print(a[1])
print(a['test4'])

You can also apply mathematical operations on pandas series.
b = a * 2
c = a ** 1.5
print(b)
print(c)

You can even create a series of heterogeneous data.
s = pd.Series(['test1', 1.2, 3, 'test2'], index=['test3', 'test4', 2, '4.3'])

print(s)

  • pandas DataFrame

pandas DataFrame is a two-dimensional array with heterogeneous data, i.e., data is aligned in a tabular fashion in rows and columns.
Structure
Let us assume that we are creating a DataFrame with a sales team's data.

Name Age Gender Rating
Steve 32 Male 3.45
Lia 28 Female 4.6
Vin 45 Male 3.9
Katie 38 Female 2

You can think of it as an SQL table or a spreadsheet data representation.
The table represents the data of a sales team of an organization with their overall performance rating. The data is represented in rows and columns. Each column represents an attribute and each row represents a person.
The data types of the four columns are as follows −

Column Type
Name String
Age Integer
Gender String
Rating Float

Key Points
• Heterogeneous data
• Size Mutable
• Data Mutable

A pandas DataFrame can be created using the following constructor −
pandas.DataFrame( data, index, columns, dtype, copy)

•  data
data takes various forms like ndarray, series, map, lists, dict, constants and also another DataFrame.
•  index
For the row labels, the index to be used for the resulting frame. Optional; defaults to np.arange(n) if no index is passed.
•  columns
For column labels, the optional default is np.arange(n). This is only true if no index is passed.
•  dtype
The data type of each column.
•  copy
Copies the data if set to True; the default is False.

There are many methods to create DataFrames.
• Lists
• dict
• Series
• Numpy ndarrays
• Another DataFrame

Creating DataFrame from the dictionary of Series
The following method can be used to create DataFrames from a dictionary of pandas series.
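A sketch (the column names, labels and values are illustrative):

import pandas as pd

d = {'column1': pd.Series([100., 99., 98.], index=['test1', 'test2', 'test3']),
     'column2': pd.Series([1., 2., 3., 4.], index=['test1', 'test2', 'test3', 'test4'])}
df = pd.DataFrame(d)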

print(df)

print(df.index)

print(df.columns)

Creating DataFrame from list of dictionaries
l = [{'orange': 32, 'apple': 42}, {'banana': 25, 'carrot': 44, 'apple': 34}]
df = pd.DataFrame(l, index=['test1', 'test2'])

print(df)

You might have noticed that we got a DataFrame with NaN values in it. This is because we didn't provide the data for that particular row and column.

Creating DataFrame from Text/CSV files
The pandas tool comes in handy when you want to load data from a CSV or a text file. It has built-in functions to do this for us.

df = pd.read_csv('happiness.csv')

Yes, we created a DataFrame from a CSV file. This dataset contains the outcome of the European quality of life survey. This dataset is available here. Now that we have stored the DataFrame in df, we want to see what's inside. First, we will see the size of the DataFrame.

print(df.shape)

It has 105 Rows and 4 Columns. Instead of printing out all the data, we will see the first 10 rows.
df.head(10)

There are many more methods to create a DataFrame. But now we will look at basic operations on DataFrames.

Operations on DataFrame
We’ll recall the DataFrame we made earlier.

print(df)

Now we want to create a new column from the current columns. Let's see how it is done.
df['column3'] = (2 * df['column1'] + 3 * df['column2']) / 5

We have created a new column column3 from column1 and column2. We'll create one more using a boolean condition.
df['flag'] = df['column1'] > 99.5

We can also remove columns.
column3 = df.pop('column3')

print(column3)

print(df)

Descriptive Statistics using pandas
It's very easy to view descriptive statistics of a dataset using pandas. We are going to use biomass data collected from the source below. Let's load the data first.

url = 'https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/biomass.csv'
df = pd.read_csv(url)
df.head()

We are not interested in the unnamed column. So, let’s delete that first. Then we’ll see the statistics with one line of code.
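A sketch of those two steps (the exact name of the unnamed column may differ in the CSV):

df = df.drop(columns=['Unnamed: 0'])   # the column name is an assumption; check df.columns
print(df.describe())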

It's as simple as that. We can see all the statistics: count, mean, standard deviation, and more. Now we are going to find some other metrics which are not available in the describe() summary.

Mean :
print(df.mean())

Min and Max
print(df.min())

print(df.max())

Pairwise Correlation
df.corr()

Data Cleaning
We need to clean our data. Our data might contain missing values, NaN values, outliers, etc. We may need to remove or replace that data; otherwise, it might not make any sense.
We can find null values using the following method.

print(df.isnull().any())

We have to remove these null values. This can be done by the method shown below.

newdf = df.dropna()

print(newdf.shape)


pandas.Panel()
A panel is a 3D container of data. The term Panel data is derived from econometrics and is partially responsible for the name pandas − pan(el)-da(ta)-s.
The names for the 3 axes are intended to give some semantic meaning to describing operations involving panel data. They are −
• items − axis 0, each item corresponds to a DataFrame contained inside.
• major_axis − axis 1, it is the index (rows) of each of the DataFrames.
• minor_axis − axis 2, it is the columns of each of the DataFrames.

A Panel can be created using the following constructor −
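pandas.Panel(data, items, major_axis, minor_axis, dtype, copy)

(Note that Panel has been deprecated and removed in recent pandas releases, so the Panel examples below need an older pandas version.)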
The parameters of the constructor are as follows −
• data – Data takes various forms like ndarray, series, map, lists, dict, constants and also another DataFrame
• items – axis=0
• major_axis – axis=1
• minor_axis – axis=2
• dtype – the Data type of each column
• copy – Copy data. Default, false

A Panel can be created in multiple ways, such as −
• From ndarrays
• From dict of DataFrames
• From 3D ndarray
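For example, from a 3D ndarray of random (illustrative) data:

import numpy as np
import pandas as pd

data = np.random.rand(2, 4, 5)   # 2 items, 4 rows (major_axis), 5 columns (minor_axis)
p = pd.Panel(data)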

print(p)

Note − Observe the dimensions of the above panel; the three axes are the items, major_axis, and minor_axis.

From dict of DataFrame Objects
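A sketch (the item names and shapes are illustrative; Item1 is 4×3 to match the description below):

data = {'Item1': pd.DataFrame(np.random.randn(4, 3)),
        'Item2': pd.DataFrame(np.random.randn(4, 2))}
p = pd.Panel(data)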

print(p)

Selecting the Data from Panel
Select the data from the panel using −
• Items
• Major_axis
• Minor_axis

Using Items

print(p['Item1'])

We have two items, and we retrieved item1. The result is a DataFrame with 4 rows and 3 columns, which are the Major_axis and Minor_axis dimensions.

Using major_axis
Data can be accessed using the method panel.major_xs(label).
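For example, using the default integer labels of the major axis:

print(p.major_xs(1))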

Using minor_axis
Data can be accessed using the method panel.minor_xs(label).

print(p.minor_xs(1))

 

Locking your bash script against parallel execution

Sometimes there's a need to ensure only one copy of a script runs, i.e., to prevent two or more copies running simultaneously. Imagine an important cronjob doing something very important, which will fail or corrupt data if two copies of the called program run at the same time. To prevent this, a form of MUTEX (mutual exclusion) lock is needed.

The basic procedure is simple: the script checks if a specific condition (the lock) is present at startup; if yes, it's locked and the script doesn't start.

This article describes locking with common UNIX® tools.

Method 1

Setting the noclobber shell option (set -C) will cause redirection to fail if the file the redirection points to already exists (using diverse open() methods). A simple example is shown below.
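A minimal sketch of this approach (the lock file path is illustrative):

#!/bin/bash
lockfile=/tmp/myscript.lock           # illustrative lock file path

if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
    trap 'rm -f "$lockfile"' EXIT     # release the lock when the script exits
    # ... critical section: do the real work here ...
else
    echo "Failed to acquire lock: held by PID $(cat "$lockfile")" >&2
    exit 1
fi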

 

Method 2

A simple way to get that is to create a lock directory – with the mkdir command. It will:

  • create a given directory only if it does not exist, and set a successful exit code
  • set an unsuccessful exit code if an error occurs – for example, if the directory specified already exists

With mkdir it seems we have our two steps in one simple operation. A (very!) simple locking code might look like this:
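A sketch, with an illustrative lock directory path:

#!/bin/bash
lockdir=/tmp/myscript.lock            # illustrative lock directory

if mkdir "$lockdir" 2>/dev/null; then
    echo "Locking succeeded" >&2
    trap 'rmdir "$lockdir"' EXIT      # release the lock on exit
    # ... critical section ...
else
    echo "Lock failed – another instance is running" >&2
    exit 1
fi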

In case mkdir reports an error, the script will exit at this point – the MUTEX did its job!

React Quickstart Tutorial

React is a JavaScript library created by Facebook.
ReactJS is a tool for building UI components.

Adding React to an HTML Page

This quickstart tutorial will add React to a page like this:

Example
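A sketch of such a page, using React 16 development builds and Babel standalone from a CDN (the script URLs and versions are assumptions, not the only option):

<!DOCTYPE html>
<html>
  <body>

    <div id="root"></div>

    <script src="https://unpkg.com/react@16/umd/react.development.js" crossorigin></script>
    <script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js" crossorigin></script>
    <script src="https://unpkg.com/babel-standalone@6/babel.min.js"></script>

    <script type="text/babel">
      ReactDOM.render(<h1>Hello React!</h1>, document.getElementById('root'));
    </script>

  </body>
</html>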


What is Babel?

Babel is a JavaScript compiler that can translate markup or programming languages into JavaScript.

With Babel, you can use the newest features of JavaScript (ES6 – ECMAScript 2015).

Babel is available for different conversions. React uses Babel to convert JSX into JavaScript.

Please note that <script type="text/babel"> is needed for using Babel in the browser.


What is JSX?

JSX stands for JavaScript XML.
JSX is an XML/HTML like extension to JavaScript.

Example
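For instance (the variable name is illustrative):

const myelement = <h1>I Love JSX!</h1>;
ReactDOM.render(myelement, document.getElementById('root'));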

As you can see above, JSX is neither plain JavaScript nor HTML.

JSX is a XML syntax extension to JavaScript that also comes with the full power of ES6 (ECMAScript 2015).

Just like HTML, JSX tags can have a tag name, attributes, and children. If an attribute is wrapped in curly braces, the value is a JavaScript expression.

Note that JSX does not use quotes around the HTML text string.

React DOM Render

The method ReactDOM.render() is used to render (display) HTML elements:

Example
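For example:

ReactDOM.render(
  <h1>Hello, React!</h1>,
  document.getElementById('root')
);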


JSX Expressions

Expressions can be used in JSX by wrapping them in curly {} braces.

Example
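For example (the name variable is illustrative):

const name = 'John';
const element = <h1>Hello {name}, how are you?</h1>;
ReactDOM.render(element, document.getElementById('root'));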


React Elements

React applications are usually built around a single HTML element.

React developers often call this the root node (root element), for example an empty <div id="root"></div> in the HTML page.

React elements look like this:
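For instance (the variable name is illustrative):

const myfirstelement = <h1>Hello React!</h1>;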

Elements are rendered (displayed) with the ReactDOM.render() method:
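For example:

ReactDOM.render(myfirstelement, document.getElementById('root'));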

React elements are immutable. They cannot be changed.

The only way to change a React element is to render a new element every time:

Example
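A sketch using the classic ticking-clock pattern:

function tick() {
  const element = <h1>{new Date().toLocaleTimeString()}</h1>;
  ReactDOM.render(element, document.getElementById('root'));
}

setInterval(tick, 1000);   // render a fresh element every second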


React Components

React components are JavaScript functions.

This example creates a React component named “Welcome”:

Example
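A sketch:

function Welcome() {
  return <h1>Hello React!</h1>;
}

ReactDOM.render(<Welcome />, document.getElementById('root'));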

React can also use ES6 classes to create components.

This example creates a React component named Welcome with a render method:

Example
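A sketch:

class Welcome extends React.Component {
  render() {
    return <h1>Hello React!</h1>;
  }
}

ReactDOM.render(<Welcome />, document.getElementById('root'));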


React Component Properties

This example creates a React component named “Welcome” with property arguments:

Example
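A sketch (the property name is illustrative):

function Welcome(props) {
  return <h1>Hello {props.name}!</h1>;
}

ReactDOM.render(<Welcome name="John" />, document.getElementById('root'));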

React can also use ES6 classes to create components.

This example also creates a React component named “Welcome” with property arguments:

Example
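A sketch (the property name is illustrative):

class Welcome extends React.Component {
  render() {
    return <h1>Hello {this.props.name}!</h1>;
  }
}

ReactDOM.render(<Welcome name="John" />, document.getElementById('root'));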


 JSX Compiler

The examples on this page compile JSX in the browser.

For production code, the compilation should be done separately.


Create React Application

Facebook has created Create React App, with everything you need to build a React app.

It is a development server that uses Webpack to compile React, JSX, and ES6, and to auto-prefix CSS files.

The Create React App uses ESLint to test and warn about mistakes in the code.

To create a React app with Create React App, run the following command in your terminal:

Example
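For example (the application name is illustrative):

npx create-react-app myfirstreact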

Make sure you have Node.js 5.2 or higher. Otherwise you must install npx:

Example
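One way to get it, assuming npm itself is installed:

npm install -g npx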

Start one folder up from where you want your application to stay:

Example

Success Result:

Apple iOS App Provisioning

A distribution certificate identifies your team/organization within a distribution App Provisioning profile and allows you to submit your app to the Apple App Store.

The workflow for developing and distributing iOS apps can be complex and difficult to understand. This article explains the steps needed to manage certificates and provisioning profiles and assists developers who are starting to develop in-house iOS apps.

A provisioning profile is a collection of digital entities that uniquely ties developers and devices to an authorized iPhone Development Team and enables a device to be used for testing.

The following steps describe the high level activities required to manage and distribute apps.

  1. Manage Certificates for development and production
    1. Create certificate for Development
    2. Create certificate for Production
  2. Register App ID and Device ID
    1. Add Device IDs
    2. Register App IDs
  3. Create App Provisioning Profiles for Project
    1. Create Provisioning Profile for Development & Production
    2. Apply AppID to Provisioning Profile
    3. Apply Certificate to App Provisioning Profile
    4. Download Profile and add it in Xcode

Step 1: Login

Go to https://developer.apple.com and click on Account (you must have an Apple Developer account to begin)

1. Click Log In, then select Certificates, Identifiers & Profiles

Step 2: Create Certificate

On the left menu select Certificates

1. Select add button “+” at the top right to create a new Certificate

2. Select “iOS Distribution (App Store and Ad Hoc)” and press Continue

3. Developers will need to generate a Certificate Signing Request (CSR) from their keychain and perform the Request Certificate function. Then select the signing certificate and generate and download the certificate.

4. Click on the downloaded certificate; it will be added to Keychain.

Step 3: Register App ID

  1. From the left menu select Identifiers -> Click on Add new -> Select App IDs
  2. Enter a name, paste your project's Bundle ID, and select the capabilities your app provides
  3. Click on the Register button. Now your App ID is registered.

Step 4: Create App Provisioning Profile

On the left tab under Provisioning Profiles, select Distribution

1. Select add button “+” at the top right to create a new profile

2. Select “App Store” and press Continue


3. Select App ID and press Continue

 


4. Select the Certificate you wish to include in this provisioning profile (the certificate the app was signed with) and click Continue. Next, select the devices you wish to include in the provisioning profile. The certificate is a public/private key-pair, which identifies who developed the app.

 

5. Create a name for your profile and click Generate. You might want to include “Distribution” in the name so you can distinguish this one from testing.

6. Download your profile; clicking it will add it to Xcode.

 

Howto reverse proxy in nginx

Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.

When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol.

1. To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location. For example:
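For example (the upstream address is illustrative):

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}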

 2. This address can be specified as a domain name or an IP address. The address may also include a port:
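For example, proxying to a local server on port 8000:

location ~ \.php {
    proxy_pass http://127.0.0.1:8000;
}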

3. To pass a request to a non-HTTP proxied server, the appropriate **_pass directive should be used:

  • fastcgi_pass passes a request to a FastCGI server
  • uwsgi_pass passes a request to a uwsgi server
  • scgi_pass passes a request to an SCGI server
  • memcached_pass passes a request to a memcached server

4. Passing Request Headers
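By default, NGINX redefines the “Host” and “Connection” header fields in proxied requests. To change these, or to pass additional headers, use the proxy_set_header directive inside a location, for example:

location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}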

 

5. To disable buffering in a specific location, place the proxy_buffering directive in the location with the off parameter, as follows:
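location /some/path/ {
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}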

 

 

Matrix Javascript SDK

Matrix Client-Server r0 SDK for JavaScript. This SDK can be run in a browser or in Node.js.

Quickstart

In a browser

Download either the full or minified version from https://github.com/matrix-org/matrix-js-sdk/releases/latest and add that as a <script> to your page. There will be a global variable matrixcs attached to window through which you can access the SDK. See below for how to include libolm to enable end-to-end-encryption.

Please check the working browser example for more information.

In Node.js

Ensure you have the latest LTS version of Node.js installed.

Using yarn instead of npm is recommended. Please see the Yarn install guide if you do not have it already.

yarn add matrix-js-sdk

See below for how to include libolm to enable end-to-end-encryption. Please check the Node.js terminal app for a more complex example.

To start the client:
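A minimal sketch (the homeserver URL is illustrative):

const sdk = require("matrix-js-sdk");

const client = sdk.createClient({ baseUrl: "https://matrix.org" });
client.startClient({ initialSyncLimit: 10 });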

You can perform a call to /sync to get the current state of the client:
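For example:

client.once("sync", function (state, prevState, res) {
    if (state === "PREPARED") {
        console.log("client is ready to use");
    }
});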

To send a message:
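A sketch (the room ID is a placeholder):

const content = {
    body: "Hello world",
    msgtype: "m.text",
};
client.sendEvent("!roomId:example.org", "m.room.message", content, "", (err, res) => {
    console.log(err);
});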

To listen for message events:
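For example:

client.on("Room.timeline", function (event, room, toStartOfTimeline) {
    if (event.getType() !== "m.room.message") {
        return;   // only handle messages
    }
    console.log(event.getSender(), ":", event.getContent().body);
});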

By default, the matrix-js-sdk client uses the MemoryStore to store events as they are received. For example to iterate through the currently stored timeline for a room:
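A sketch (the room ID is a placeholder):

const room = client.getRoom("!roomId:example.org");
room.timeline.forEach((matrixEvent) => {
    console.log(matrixEvent.event);   // the raw event JSON
});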

What does this SDK do?

This SDK provides a full object model around the Matrix Client-Server API and emits events for incoming data and state changes. Aside from wrapping the HTTP API, it:

  • Handles syncing (via /initialSync and /events)
  • Handles the generation of “friendly” room and member names.
  • Handles historical RoomMember information (e.g. display names).
  • Manages room member state across multiple events (e.g. it handles typing, power levels and membership changes).
  • Exposes high-level objects like Rooms, RoomState, RoomMembers and Users which can be listened to for things like name changes, new messages, membership changes, presence changes, and more.
  • Handle “local echo” of messages sent using the SDK. This means that messages that have just been sent will appear in the timeline as ‘sending’, until it completes. This is beneficial because it prevents there being a gap between hitting the send button and having the “remote echo” arrive.
  • Mark messages which failed to send as not sent.
  • Automatically retry requests to send messages due to network errors.
  • Automatically retry requests to send messages due to rate limiting errors.
  • Handle queueing of messages.
  • Handles pagination.
  • Handle assigning push actions for events.
  • Handles room initial sync on accepting invites.
  • Handles WebRTC calling.

Later versions of the SDK will:

  • Expose a RoomSummary which would be suitable for a recents page.
  • Provide different pluggable storage layers (e.g. local storage, database-backed)

 Usage

 Conventions

 Emitted events

The SDK will emit events using an EventEmitter. It also emits object models (e.g. Rooms, RoomMembers) when they are updated.

Promises and Callbacks

Most of the methods in the SDK are asynchronous: they do not directly return a result, but instead return a Promise which will be fulfilled in the future.

The typical usage is something like:
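// someMethod stands in for any promise-returning SDK method
matrixClient.someMethod(arg1, arg2).then(function (result) {
    // use result here
});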

Alternatively, if you have a Node.js-style callback(err, result) function, you can pass the result of the promise into it with something like:

The main thing to note is that it is an error to discard the result of a promise-returning function, as that will cause exceptions to go unobserved. If you have nothing better to do with the result, just call .done() on it. See http://documentup.com/kriskowal/q/#the-end for more information.

Methods which return a promise show this in their documentation.

Many methods in the SDK support both Node.js-style callbacks and Promises, via an optional callback argument. The callback support is now deprecated: new methods do not include a callback argument, and in the future it may be removed from existing methods.

Examples

This section provides some useful code snippets which demonstrate the core functionality of the SDK. These examples assume the SDK is setup like this:
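For example (the homeserver URL, user ID and access token are placeholders):

const sdk = require("matrix-js-sdk");
const myUserId = "@example:localhost";
const myAccessToken = "QGV4YW1wbGU6bG9jYWxob3N0";
const matrixClient = sdk.createClient({
    baseUrl: "http://localhost:8008",
    accessToken: myAccessToken,
    userId: myUserId,
});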

Automatically join rooms when invited
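A sketch:

matrixClient.on("RoomMember.membership", function (event, member) {
    if (member.membership === "invite" && member.userId === myUserId) {
        matrixClient.joinRoom(member.roomId).then(function () {
            console.log("Auto-joined %s", member.roomId);
        });
    }
});
matrixClient.startClient();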

Print out messages for all rooms

Output:

Print out membership lists whenever they are changed

Output:

API Reference

A hosted reference can be found at http://matrix-org.github.io/matrix-js-sdk/index.html

This SDK uses JSDoc3 style comments. You can manually build and host the API reference from the source files like this:

Then visit http://localhost:8005 to see the API docs.

End-to-end encryption support

The SDK supports end-to-end encryption via the Olm and Megolm protocols, using libolm. It is left up to the application to make libolm available, via the Olm global.

It is also necessary to call matrixClient.initCrypto() after creating a new MatrixClient (but before calling matrixClient.startClient()) to initialise the crypto layer.

If the Olm global is not available, the SDK will show a warning, as shown below; initCrypto() will also fail.

If the crypto layer is not (successfully) initialised, the SDK will continue to work for unencrypted rooms, but it will not support the E2E parts of the Matrix specification.

To provide the Olm library in a browser application:

To provide the Olm library in a node.js application:

  • yarn add https://packages.matrix.org/npm/olm/olm-3.0.0.tgz (replace the URL with the latest version you want to use from https://packages.matrix.org/npm/olm/)
  • global.Olm = require('olm'); before loading matrix-js-sdk.

If you want to package Olm as dependency for your node.js application, you can use yarn add https://packages.matrix.org/npm/olm/olm-3.0.0.tgz. If your application also works without e2e crypto enabled, add --optional to mark it as an optional dependency.

Contributing

This section is for people who want to modify the SDK. If you just want to use this SDK, skip this section.

First, you need to pull in the right build tools:

Building

To build a browser version from scratch when developing:

To constantly do builds when files are modified (using watchify):

To run tests (Jasmine):

To run linting:

WSL vs WSL 2 – performance

WSL 2 is a new version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its primary goals are to increase file system performance, as well as adding full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer’s hardware, but still provides the same user experience as in WSL 1 (the current widely available version). Individual Linux distros can be run either as a WSL 1 distro, or as a WSL 2 distro, can be upgraded or downgraded at any time, and you can run WSL 1 and WSL 2 distros side by side. WSL 2 uses an entirely new architecture that uses a real Linux kernel.

It’s a major reworking of the original WSL concept, moving away from translating Linux system calls to Windows to shipping a complete Linux kernel that runs alongside Windows’ own kernel.

The reasons for doing this are many, but the main one is simple: It’s impossible for an emulator that ships twice a year to keep up with the changes in the Linux kernel, changes that Linux binaries depend on. If Windows is to support developers building Linux apps for the cloud, then it needs to be more than consistent, it needs to be compatible.

 

Linux kernel in WSL 2

The Linux kernel in WSL 2 is built in house from the latest stable branch, based on the source available at kernel.org. This kernel has been specially tuned for WSL 2. It has been optimized for size and performance to give an amazing Linux experience on Windows and will be serviced through Windows updates, which means you will get the latest security fixes and kernel improvements without needing to manage it yourself.

Increased file IO performance

File intensive operations like git clone, npm install, apt update, apt upgrade, and more will all be noticeably faster. The actual speed increase will depend on which app you’re running and how it is interacting with the file system. Initial versions of WSL 2 run up to 20x faster compared to WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects.

Sockets performance benchmarks

[Benchmark charts: socket performance under WSL 1 and under WSL 2]

The Ubuntu 18.04 LTS WSL instance was used for testing with its default packages. In addition to looking at the WSL 1 vs. WSL 2 performance of Ubuntu 18.04, Ubuntu 18.04.2 LTS was also tested bare metal on the same system to gauge the raw performance of Ubuntu on the Intel desktop being tested.

Full System Call Compatibility

Linux binaries use system calls to perform many functions such as accessing files, requesting memory, creating processes, and more. In WSL 1 we created a translation layer that interprets many of these system calls and allows them to work on the Windows NT kernel. However, it’s challenging to implement all of these system calls, resulting in some apps being unable to run in WSL 1. Now that WSL 2 includes its own Linux kernel it has full system call compatibility. This introduces a whole new set of apps that you can run inside of WSL. Some exciting examples are the Linux version of Docker, as well as FUSE!

Using WSL 2 means you can also get the most recent improvements to the Linux kernel much faster than in WSL 1, as we can simply update the WSL 2 kernel rather than needing to reimplement the changes ourselves.

WSL 2 will be a much more powerful platform for you to run your Linux apps on and will empower you to do more with a Linux environment on Windows.

 

Installing openvas in CentOS 7

What is Openvas?

OpenVAS (Open Vulnerability Assessment System, originally known as GNessUs) is a software framework of several services and tools offering vulnerability scanning and vulnerability management.

All OpenVAS products are free software, and most components are licensed under the GNU General Public License (GPL). Plugins for OpenVAS are written in the Nessus Attack Scripting Language, NASL.

 

Step 1: Disable SELinux

sed -i 's/=enforcing/=disabled/' /etc/selinux/config

and reboot the machine.

Step 2:  Install dependencies

yum -y install wget rsync curl net-tools

Step 3: Install OpenVAS repository

Install the official repository so that OpenVAS works appropriately in the analysis of vulnerabilities.

wget -q -O - http://www.atomicorp.com/installers/atomic |sh

Step 4: Install OpenVAS

yum -y install openvas

Step 5: Run OpenVAS

Once OpenVAS is installed, we continue to start it by executing the following command:

openvas-setup

Once downloaded, it will be necessary to configure the IP address of GSAD (Greenbone Security Assistant), which is a web interface to manage system scans.

Step 6: Configure OpenVAS Connectivity

We go to our browser and enter the IP address of the CentOS 7 server where we have installed OpenVAS, and we will see that the following message is displayed:

Openvas dashboard

 

Automatic NVT Updates With Cron

35 1 * * * /usr/sbin/greenbone-nvt-sync > /dev/null
5 0 * * * /usr/sbin/greenbone-scapdata-sync > /dev/null
5 1 * * * /usr/sbin/greenbone-certdata-sync > /dev/null

 

How to create letsencrypt wildcard certificates

What’s Certbot?

Certbot is a free, open-source software tool for automatically using Let’s Encrypt certificates on manually-administrated websites to enable HTTPS.

Wildcard certificates

Let’s Encrypt supports wildcard certificates via ACMEv2 using the DNS-01 challenge.

It is necessary to add a TXT record specified by Certbot to the DNS server.

Caution: As it is necessary to renew a Let’s Encrypt certificate every 90 days, a new TXT record is required at every renewal.

 

Step 1: Run command
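A typical invocation (replace example.com with your own domain; the ACMEv2 server URL is the standard Let's Encrypt endpoint):

certbot certonly --manual --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  -d 'example.com' -d '*.example.com'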

 

Step 2: Update DNS TXT record
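Certbot prints a validation value; add it as a TXT record named _acme-challenge under your domain, for example:

_acme-challenge.example.com.  300  IN  TXT  "<validation value printed by certbot>"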

 

After a successful verification

Setting Up Ansible for AWS with Dynamic Inventory (EC2)

If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in Working with Inventory will not serve your needs. You may need to track hosts from multiple sources.

Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: Inventory Plugins and inventory scripts.

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For this reason, you can use the EC2 external inventory script.

You can use this script in one of two ways.

  1. The easiest is to use Ansible’s -i command-line option and specify the path to the script after marking it executable (see the example after this list).
  2. The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.
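For example (assuming Ubuntu instances and the us-east-1d group produced by the script):

ansible -i ec2.py -u ubuntu us-east-1d -m ping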

You can test the script by itself to make sure your config is correct:
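# assuming the script lives in Ansible's contrib/inventory directory
cd contrib/inventory
./ec2.py --list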


After a few moments, you should see your entire EC2 inventory across all regions in JSON.

If you use Boto profiles to manage multiple AWS accounts, you can pass --profile PROFILE name to the ec2.py script.

You can then run ec2.py --profile prod to get the inventory for the prod account, although this option is not supported by ansible-playbook. You can also use the AWS_PROFILE environment variable – for example:
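# the playbook name is illustrative
AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml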

 

ec2.py