Categories
MikroTik Networking Projects

Building a network in Entre Ríos

It is always nice to fly. I took two flights: the first with a short stop at Aeroparque (AEP), then a quick hop to Paraná City (PRA). The skies were just beautiful.

Travelling MDZ to AEP

My current company is establishing operations in Entre Ríos province, where we are launching a brand-new ISP service for the towns of Crespo, Libertador San Martín, and Paraná City. This was the main task, along with some smaller consulting and assistance work.

My first time seeing the mighty Paraná river

Connecting People

Service is provided through two upstream providers, and BGP sessions must be established with both to announce a /24 prefix of our AS, probably receiving just a default route from each upstream. There is no need for the full table, yet. Both providers have approximately the same AS-path length.

We’ll use a MikroTik CCR1036-8G-2S+ as the border router. Although its SFP+ ports allow 10 Gbps operation, for now the links will be negotiated at 1000 Mbps using SFP modules.

The main customer will be directly connected to this router over copper at 1 Gbps. They use a MikroTik CRS326-24G-2S+ as their edge router, which will be enough for their 100 Mbps service. They also provide us co-location, so I installed the core router in their shelter, which is backed by dual A/C systems and dual UPS-rectifier systems.

The new router racked and powered up
We’ll have some mate while waiting for the upstream provider port to go into no shutdown

Once the upstream was up, I could see they were in fact sending us the full BGP table, which we don’t need yet, so a couple of route filters were configured to keep only a default gateway in the main routing table. As the default route was configured statically, the route filter policy was as simple as discarding all BGP input.

[admin@MikroTik] > routing filter export 
# jun/18/2019 16:24:37 by RouterOS 6.42.6
#
/routing filter
add action=discard chain=dynamic-in protocol=bgp
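
For context, the rest of the border-router setup follows the usual RouterOS 6 pattern. The following is only a sketch with documentation-range addresses and placeholder AS numbers, not the production export:

# placeholder ASNs and TEST-NET addresses throughout
/routing bgp instance
set default as=65500 router-id=192.0.2.1
/routing bgp network
add network=203.0.113.0/24 synchronize=no
/routing bgp peer
add name=upstream1 remote-address=192.0.2.9 remote-as=65510
add name=upstream2 remote-address=192.0.2.13 remote-as=65520
# static defaults towards both upstreams, the second as backup
/ip route
add dst-address=0.0.0.0/0 gateway=192.0.2.9
add dst-address=0.0.0.0/0 gateway=192.0.2.13 distance=2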

At this site there was also an Ubiquiti AirFiber 11X wireless link to reach the town of Libertador San Martín. Both radios had been installed previously but not configured, so I connected to the radio at the site and configured it as Master. We then traveled to the remote end, configured that radio as Slave, and the link came up just fine. Ubiquiti is keeping its firmware and UI up to date, and it has become pretty straightforward to get a link working, even for someone with little or no networking skills.

Do you think this ease of use is making the job easier for us, or is it the starting point of wireless-spectrum madness?

From this node at Libertador, we installed two single-mode fiber lines, one connecting the town hospital and the other the local university. MikroTik CRS326-24G-2S+ switches were installed at each site to act as CPEs.

All monitoring, reporting, and backup systems had been configured beforehand at our NOC, so that was all for us on site.

Watching cars go by

I also assisted with a brand-new urban surveillance camera installation at the entrance of the Raúl Uranga – Carlos Sylvestre Begnis Subfluvial Tunnel. The objective was to read license plates at this strategic point, which is one of the few crossings of the Paraná river, and the one with the most vehicle traffic.

We had previously selected a Hikvision DS-2CD4A26FWD-IZHS8/P (yep, that’s the model name) camera, which had already been installed by a Policía de Entre Ríos technician. This camera is specifically designed for license-plate recognition (LPR): it supports OCR in hardware and works in very low light conditions, as low as 0.0027 lux.

Traces of Paraná City

I stayed at Hotel Howard Johnson Plaza Resort & Casino Mayorazgo, and I encourage you to visit it. The rooms are lovely and the staff is excellent.

My view from the hotel room

Be sure to schedule time to walk along the banks of the Paraná river, visit the Martiniano Leguizamón historic town museum, and enjoy yourself. This is a beautiful city.

Blue skies at Crespo, Entre Ríos
Categories
MikroTik Networking Projects

Using The Dude on MikroTik CHR

The Dude network monitor is a RouterOS package intended to help manage a network environment. It can automatically scan all devices within specified subnets, draw and lay out a map of your network, monitor services, and alert you in case of problems.

Previous versions of The Dude were developed as Windows x86 software, but later versions went through a full rebuild, and it is now distributed as a RouterOS package. This comes in handy, as the same RouterOS instance can be linked to the network, eliminating the need for additional VPNs on servers or gateways. Instead, all tunneling can be done inside the CHR instance.

The Windows versions also had a web GUI which was, frankly, awful. For the new editions you’ll need the software client, available at https://download.mikrotik.com/routeros/6.43.14/dude-install-6.43.14.exe

It will update itself whenever you connect to a newer RouterOS version. Just be sure to run it as administrator on Windows 10.

Installing

Get the Dude package for CHR from https://download.mikrotik.com/routeros/6.43.14/dude-6.43.14.npk.

Once downloaded, upload it to the CHR instance via Winbox drag-and-drop or an FTP client, or just download it from inside the CHR:

Downloading from CHR
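
If you go with the last option, /tool fetch downloads it straight from the CHR terminal (same URL as above):

/tool fetch url="https://download.mikrotik.com/routeros/6.43.14/dude-6.43.14.npk"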

Reboot the CHR instance, and you will find the new Dude menu inside Winbox.

New Dude menu

Head to Dude > Settings and tick Enabled to enable the server. A few folders will be created on the filesystem, and the server will be ready to accept connections on port 8291. The previous x86-based versions of The Dude used ports TCP/2210 or TCP/2211, but in this new integrated RouterOS package, all management is handled on the same port as Winbox.
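
The same can be done from the terminal, since the package adds a /dude menu (a minimal sketch):

/dude set enabled=yes
/dude print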

If you still don’t have the client, get it on https://download.mikrotik.com/routeros/6.43.14/dude-install-6.43.14.exe.

Once you connect, the following window should appear by default. You can run a discovery for multiple networks and let The Dude map your network for you, but it will only discover Layer 3 adjacencies. In order to have complete control over the monitoring, I suggest building your backbone map manually and letting the autodiscovery handle your management VLANs/VRFs.

Categories
Projects

Machine Learning – Weighted Train Data

The last post was an introduction to Machine Learning and how outcomes can be predicted using sklearn’s LogisticRegression.

Sometimes the input data may require additional processing to prefer certain classes of information that are considered more valuable or more representative of the outcome.

The LogisticRegression model allows you to set this preference, or weight, at creation time, or later when the model is fitted.

The data used in the previous entry had four main classes: DRAFT, ACT, SLAST and FLAST. Once encoded and fitted, each can be selected by its index. I prefer to initialize some mnemonic selectors to ease the coding and make the code more human-friendly.

x_columns_names = ['DRAFT', 'ACT', 'SLAST', 'FLAST']
y_columns_names = ['PREDICTION']

# Indexes for columns, used for weighting
DRAFT = 0
ACT = 1
SLAST = 2
FLAST = 3

# Weights
DRAFT_WEIGHT = 1
ACT_WEIGHT = 1
SLAST_WEIGHT = 1
FLAST_WEIGHT = 1

The model can be initialized later using the following call, where the class_weight parameter references the previous helpers.

model = LogisticRegression(
    solver='lbfgs',
    multi_class='multinomial',
    max_iter=5000,
    class_weight={
        DRAFT: DRAFT_WEIGHT,
        ACT: ACT_WEIGHT,
        SLAST: SLAST_WEIGHT,
        FLAST: FLAST_WEIGHT,
    })
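
Alternatively, weighting can be applied at fit time through the sample_weight parameter, which weights individual rows instead of whole classes. A minimal sketch, assuming x_train and y_train are encoded as in the previous entry; the doubling rule here is arbitrary:

import numpy as np

# Hypothetical per-row weights: rows whose encoded ACT column is
# non-zero count double when the model is fitted
sample_weights = np.where(x_train['ACT'] > 0, 2.0, 1.0)

model.fit(x_train, y_train.values.ravel(), sample_weight=sample_weights)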

Categories
Projects

Machine Learning – Classification and Regression Analysis

Machine Learning is the science and art of programming computers so they can learn from data.

For example, your spam filter is a Machine Learning program that can learn to flag spam given examples of spam emails (flagged by users, detected by other methods) and examples of regular (non-spam, also called “ham”) emails.

The examples that the system uses to learn are called the training set. The data used to evaluate the predictions is called the test set. The performance measure of the prediction model is called accuracy, and it’s the objective of this project.

The tools

To tackle this, Python 3 will be used, along with the scikit-learn package. You can find more info about this package on its official page.

https://scikit-learn.org/stable/tutorial/basic/tutorial.html

Supervised learning

In general, a learning problem considers a set of n samples of data and then tries to predict properties of unknown data. If each sample is more than a single number and, for instance, a multi-dimensional entry (aka multivariate data), it is said to have several attributes or features.

Supervised learning consists in learning the link between two datasets: the observed data X and an external variable y that we are trying to predict, usually called “target” or “labels”. Most often, y is a 1D array of length n_samples.

All supervised estimators in scikit-learn implement a fit(X, y) method to fit the model and a predict(X) method that, given unlabeled observations X, returns the predicted labels y.

If the prediction task is to classify the observations in a set of finite labels, in other words to “name” the objects observed, the task is said to be a classification task. On the other hand, if the goal is to predict a continuous target variable, it is said to be a regression task.

When doing classification in scikit-learn, y is a vector of integers or strings.
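
As a toy illustration of that fit/predict contract, with made-up data:

from sklearn.linear_model import LogisticRegression

# Made-up observations: two features per sample, one label per sample
X = [[0.1, 0.2], [0.3, 0.1], [2.1, 2.3], [2.5, 2.0]]
y = ['ham', 'ham', 'spam', 'spam']

clf = LogisticRegression(solver='lbfgs')
clf.fit(X, y)                     # learn the link between X and y
print(clf.predict([[2.2, 2.4]]))  # predicted label for a new observation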

The Models

LinearRegression, in its simplest form, fits a linear model to the data set by adjusting a set of parameters in order to make the sum of the squared residuals of the model as small as possible.

LogisticRegression, despite its somewhat counter-intuitive name, is a better choice when linear regression is not the right approach, as linear regression gives too much weight to data far from the decision frontier. The linear approach here is to fit a sigmoid or logistic function to the data.

Logistic function fitted to the data (figure from the scikit-learn documentation)

The Data

Data is presented in a CSV file. It has around 2500 rows with 5 columns. Correct formatting and integrity of values cannot be assured, so additional processing will be needed. The sample file looks like this.
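
The column layout is what matters here; a hypothetical couple of rows in the same shape (values are placeholders, not real data):

DRAFT,ACT,SLAST,FLAST,PREDICTION
value1,value2,value3,value4,label1
value5,value6,value7,value8,label2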

The Code

We need three main libraries to start:

  • numpy, which is basically an N-dimensional array object. It also has tools for linear algebra, Fourier transforms and random numbers.
    It can be used as an efficient multi-dimensional container of generic data, where arbitrary data-types can be defined.
  • pandas, which provides high-performance, easy-to-use data structures and data analysis tools.
  • sklearn, the main machine learning library. It has capabilities for classification, regression, clustering, dimensionality reduction, model selection and data preprocessing.

A non-essential but useful library is matplotlib, used to plot the data sets.

In order to feed data into sklearn models, it has to be encoded first. As the sample data contains strings, or labels, a LabelEncoder is needed. Next, the prediction model is declared; a LogisticRegression model is used.

The input data file path is also declared, in order to be loaded with pandas.read_csv().

import pandas as pd
import numpy as np
import matplotlib.pyplot as pyplot

from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression

encoder = LabelEncoder()
model = LogisticRegression(
    solver='lbfgs', multi_class='multinomial', max_iter=5000)

# Input dataset
file = "sample_data.csv"

The CSV file can be loaded into a pandas dataframe in a single line. The library also provides a convenient method to remove any rows with missing values.

# Use pandas to load csv. Pandas can eat mixed data with numbers and strings
data = pd.read_csv(file, header=0, error_bad_lines=False)
# Remove missing values
data = data.dropna()

print("Valid data items : %s" % len(data))

Once loaded, the data needs to be encoded to be fitted into the prediction model. This is handled by the previously declared LabelEncoder. Once encoded, the x and y datasets are selected. The pandas library provides a way to drop entire labels from a dataframe, which makes it easy to select each dataset.

encoded_data = data.apply(encoder.fit_transform)
x = encoded_data.drop(columns=['PREDICTION'])
y = encoded_data.drop(columns=['DRAFT', 'ACT', 'SLAST', 'FLAST'])

The main objective is to test against different lengths of train and test data, to find out how much data provides the best accuracy. The length will be incremented in steps of 100 to get a broad variety of results.

length = 100
scores = []
lengths = []
while length < len(x):
    x_train = x[:length]
    y_train = y[:length]
    # Sample the test rows once, so x_test and y_test stay aligned
    test_index = x.sample(n=length).index
    x_test = x.loc[test_index]
    y_test = y.loc[test_index]
    print("Fitting model for %s training values" % length)
    model.fit(x_train, y_train.values.ravel())
    score = model.score(x_test, y_test)
    print("Score for %s training values is %0.6f" % (length, score))
    # Record the score against the length that produced it, then step up
    scores.append(score)
    lengths.append(length)
    length = length + 100

Finally, a plot is made with the accuracy scores.

pyplot.plot(lengths, scores)
pyplot.ylabel('accuracy')
pyplot.xlabel('values')
pyplot.show()
Categories
Projects

Using Zabbix API for Custom Reports

Zabbix is an open source monitoring tool for diverse IT components, including networks, servers, virtual machines (VMs) and cloud services. It provides monitoring metrics such as network utilization, CPU load and disk space consumption. Data can be collected in an agent-less fashion using SNMP or ICMP, or with a multi-platform agent, available for most operating systems.

Even though it is considered one of the best NMSs on the market, its reporting capabilities are very limited. For example, this is an availability report created with PRTG.


And this is a Zabbix report. There are no graphs, no data tables, and it is difficult to establish a defined time span for the data collection.

My client required an executive report with the following information.

  • Host / Service Name
  • Minimum SLA for ICMP echo request monitoring
  • Achieved SLA for ICMP echo request monitoring
  • Memory usage graph, if the host is SNMP-monitored
  • Main network interface graph, if the host is SNMP-monitored
  • Storage usage graph, also if the host is SNMP-monitored

Using the Zabbix API

To call the API, we need to send HTTP POST requests to the api_jsonrpc.php file located in the frontend directory. For example, if the Zabbix frontend is installed under http://company.com/zabbix, the HTTP request to call the apiinfo.version method may look like this:

POST http://company.com/zabbix/api_jsonrpc.php HTTP/1.1
Content-Type: application/json-rpc
{
    "jsonrpc":"2.0",
    "method":"apiinfo.version",
    "id":1,
    "auth":null,
    "params":
        {
        }
}

The request must have the Content-Type header set to one of these values: application/json-rpc, application/json or application/jsonrequest.

Before accessing any data, it’s necessary to log in and obtain an authentication token. The user.login method is used for this.

{
    "jsonrpc": "2.0",
    "method": "user.login",
    "params": {
        "user": "Admin",
        "password": "zabbix"
    },
    "id": 1,
    "auth": null
}

If the authentication request succeeds, the API response will look like this.

{
    "jsonrpc": "2.0",
    "result": "0424bd59b807674191e7d77572075f33",
    "id": 1
}

The result field contains the authentication token, which must be sent along with every subsequent request.
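
For example, a follow-up host.get request carries the token in the auth field:

{
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {
        "output": ["hostid", "host"]
    },
    "auth": "0424bd59b807674191e7d77572075f33",
    "id": 2
}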

Instead of reinventing the wheel, let’s use an existing library to call the API.

Using jqzabbix jQuery plugin for the Zabbix API

GitHub user kodai provides a nice JavaScript client, in the form of a jQuery plugin. You can get it at https://github.com/kodai/jqzabbix.

The usage is quite straightforward: first, include both jQuery and jqzabbix.js in your HTML file. I’m using the Cloudflare CDN to link jQuery.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script type="text/javascript" charset="utf-8" src="jqzabbix.js"></script>

An object has to be created to initialize the client. I prefer to set url, username, and password dynamically, with data provided by the end user, so no credentials are stored here.

server = new $.jqzabbix({
	url: url,  			// URL of Zabbix API
	username: user,   	// Zabbix login user name
	password: pass,  	// Zabbix login password
	basicauth: false,   // If you use basic authentication, set true for this option
	busername: '',      // User name for basic authentication
	bpassword: '',      // Password for basic authentication
	timeout: 5000,      // Request timeout (milli second)
	limit: 1000,        // Max data number for one request
});

As mentioned before, the first step is to authenticate with the API and save the authorization token. The jqzabbix library handles this by first making a request to get the API version, and then authenticating.

server.getApiVersion();
server.userLogin();

If the authentication procedure completes properly, the API version and authentication ID are stored as properties of the server object. The userLogin() method also accepts callbacks for both success and error.

var success = function() { console.log('Success!'); }
var error = function() { console.error('Error!'); }

server.userLogin(null, success, error)

Once authenticated, Zabbix API methods are called with the sendAjaxRequest method, in the following fashion:

server.sendAjaxRequest(method, params, success, error)
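
For instance, the version check from earlier can be reproduced through the plugin like this:

// Query the API version and log the result on success
server.sendAjaxRequest(
	'apiinfo.version',
	{},
	function (e) { console.log('API version: ' + e.result); },
	error
);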

Retrieving Hosts

I set a global array hosts to store the hosts information.
Another global array, SEARCH_GROUPS, is used to define which host groups should be considered in the API request. By setting the selectHosts parameter, the hosts in those groups are retrieved in the response too.

On success, the result is stored on the hosts array, and the get_graphs function is called. If there is an error, the default error callback is fired.

hosts = [];
function get_hosts() {
	// Get hosts
	server.sendAjaxRequest(
		"hostgroup.get",
		{
			"selectHosts": true,
			"filter": {
				"name": SEARCH_GROUPS
			}
		},
		function (e) {
			e.result.forEach(group => {
				group.hosts.forEach(host => {
					hosts.push(host);
				});
			});
			get_graphs();
		},
		error,
	);
}

Retrieving Graphs

Previously, user-defined graphs had been configured on Zabbix to match the client’s requirements for specific information. The names of all graphs that should be included in the report were given a “ – Report” suffix.

This function retrieves all those graphs; by setting selectHosts, the hosts linked to each graph are retrieved too.

On success, the result is stored on the graphs array, and the render function is called. If there is an error, the default error callback is fired.

graphs = [];
function get_graphs() {
	server.sendAjaxRequest(
		"graph.get",
		{
			"selectHosts": "*",
			"search": {
				name: "- Report"
			}
		},
		function (e) {
			graphs = e.result;
			render();
		},
		error
	)
}

Retrieving Graph Images Instead of Graph Data

By this time you should have noticed that the Zabbix API allows you to retrieve the values behind the graphs, but not the rendered images. An additional PHP file will be stored alongside the HTML and JS files, as a helper that calls the web interface using php_curl.

You can get it at https://zabbix.org/wiki/Get_Graph_Image_PHP. I made a couple of modifications so username and password are passed in the URL query string, along with parameters for the graph ID, the time span, and the image dimensions.

<?php
//////////
// GraphImgByID v1.1 
// (c) Travis Mathis - [email protected]
// It's free use it however you want.
// ChangeLog:
// 1/23/12 - Added width and height to GetGraph Function
// 23/7/13 - Zabbix 2.0 compatibility
// ERROR REPORTING
error_reporting(E_ALL);
set_time_limit(1800);


$graph_id = filter_input(INPUT_GET,'id');
$period= filter_input(INPUT_GET,'period');
$width= filter_input(INPUT_GET,'width');
$height = filter_input(INPUT_GET,'height');
$user = filter_input(INPUT_GET,'user');
$pass = filter_input(INPUT_GET,'pass');

//CONFIGURATION
$z_server = 'zabbix_url'; //set your URL here
$z_user = $user;
$z_pass = $pass;
$z_img_path = "/usr/local/share/zabbix/custom_pages/tmp_images/";

//NON CONFIGURABLE
$z_tmp_cookies = "";
$z_url_index = $z_server . "index.php";
$z_url_graph = $z_server . "chart2.php";
$z_url_api = $z_server . "api_jsonrpc.php";

// Zabbix 1.8
// $z_login_data  = "name=" .$z_user ."&password=" .$z_pass ."&enter=Enter";
// Zabbix 2.0
$z_login_data = array('name' => $z_user, 'password' => $z_pass, 'enter' => "Sign in");

// FUNCTION
// PHP deprecates required parameters after optional ones, so the
// dimensions get (arbitrary) defaults too
function GraphImageById($graphid, $period = 3600, $width = 800, $height = 200) {
    global $z_server, $z_user, $z_pass, $z_tmp_cookies, $z_url_index, $z_url_graph, $z_url_api, $z_img_path, $z_login_data;
    // file names
    $filename_cookie = $z_tmp_cookies . "zabbix_cookie_" . $graphid . ".txt";
    $image_name = $z_img_path . "zabbix_graph_" . $graphid . ".png";

    //setup curl
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $z_url_index);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $z_login_data);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $filename_cookie);
    curl_setopt($ch, CURLOPT_COOKIEFILE, $filename_cookie);
    // login
    curl_exec($ch);
    // get graph
    curl_setopt($ch, CURLOPT_URL, $z_url_graph . "?graphid=" . $graphid . "&width=" . $width . "&height=" . $height . "&period=" . $period);
    $output = curl_exec($ch);
    curl_close($ch);
    // delete cookie
    header("Content-type: image/png");
    unlink($filename_cookie);
    /*
      $fp = fopen($image_name, 'w');
      fwrite($fp, $output);
      fclose($fp);
      header("Content-type: text/html");
     */
    return $output;
}

echo GraphImageById($graph_id, $period, $width, $height);
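
With that helper in place, the frontend can embed a graph as a plain image. The file name and credentials below are hypothetical; the query parameters match the filter_input() calls above:

<!-- one week of data (604800 s) for graph ID 1337, at 800x200 -->
<img src="zabbix_graph.php?id=1337&period=604800&width=800&height=200&user=reporter&pass=secret">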

Quick and Dirty Frontend

You should be able to customize this small frontend to your needs.

<html>

<head>
	<link rel="stylesheet" href="https://unpkg.com/chota@latest">
	<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
	<script src="https://cdn.jsdelivr.net/npm/js-cookie@2/src/js.cookie.min.js"></script>
	<script src="jqzabbix.js"></script>
	<style>
		.host-container {
			margin-bottom: 3em;
		}
		@media print {
			.host-container {
				page-break-before: auto;
				page-break-after: auto;
				page-break-inside: avoid;
			}
			img {
				display: block;
				page-break-before: auto;
				page-break-after: auto;
				page-break-inside: avoid;
			}
		}
	</style>
</head>

<body>
	<div id="container" class="container">

		<div class="row" style="margin-bottom: 3em">
			<div class="col">
				<h2>Services and Availability Report</h2>
				<table id="table" class="bg-dark">
					<thead>
						<th>Host Name</th>
						<th>Target</th>
						<th class="is-text-center">Availability</th>
						<th class="is-text-center">Availability Status</th>
						<th class="is-text-center">Total Availability</th>
					</thead>
				</table>
			</div>
		</div>


		<div id="template" style="display: none">
			<div class="host-container">
				<div class="row bg-dark">
					<div class="col-12">
						<span id="host-HOST_ID-name">Service Name</span>
					</div>
				</div>
				<div class="row bg-light">
					<div class="col-3">
						Status
					</div>
					<div class="col-3">
						SLA Minimum
					</div>
					<div class="col-3">
						SLA
					</div>
				</div>
				<div class="row bg-primary">
					<div class="col-3">
						<span id="host-HOST_ID-status">OK</span>
					</div>
					<div class="col-3">
						<span id="host-HOST_ID-sla">99.9%</span>
					</div>
					<div class="col-3">
						<span id="host-HOST_ID-sla-value">100%</span>
					</div>
				</div>
				<div class="row is-text-center" id="host-HOST_ID-graphs">
				</div>
			</div>
		</div>

	</div>

	<script src="ui.js"></script>

</body>

</html>

Result

The final page is a complete report, including a summary table that sums up service status and SLA compliance.

Categories
Projects

Customizing NetBox Templates

NetBox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers.


When I started using NetBox in my daily job, I planned to use it as a replacement for all the spreadsheets I had for switch configurations, IP address management, secrets, and VLAN assignments. NetBox can handle all of this and more, but the stock interface didn’t suit my needs.

NetBox is built on the Python Django framework, which I have used for other projects. I used Visual Studio Code to clone the repository and debug, as it has good support for the Django template language.

I keep a copy of the repository on my local machine for ease of modification. Beforehand, I had set DEBUG = True in netbox/configuration.py and allowed localhost and my local network to access the development server. I also set the correct settings to connect to the existing PostgreSQL database.

Connecting the existing DB to my local development server
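
The relevant lines in netbox/configuration.py end up looking like this sketch; hostnames and credentials are placeholders, not the real environment:

# Hosts allowed to reach the development server
ALLOWED_HOSTS = ['localhost', '127.0.0.1', '192.168.1.100']

# Connection to the existing PostgreSQL database
DATABASE = {
    'NAME': 'netbox',
    'USER': 'netbox',
    'PASSWORD': 'changeme',   # placeholder
    'HOST': 'localhost',
    'PORT': '',
}

DEBUG = True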

This environment works for testing purposes, but the best you can do is set up separate development and production environments, and commit your changes to production once everything is tested.

Using VSCode to debug Django

The URL definition for the single device view is around line #147 of the netbox/dcim/urls.py file, and it looks like this.

 url(r'^devices/(?P<pk>\d+)/$', views.DeviceView.as_view(), name='device'),

Heading to the DeviceView view, I put a breakpoint on the interfaces QuerySet of the view definition and launched the debugger. The default location is http://localhost:8000.

Setting up the debugger
Breakpoints
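
For reference, the debugger entry in .vscode/launch.json was along these lines (a standard VS Code Django configuration; the workspace is assumed to be the repository root):

{
    "name": "NetBox: runserver",
    "type": "python",
    "request": "launch",
    "program": "${workspaceFolder}/netbox/manage.py",
    "args": ["runserver"],
    "django": true
}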

I headed to http://localhost:8000/dcim/devices/570/, where I had defined a switch with several VLANs, to hit the breakpoint and find out whether the QuerySet had information about the VLANs, or whether they were queried on a per-interface basis in the interface view.

QuerySet returns this

Luckily for me, the QuerySet retrieved all the information I needed, and it is passed to the template via a render() call.

All the information I want is rendered in this table. This is the power of the Django framework. I added line #513 as an additional header for the VLANs column.

This table has a for loop that iterates over each interface of the device, so I edited the included template file at dcim/inc/interface.html.

Both the tagged and untagged VLAN groups get a bolded title, with the VID and VLAN name shown after it. I used the dictsort filter, which is part of the Django framework, to sort the VLANs by their VID.

dcim/inc/interface.html
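
The added cell looked roughly like the following sketch. It assumes the template's interface variable is iface and that the model exposes untagged_vlan and tagged_vlans, which is how NetBox names them; treat it as illustrative rather than a verbatim diff.

<td>
    {% if iface.untagged_vlan %}
        <strong>Untagged:</strong>
        {{ iface.untagged_vlan.vid }} ({{ iface.untagged_vlan.name }})
    {% endif %}
    {% if iface.tagged_vlans.all %}
        <strong>Tagged:</strong>
        {% for vlan in iface.tagged_vlans.all|dictsort:"vid" %}
            {{ vlan.vid }} ({{ vlan.name }}){% if not forloop.last %},{% endif %}
        {% endfor %}
    {% endif %}
</td>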

The final result looks like the following image, and it makes it possible to keep track of all the VLANs on all ports at first sight. This is easier and more user-friendly than getting that information interface by interface, or building a new custom view.

New Template Rendering
Categories
Projects

Running NetBox in Docker

NetBox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers.

A quick way to get it working is to use the Docker stack provided at https://github.com/ninech/netbox-docker.

Installing

First, I cloned the repository.

$ git clone -b master https://github.com/ninech/netbox-docker.git
$ cd netbox-docker

Once cloned, I used docker-compose to pull the images

$ docker-compose pull

And then I started the stack with

$ docker-compose up -d

The service will be up and running after a few minutes. Once ready, you can find out where to connect with

$ docker-compose port nginx 8080

Or use this snippet

$ echo "http://$(docker-compose port nginx 8080)/"

Here I use Portainer as a GUI to manage Docker, and Traefik as a reverse proxy to enable FQDN access to the services behind it. I added an entry to my DNS to point netbox.arturo.local at the Docker host, on the exposed Nginx port.

Categories
Projects

Spiceworks Customization

Andrew Foster at Topland Communications reached out to me via Upwork, looking to customize and fine-tune an existing Spiceworks installation.

After a quick inspection, I decided to tackle the project by compacting the database first. Spiceworks keeps a lot of logs regarding system activity, and in order to clean them, the first step is to stop the Spiceworks service.

Logs are stored in two main locations:

  • C:\Program Files\Spiceworks\Log, for the Spiceworks service
  • C:\Program Files\Spiceworks\httpd\log\, where the Apache server keeps them

Once the logs were cleaned, I compacted the DB to increase performance, and started the service again.
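
Sketched as a batch sequence, assuming the default install path and that the Windows service is named Spiceworks:

rem Stop the service before touching logs or the database (service name assumed)
net stop Spiceworks
rem Clear both log locations, default install path assumed
del /q "C:\Program Files\Spiceworks\Log\*.*"
del /q "C:\Program Files\Spiceworks\httpd\log\*.*"
rem ...compact the database here, then bring the service back up
net start Spiceworks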

Ticket rules were configured to auto-assign support tickets, saving time for the support operators.

And the user portal was customized to match the company colors and logo.

Categories
Projects

Axis CCTV and Video Management System

A security company contacted me through Upwork, searching for support on a brand-new installation of an Axis Camera Station system at an educational institution. The company, Coyote Cabling from New Mexico, US, was in charge of a 52-camera installation, with an option to add 32 existing cameras at a later stage.

After some research, they decided to use Axis S1148 servers, which are really re-branded Dell servers. The S1148 comes with a ready-to-use Windows Server 2012 OS and with Axis Camera Station preinstalled. This vendor-supported hardware reduced licensing costs, because the licenses are included in the server price, and avoided any incompatibilities.

Categories
Projects

ISPConfig 3 in Digital Ocean Droplet

The client wanted to set up an ISPConfig 3 control panel on a DigitalOcean droplet.

DigitalOcean works well for this kind of service, because they provision the public addresses directly on the server. The configuration is easier to build and maintain thanks to the DigitalOcean integrated firewall.

ISPConfig allows you to manage servers and hosting plans from a friendly GUI.