
Using Zabbix API for Custom Reports

Zabbix is an open source monitoring tool for diverse IT components, including networks, servers, virtual machines (VMs) and cloud services. It provides monitoring metrics such as network utilization, CPU load and disk space consumption. Data can be collected in an agentless fashion using SNMP or ICMP, or with a multi-platform agent, available for most operating systems.

Even though it is considered one of the best network monitoring systems on the market, its reporting capabilities are very limited. PRTG, for example, can produce an availability report full of graphs and data tables. A Zabbix report, by contrast, has no graphs and no data tables, and it is difficult to establish a defined time span for the data collection.

My client required an executive report with the following information:

  • Host / service name
  • Minimum SLA for ICMP echo request monitoring
  • Achieved SLA for ICMP echo request monitoring
  • Memory usage graph, if the host is SNMP-monitored
  • Main network interface graph, if the host is SNMP-monitored
  • Storage usage graph, if the host is SNMP-monitored

Using the Zabbix API

To call the API, we need to send HTTP POST requests to the api_jsonrpc.php file located in the frontend directory. For example, if the Zabbix frontend is installed under http://company.com/zabbix, the HTTP request to call the apiinfo.version method may look like this:

POST http://company.com/zabbix/api_jsonrpc.php HTTP/1.1
Content-Type: application/json-rpc
{
    "jsonrpc": "2.0",
    "method": "apiinfo.version",
    "id": 1,
    "auth": null,
    "params": {}
}

The request must have the Content-Type header set to one of these values: application/json-rpc, application/json or application/jsonrequest.

Before accessing any data, it’s necessary to log in and obtain an authentication token. The user.login method is used for this.

{
    "jsonrpc": "2.0",
    "method": "user.login",
    "params": {
        "user": "Admin",
        "password": "zabbix"
    },
    "id": 1,
    "auth": null
}

If the authentication request succeeds, the API response will look like this.

{
    "jsonrpc": "2.0",
    "result": "0424bd59b807674191e7d77572075f33",
    "id": 1
}

The result field contains the authentication token, which must be sent with all subsequent requests.
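
For example, a subsequent host.get request carries the token in the auth field. This is a minimal sketch; the token value is the one returned by user.login:

{
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {
        "output": ["hostid", "name"]
    },
    "auth": "0424bd59b807674191e7d77572075f33",
    "id": 2
}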

Instead of reinventing the wheel, let’s use an existing library to call the API.

Using jqzabbix jQuery plugin for the Zabbix API

GitHub user kodai provides a nice JavaScript client in the form of a jQuery plugin. You can get it at https://github.com/kodai/jqzabbix.

Usage is quite straightforward. First, include both jQuery and jqzabbix.js in your HTML file. I use the Cloudflare CDN to load jQuery.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script type="text/javascript" charset="utf-8" src="jqzabbix.js"></script>

An object has to be created to initialize the client. I prefer to set url, username, and password dynamically, with data provided by the end user, so no credentials are stored here.

server = new $.jqzabbix({
	url: url,  			// URL of Zabbix API
	username: user,   	// Zabbix login user name
	password: pass,  	// Zabbix login password
	basicauth: false,   // If you use basic authentication, set true for this option
	busername: '',      // User name for basic authentication
	bpassword: '',      // Password for basic authentication
	timeout: 5000,      // Request timeout (milli second)
	limit: 1000,        // Max data number for one request
});

As mentioned before, the first step is to authenticate with the API and save the authorization token. The jqzabbix library handles this by first making a request to get the API version, and then authenticating.

server.getApiVersion();
server.userLogin();

If the authentication procedure completes properly, the API version and authentication ID are stored as properties of the server object. The userLogin() method also accepts callbacks for both success and error.

var success = function() { console.log('Success!'); }
var error = function() { console.error('Error!'); }

server.userLogin(null, success, error)

Once authenticated, Zabbix API methods are called with the sendAjaxRequest method, in the following fashion:

server.sendAjaxRequest(method, params, success, error)

Retrieving Hosts

I set a global array, hosts, to store the host information. Another global array, SEARCH_GROUPS, defines which host groups should be considered in the API request. By setting the selectHosts parameter to true, the hosts belonging to those groups are retrieved in the same response.

On success, the result is stored in the hosts array, and the get_graphs function is called. If there is an error, the default error callback fires.

hosts = [];
function get_hosts() {
	// Get hosts
	server.sendAjaxRequest(
		"hostgroup.get",
		{
			"selectHosts": true,
			"filter": {
				"name": SEARCH_GROUPS
			}
		},
		function (e) {
			e.result.forEach(group => {
				group.hosts.forEach(host => {
					hosts.push(host);
				});
			});
			get_graphs();
		},
		error,
	);
}

Retrieving Graphs

Previously, user-defined graphs were configured in Zabbix to match the client’s requirements. The names of all graphs to be included in the report end with the " - Report" suffix.

This function retrieves all those graphs and, by setting the selectHosts parameter, the hosts linked to each graph are retrieved too.

On success, the result is stored in the graphs array, and the render function is called. If there is an error, the default error callback fires.

graphs = [];
function get_graphs() {
	server.sendAjaxRequest(
		"graph.get",
		{
			"selectHosts": "*",
			"search": {
				name: "- Report"
			}
		},
		function (e) {
			graphs = e.result;
			render();
		},
		error
	)
}

Retrieving Graph Images Instead of Graph Data

By this time you should have noticed that the Zabbix API allows you to retrieve the data behind the graphs, but not the rendered images. An additional PHP file will be stored alongside the HTML and JS files, as a helper that calls the web interface using php_curl.

You can get it at https://zabbix.org/wiki/Get_Graph_Image_PHP. I made a couple of modifications to it so that username and password are passed in the URL query string, along with parameters for the graph ID, the time span, and the image dimensions.

<?php
//////////
// GraphImgByID v1.1 
// (c) Travis Mathis - [email protected]
// It's free use it however you want.
// ChangeLog:
// 1/23/12 - Added width and height to GetGraph Function
// 23/7/13 - Zabbix 2.0 compatibility
// ERROR REPORTING
error_reporting(E_ALL);
set_time_limit(1800);


$graph_id = filter_input(INPUT_GET,'id');
$period= filter_input(INPUT_GET,'period');
$width= filter_input(INPUT_GET,'width');
$height = filter_input(INPUT_GET,'height');
$user = filter_input(INPUT_GET,'user');
$pass = filter_input(INPUT_GET,'pass');

//CONFIGURATION
$z_server = 'zabbix_url'; //set your URL here
$z_user = $user;
$z_pass = $pass;
$z_img_path = "/usr/local/share/zabbix/custom_pages/tmp_images/";

//NON CONFIGURABLE
$z_tmp_cookies = "";
$z_url_index = $z_server . "index.php";
$z_url_graph = $z_server . "chart2.php";
$z_url_api = $z_server . "api_jsonrpc.php";

// Zabbix 1.8
// $z_login_data  = "name=" .$z_user ."&password=" .$z_pass ."&enter=Enter";
// Zabbix 2.0
$z_login_data = array('name' => $z_user, 'password' => $z_pass, 'enter' => "Sign in");

// FUNCTION
// Width and height get defaults too, since required parameters
// may not follow an optional one in recent PHP versions
function GraphImageById($graphid, $period = 3600, $width = 800, $height = 200) {
    global $z_server, $z_user, $z_pass, $z_tmp_cookies, $z_url_index, $z_url_graph, $z_url_api, $z_img_path, $z_login_data;
    // file names
    $filename_cookie = $z_tmp_cookies . "zabbix_cookie_" . $graphid . ".txt";
    $image_name = $z_img_path . "zabbix_graph_" . $graphid . ".png";

    //setup curl
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $z_url_index);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $z_login_data);
    curl_setopt($ch, CURLOPT_COOKIEJAR, $filename_cookie);
    curl_setopt($ch, CURLOPT_COOKIEFILE, $filename_cookie);
    // login
    curl_exec($ch);
    // get graph
    curl_setopt($ch, CURLOPT_URL, $z_url_graph . "?graphid=" . $graphid . "&width=" . $width . "&height=" . $height . "&period=" . $period);
    $output = curl_exec($ch);
    curl_close($ch);
    // delete cookie
    header("Content-type: image/png");
    unlink($filename_cookie);
    /*
      $fp = fopen($image_name, 'w');
      fwrite($fp, $output);
      fclose($fp);
      header("Content-type: text/html");
     */
    return $output;
}

echo GraphImageById($graph_id, $period, $width, $height);
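
Assuming the helper above is saved as graph.php next to the frontend files (the file name is my choice, not part of the original script), the report page can build each image URL like this, reusing the credentials the user typed in:

function graphImgUrl(graphId, period, width, height) {
	// user and pass are the same values used to initialize the jqzabbix client
	return 'graph.php?id=' + encodeURIComponent(graphId)
		+ '&period=' + encodeURIComponent(period)
		+ '&width=' + width + '&height=' + height
		+ '&user=' + encodeURIComponent(user)
		+ '&pass=' + encodeURIComponent(pass);
}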

Quick and Dirty Frontend

You should be able to customize this small frontend to your needs.

<html>

<head>
	<link rel="stylesheet" href="https://unpkg.com/chota@latest">
	<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
	<script src="https://cdn.jsdelivr.net/npm/js-cookie@2/src/js.cookie.min.js"></script>
	<script src="jqzabbix.js"></script>
	<style>
		.host-container {
			margin-bottom: 3em;
		}
		@media print {
			.host-container {
				page-break-before: auto;
				page-break-after: auto;
				page-break-inside: avoid;
			}
			img {
				display: block;
				page-break-before: auto;
				page-break-after: auto;
				page-break-inside: avoid;
			}
		}
	</style>
</head>

<body>
	<div id="container" class="container">

		<div class="row" style="margin-bottom: 3em">
			<div class="col">
				<h2>Services and Availability Report</h2>
				<table id="table" class="bg-dark">
					<thead>
						<th>Host Name</th>
						<th>Target</th>
					<th class="is-text-center">Availability</th>
						<th class="is-text-center">Availability Status</th>
						<th class="is-text-center">Total Availability</th>
					</thead>
				</table>
			</div>
		</div>


		<div id="template" style="display: none">
			<div class="host-container">
				<div class="row bg-dark">
					<div class="col-12">
						<span id="host-HOST_ID-name">Service Name</span>
					</div>
				</div>
				<div class="row bg-light">
					<div class="col-3">
						Status
					</div>
					<div class="col-3">
						SLA Minimum
					</div>
					<div class="col-3">
						SLA
					</div>
				</div>
				<div class="row bg-primary">
					<div class="col-3">
<span id="host-HOST_ID-status">OK</span>
					</div>
					<div class="col-3">
<span id="host-HOST_ID-sla">99.9%</span>
					</div>
					<div class="col-3">
<span id="host-HOST_ID-sla-value">100%</span>
					</div>
				</div>
				<div class="row is-text-center" id="host-HOST_ID-graphs">
				</div>
			</div>
		</div>

	</div>

	<script src="ui.js"></script>

</body>

</html>
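
The ui.js file referenced above is where get_hosts, get_graphs, and render live. The render function itself is not shown in this post, so here is a minimal sketch of how it could fill the per-host blocks (the PERIOD constant and the graphImgUrl helper sketched earlier are my assumptions, and I assume each host object includes hostid and name; the summary table is filled similarly):

var PERIOD = 86400;  // report time span in seconds (assumption)

function render() {
	hosts.forEach(function (host) {
		// Clone the hidden template, stamping the real host ID into the element IDs
		var block = $('#template').html().replace(/HOST_ID/g, host.hostid);
		$('#container').append(block);
		$('#host-' + host.hostid + '-name').text(host.name);

		// Attach every "- Report" graph that belongs to this host
		graphs.filter(function (g) {
			return g.hosts.some(function (h) { return h.hostid === host.hostid; });
		}).forEach(function (g) {
			$('#host-' + host.hostid + '-graphs').append(
				$('<img>').attr('src', graphImgUrl(g.graphid, PERIOD, 600, 200))
			);
		});
	});
}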

Result

The final page is a complete report, including a briefing table that summarizes service status and SLA compliance.


Customizing NetBox Templates

NetBox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers.


When I started using NetBox on my daily job, I planned to use it as a replacement for all the spreadsheets I had for switch configurations, IP address management, secrets, and VLAN assignments. NetBox can handle all of this and more, but the interface didn’t suit my needs.

NetBox is built with the Python Django framework, which I have used for other projects. I used Visual Studio Code to clone the repository and debug, since it supports the Django template language.

I keep a copy of the repository on my local machine for ease of modification. Beforehand, I set DEBUG=True in netbox/configuration.py and allowed localhost and my local network to access the development server. I also set the correct settings to connect to the existing PostgreSQL database.


This environment works for testing purposes, but the best practice is to set up separate development and production environments, and promote your changes to production once everything is tested.


The URL definition for the single device view is around line #147 of the netbox/dcim/urls.py file, and it looks like this.

 url(r'^devices/(?P<pk>\d+)/$', views.DeviceView.as_view(), name='device'),

Heading to the DeviceView view, I put a breakpoint on the interfaces QuerySet in the view definition, and launched the debugger. The development server listens at http://localhost:8000 by default.


I headed to http://localhost:8000/dcim/devices/570/, where I had defined a switch with several VLANs, to hit the breakpoint and find out whether the QuerySet had information about the VLANs, or whether they were queried on a per-interface basis in the interface view.


Luckily, the QuerySet recovered all the information I needed, and it is passed to the template via a render() call.

All the information I want is rendered in a table; this is the power of the Django framework. I added line #513 as an additional header for the VLANs column.

This table has a for loop that iterates over each interface of the device, so I edited the included template file at dcim/inc/interface.html.

Both the tagged and untagged VLAN groups get a bolded title, with the VID and VLAN name shown after it. I used the dictsort filter, which is part of the Django framework, to sort the VLANs by their VID.
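
The template change itself was shown as a screenshot in the original post; a minimal sketch of the new cell, assuming the NetBox 2.x untagged_vlan and tagged_vlans model fields and an iface loop variable (names which may differ in your version), could look like this:

<td>
	{% if iface.untagged_vlan %}
		<strong>Untagged:</strong> {{ iface.untagged_vlan.vid }} ({{ iface.untagged_vlan.name }})<br />
	{% endif %}
	{% if iface.tagged_vlans.all %}
		<strong>Tagged:</strong>
		{% for vlan in iface.tagged_vlans.all|dictsort:"vid" %}
			{{ vlan.vid }} ({{ vlan.name }}){% if not forloop.last %}, {% endif %}
		{% endfor %}
	{% endif %}
</td>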


The final result allows keeping track of all the VLANs on all ports at a glance. This is easier and more user-friendly than getting that information interface by interface, or building a new custom view.


Running NetBox in Docker

NetBox is an IP address management (IPAM) and data center infrastructure management (DCIM) tool. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers.

A quick way to get it working is to use the Docker stack provided at https://github.com/ninech/netbox-docker.

Installing

First, I cloned the repository.

$ git clone -b master https://github.com/ninech/netbox-docker.git
$ cd netbox-docker

Once cloned, I used docker-compose to pull the images

$ docker-compose pull

And then I started the stack with

$ docker-compose up -d

The service will be up and running after a few minutes. Once it is ready, find out where to connect with

$ docker-compose port nginx 8080

Or use this snippet

$ echo "http://$(docker-compose port nginx 8080)/"

Here I use Portainer as a GUI to manage Docker, and Traefik as a reverse proxy to enable FQDN access to the services behind it. I added an entry in my DNS to route netbox.arturo.local to the Docker host IP address, on the exposed port for Nginx.
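
Traefik v1 (current at the time) discovers services through container labels; a hypothetical docker-compose override for the stack's nginx service could look like the sketch below (the label values and file layout are assumptions, not part of the netbox-docker project):

# docker-compose.override.yml (sketch)
version: '3'
services:
  nginx:
    labels:
      - traefik.enable=true
      - traefik.frontend.rule=Host:netbox.arturo.local
      - traefik.port=8080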


Spiceworks Customization

Andrew Foster at Topland Communications reached me via Upwork looking to customize and fine-tune an existing Spiceworks installation.

After a quick inspection, I decided to tackle the project by compacting the DB first. Spiceworks keeps a lot of logs about system activity; in order to clean them, the first step is to stop the Spiceworks service.

Logs are stored in two main locations:

  • C:\Program Files\Spiceworks\Log, for the Spiceworks service
  • C:\Program Files\Spiceworks\httpd\log\, where the Apache server keeps them

Once the logs were cleaned, I compacted the DB to improve performance, and started the service again.
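
For reference, the whole sequence from an elevated command prompt could look like the sketch below; the service name, the SQLite database file name, and the availability of the sqlite3 CLI are assumptions that may vary between Spiceworks versions.

net stop Spiceworks
del /q "C:\Program Files\Spiceworks\Log\*"
del /q "C:\Program Files\Spiceworks\httpd\log\*"
rem Compact the database (requires the sqlite3 CLI on the PATH)
sqlite3 "C:\Program Files\Spiceworks\db\spiceworks_prod.db" "VACUUM;"
net start Spiceworks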

Ticket rules were configured to auto-assign support tickets, saving time for the support operators.

And the user portal was customized to match the company colors and logo.


Axis CCTV and Video Management System

A security company contacted me through Upwork seeking support for a brand-new installation of an Axis Camera Station system at an educational institution. The company, Coyote Cabling from New Mexico, US, was in charge of a 52-camera installation, with an option to add 32 existing cameras at a later stage.

After some research, they decided to use Axis S1148 servers, which are really re-branded Dell servers. The S1148 comes with a ready-to-use Windows Server 2012 OS and Axis Camera Station preinstalled. This vendor-supported hardware reduced licensing costs, since the licenses are included in the server price, and avoided any incompatibilities.


ISPConfig 3 in Digital Ocean Droplet

A client wanted to set up an ISPConfig 3 control panel on a DigitalOcean droplet.

DigitalOcean works well for this kind of service, because public addresses are provisioned directly on the server. The configuration is easier to build and maintain thanks to DigitalOcean’s integrated firewall.

ISPConfig allows managing servers and hosting plans from a friendly GUI.


Dynamic DNS Server System

After a couple of successful jobs with my client Visual Link Internet LLC, they reached out to me to set up a service similar to dyndns.net. I had already developed other value-added services for their customers, like web filtering and firewalling, so I found this project very interesting and fun to do.

Cool, but what is DNS?

DNS stands for Domain Name System. Yep, it handles domains like google.com.

It is based on a distributed database that takes some time to update globally. When DNS was first introduced, the database was small and could be easily maintained by hand. As the system grew this task became difficult for any one site to handle, and a new management structure was introduced to spread out the updates among many domain name registrars.

Due to the distributed nature of the DNS system and its registrars, updates to the global DNS may take hours to propagate. Thus DNS is only suitable for services that do not change their IP address very often, not for servers running on dynamic addresses, which can change over very short periods of time.

OK, but my ISP gives me dynamic addresses, and I want to access services on my network. What can I do?

Dynamic DNS is a system that addresses the problem of rapid updates. The term is used in two ways, which, while technically similar, have very different purposes and user populations. The first is “standards-based DNS updates”, which uses an extension of the DNS protocol to ask for an update. The second is usually a web-based protocol, normally a single HTTP fetch with username and password which then updates some DNS records (by some unspecified method).

Many providers offer commercial or free Dynamic DNS service for this scenario. The automatic reconfiguration is generally implemented in the user’s router or computer, which runs software to update the DDNS service. The communication between the user’s equipment and the provider is not standardized, although a few standard web-based methods of updating have emerged over time.

Yeah, but those free services are now paid, and some have even disappeared

I know, I know. But this service can be built in-house. Using open source software, there are no fees, and the company domain name can be used to keep things professional.

This is what my client wanted, so I deployed a solution that allowed them to offer value-added services to customers and provide easy remote access. Using an open source solution based on PHP (https://github.com/nicokaiser/Dyndns) and some custom Bash scripts, I was able to deliver a stable system in a short amount of time.

The main technologies I used are Apache 2 and PHP 7 for the HTTP request and update system, and BIND9 for the DNS service.

The solution uses the standard DynDNS URL schema, so it is compatible with any device that supports it. Also, because most CPEs on the client’s network were MikroTik-based, I wrote a RouterOS script to call the update.
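
For illustration, a dyndns2-style update is a single authenticated HTTP GET; the server, hostname, and credentials below are hypothetical:

$ curl -u customer1:secret "https://ddns.example.com/nic/update?hostname=host1.ddns.example.com&myip=203.0.113.10"
good 203.0.113.10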


A new order of IT

There is a new order of IT. In recent years, a very disruptive element appeared in the field under the name of IT as a Service (ITaaS). At its top there is a crystal-clear examination and understanding of business and technology needs, and at its bottom there is a foundation built from a massive set of virtualized resources.

Now, IT administrators can find a set of previously configured building blocks that can be combined and deployed very quickly. Using this technology, the IT departments can respond to the changing needs of the business with optimized yet highly standardized solutions.

Using the ITaaS model, most information technology solutions can be deployed when they are needed, at any time, paying only for what is used. It is a shift in operational and organizational procedures to run IT like a business and a service provider.

This approach allows IT areas to be a strategic partner of the business.

This service model requires a platform or catalog comprising information about the users and the services each one consumes. It should also indicate which services a user is subscribed to, and how service usage will be charged back to the respective business unit.

Once all the services are cataloged and published, some questions arise:

  • Can the business units act upon it?
  • Is it just a static document, or a dynamic tool?
  • Can services be requested directly from within the catalog?
  • Is it as easy to use as any online store?

Most IT departments already have a set of tools to manage and monitor their infrastructure. These tools often also keep track of costs, orders, helpdesk requests and many other functions within IT. There may even be another service catalog in another division of the organization. All of these possibilities must be considered when selecting a service catalog tool.

  • Can the new catalog integrate with the existing tools?
  • Will it replace an existing tool?
  • Is process automation already bundled into the platform, or will the IT department need to engineer it? Is it scalable?
  • As with any service to a business, the catalog tool carries a cost. When updates or fixes become available, will the vendor charge for them? How is the licensing scheme calculated?

Answers to these and more questions are needed before knowing how much the new service catalog tool will really cost the organization, and how to design a business case for its acquisition. Even when all these questions are answered, it takes time to retrain the staff and restructure the habitual policies and procedures.

In the traditional IT approach, everything is organized vertically: there is a storage team, a networking team, a system administration team and a DBA team. In the ITaaS world, the approach is horizontal. There is a cloud services architecture team, and most of the infrastructure is virtualized and abstracted, so everybody on the IT team can work across different functions.

This newer horizontal organization usually produces highly skilled personnel for cloud computing implementations. This kind of employee is very rare and in high demand.

When the ITaaS model is deployed in a company or organization, there can be difficulties retaining the skilled cloud personnel. Sometimes the solution is found in service providers, because the talent now works for them.

The first step in the transformation is to understand what the organization is dealing with today. IT infrastructures are complex and usually have an unstructured approach to the delivery of IT services.

Mobile users and helpdesk requests are sometimes serviced ad hoc, often without attention to business requirements. This leads to a complex mesh of user requirements and available services that can be difficult to untangle.

Also, should the IT team try to preserve the current user experience, or should it set a breakpoint where many elements are replaced with a brand-new user experience?

In conclusion, IT teams should discover what services they are delivering today, take control of those services, and put in place a delivery platform capable of delivering both current and future services. They must also ensure that the platform can integrate with the major desktop and application delivery approaches, simplifying the user experience while meeting all security and compliance requirements.

Service delivery should not just focus on application installation; it must consider other requirements so that services can be delivered fully. The solution should integrate with the existing tools and processes, while giving enough flexibility to enable any other services that your users need.

When the service delivery platform is ready to go, the catalog of services should be distributed. All users should receive a services offer relevant to their needs and their position in the organization. Also, in an optimal service delivery catalog, users should be able to select a service and, subject to previously established rules and approvals, have it delivered directly to them in an automated process, fully provisioned and working.

A well-designed and efficient service catalog can result in huge advantages for the IT department and for the business.

  • Better communication between the IT team and users, thanks to ease of administration and the service-oriented approach
  • Improved understanding of the business requirements, issues and challenges
  • Costs are allocated to specific business units
  • Standards are established and consistency is achieved
  • IT operational costs are reduced by identifying and eliminating unnecessary IT services
  • Computing resources are reallocated to critical business systems

It is also important that, whatever platform is used to provide the catalog, the solution is adapted to the user base and to the services delivered. This information is critical for implementing chargeback, so a good service catalog platform should be capable of answering some questions.

  • What is the cost of delivering each service?
  • How much should be charged for each service?
  • Who consumes each service?
  • Who should be billed?
  • Are some services provided free of charge?

ITaaS doesn’t have to be an additional layer of complexity. IT departments and organizations should partner with a vendor who understands the process, so they can get solutions that help address each step of the way.

It can deliver huge benefits to you and your business.


UTM Solutions

An Israeli client contacted me through Upwork requesting a report on the state of UTM solutions: main features, pros and cons.


WISP Network Design

An Upwork client reached out to me seeking a set of suggestions and a brief design for a brand-new WISP network in Wisconsin, USA.

The milestones were:

  • Feasibility calculation of Wireless Links
  • Recommendation of devices
  • Analysis and re-engineering of the network topology

Two local ISPs offered dark fiber and MPLS circuits to establish the network backbone, but the client declined the offers considering the contract time span and the cost of the lease. An additional link budget was needed, and after considering several vendors, the backbone was built using Ubiquiti AirFiber 24 HD radios, which can pass around 500 Mbps in the best conditions, and even 350 Mbps under heavy rain.
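
As a rough idea of the feasibility math behind such links, free-space path loss (FSPL) grows with both distance and frequency:

FSPL (dB) = 32.44 + 20·log10(distance in km) + 20·log10(frequency in MHz)

For a hypothetical 3 km hop at 24 GHz, that gives 32.44 + 9.54 + 87.60 ≈ 129.6 dB. With, say, 20 dBm of transmit power and 33 dBi dish gains on both ends (illustrative figures, not the actual link design), the received level would be around 20 + 33 + 33 − 129.6 ≈ −43.6 dBm, before subtracting rain fade, which is severe at 24 GHz and explains the clear-sky versus heavy-rain throughput difference.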

Other vendors had backbone solutions in lower frequency bands, but those require a license to operate, so the unlicensed 24 GHz band was the best choice to avoid further costs.

The last mile was served using WiMAX gear from Telrad, who gave us great support and assistance during the initial deployment. The main reasons to select this technology over other wireless solutions were the ability to use CPEs that don’t require direct line of sight (indoor units with limited capabilities), plus a set of higher-end outdoor radios with more advanced features.