Installation

Server requirements, browser support and installation guides

Introduction

Exivity is a metering and billing software solution for public and private cloud environments that allows you to report on cloud consumption from any IT resource. Exivity enables you to apply your MSP/CSP business rules and makes any type of Pay-as-you-Go model work. It also facilitates internal charge-back and show-back requirements for Enterprise IT.

This is done by extracting IT consumption data from various endpoints and mapping it to meaningful customer-specific information such as services, customer IDs, names, and contracts.

There are four main steps involved in a successful deployment:

  1. Extract

  2. Transform

  3. Report

  4. Integrate (optional)

Extract

The Extract step defines your data sources such as:

  • APIs that return usage data, service catalogue, rate card, customer/subscriber lists, and similarly available records from public or private clouds

  • APIs or ODBC queries that return contracts, customer names, IDs, and other contextual lookup data from CMDB / CRM systems

  • Flat files on disk in CSV, JSON, or XML format

Exivity provides a rich scripting interface via its Unified Scriptable Extractor (USE) component, which facilitates integration with almost any data source. For most of the big cloud platforms, we provide template Extractor scripts as part of the product. You can also write your own USE scripts from scratch in order to integrate with custom data sources.

Transform

The Transform step provides a powerful ETL framework for processing extracted data. Using it you can merge consumption metrics, contract details, customer information, custom metadata, service definitions or any other imported information to produce an enriched and/or normalised result.

This is done using the Transcript component, which executes user-definable scripts (termed tasks) in order to produce a meaningful set of data suitable for reporting against. Often this data will feed a consolidated bill of IT based on the various different consumed services.

Transcript also allows you to define and populate services and rates, either of which may be passed through from cloud data, defined as custom offerings, or a mixture of the two.

Report

Exivity provides a modern, responsive user interface that allows you to 'slice and dice' the processed data in any way you choose. Multiple Report Definitions can be created with ease, allowing you to display both cost and usage statistics graphically and textually.

Integrate

We think that Exivity should be part of your automation landscape, where it can provide (for example) line items that can be digested by your ERP and/or invoicing system. Therefore we consider Integrate as the logical final step for any deployment where it is useful.

To this end, we offer an open and fully-featured REST API. Our GUI uses our own API for all back-end processes, meaning that all textual data shown in the Exivity GUI is also obtainable via our API.

Detailed information about the REST API can be found at https://api.exivity.com.

Prerequisites

Tutorials

For some specific use cases, we've created tutorials to get started.

How-to guides

Concepts

Announcements

General Exivity product announcements.

Announcement regarding CVE-2021-44228

Exivity does not ship any Java-powered software components. As such the Exivity software solution is not affected by CVE-2021-44228. For more information regarding this matter, please file a request at the Exivity support portal or send an e-mail to [email protected].

Extractor templates

Exivity provides a catalogue of USE extraction scripts that can be used to integrate with almost any cloud provider, hypervisor or legacy IT end point. We've published some of our templates on GitHub for your convenience.

Within every directory of that repository, the Extractor script will have the .use extension, for example GoogleCloud.use

This repository contains Extractors for VMware, Azure, Amazon and others. However, if you are currently missing an integration template and are unwilling or unable to create your own, feel free to drop us an e-mail at [email protected].

Subroutines

Example subroutines which can be used in your data Extractors

The gosub function allows invoking a subroutine in a data Extractor script. Subroutines are useful for avoiding duplicated code snippets.

Our customers and Solution Architects have created several useful subroutines over the last few years. We are keeping a small library of the most useful ones on our public docs. You can find them listed on this page.

Language

The documentation in this section assumes knowledge of the USE script basics.

check_dateformat

This Subroutine checks whether a date is in the YYYYMMDD format; if not, it raises an error.

Syntax

gosub check_dateformat ("YYYYMMDD")

Code Snippet

subroutine check_dateformat {
	match date "^(([0-9]{4}(0[1-9]|1[0-2])(0[1-9]|1[0-9]|2[0-9]|3[0-1])))" ${SUBARG_1}
	if (${date.STATUS} != MATCH) {
		print Argument error: ${SUBARG_1} is not in YYYYMMDD format
		terminate with error
	}
}
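
For example, assuming the date to be validated is passed to the Extractor as its first script parameter (a usage pattern shown elsewhere in this guide), the subroutine could be called as follows:

gosub check_dateformat (${ARG_1})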

Instances

The Instances report provides an overview of all chargeable resources. It is recommended to apply filters to this report, so the best approach is to drill down into Instances from the Accounts or Services report. This allows a user to easily view all resource instances for a particular Account, Service, or both:

VM Instance view using a Service Category and Account filter

The Instances report can take a while to load when opened without any filters, depending on the number of records in the selected reporting period.

Budget

After configuring a budget, it is possible to report on current budget spending. To do so, first browse to the Reports > Budget screen and select a Budget from the drop-down list:

Also, make sure to select a date range that matches one or more of the configured budget revisions. Once selected, the budget report will start loading. This may take a few seconds, after which a report similar to this will be shown on your screen:

Directory structure

This article describes the directory structure after the installation

Exivity uses two main disk directories when it's installed. One to store the user data and one to store the software files. These directories are called the home and program directories, respectively.

On the system where Exivity is installed, the following environment variables contain an absolute path reference to these directories: EXIVITY_HOME_DIRECTORY and EXIVITY_PROGRAM_DIRECTORY.
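
Paths used in USE Extractor scripts are treated as relative to the home directory. A minimal sketch, assuming a hypothetical file has already been extracted to <home>\system\extracted\example\usage.csv:

# Load a previously extracted file (relative to the Exivity home directory) into a buffer
buffer usage = FILE system/extracted/example/usage.csv
print {usage}
discard {usage}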

Home directory

How to store contract information with an Account in a report

Learn how to store details like contract start date/end date or contact details in a report

The Metadata feature in Exivity can be leveraged to store relevant information about contracts (like Contract ID, Customer ID, contact details, etc.) together with Accounts in report summaries. First, we need to create a Metadata definition and then associate it with one or more Accounts. These are the steps to achieve this:

  1. Navigate to the Data Pipelines > Metadata menu and click Create Metadata.

  2. Choose a meaningful name for your Metadata definition.

Services

The Services report provides the ability to report on your metered-based IT consumption costs from a Services perspective.

Once logged into the system, navigate to the Reports > Services menu. From here you will get a report grouped by the services consumed. This report can be refined using several filters.

Filters and Reporting Depth

Once you have selected your date range and report, you can start viewing your data. By default, it will show all consumed services for this report for the accounts you have permission to access. If you want to limit your view, you can change the reporting 'Depth' and then apply additional filters.

Summary

The Summary report provides a detailed breakdown of costs in an invoice-like format. The services defined in the system determine what you'll see here.

Once logged into the system, go to Reports > Summary. From here you're able to generate different kinds of detailed costs reports, which can be used for billing, chargeback and showback.

Filters and Reporting Depth

Once you have selected your date range and chosen which report to activate, you will be presented with the summary cost report as shown above. By default it will show all consumed services for an account. As accounts are hierarchical, this view will include consumed services for all children of the selected account. Therefore it is important that you select an appropriate Depth when running this report.

check_dateargument

including 1 day mode

This Subroutine checks that the FROM and TO dates are in the correct order. If only one day is entered, it will automatically fill in the second day ("1 Day Mode").

Syntax

Code Snippet

format_date

This Subroutine extracts the day, month and year from a given date in YYYYMMDD format.

Syntax

Code Snippet

validate_response

This subroutine allows validating an HTTP response held in a named buffer.

Syntax

Code snippet

clear

The clear statement is used to delete all HTTP headers previously configured using the set http_header statement.

Syntax

clear http_headers

decimal_to_ipv4

Syntax

decimal_to_ipv4 variable_name

decimal_to_ipv4 source_variable_name as destination_variable_name

discard

The discard statement is used to delete a named buffer.

Syntax

discard {buffer_name}

exit_loop

The exit_loop statement will terminate the current loop.

Either exit_loop or loop_exit may be used. Both variants work identically.

environment

The environment statement specifies the name of the environment to use for resolving global variables.

Syntax

environment name

Syntax

exit_loop

Details

The exit_loop statement will immediately terminate the current loop and script execution will jump to the statement following the } at the end of the current loop.

This can be done even if the exit_loop statement is within one or more if constructs inside the loop.

If no loop is in effect then an error will be logged and the script will terminate.
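
A minimal sketch of exit_loop inside a named loop, using a hypothetical counter variable:

var count = 0
loop poller {
    var count += 1
    print Iteration ${count}
    if (${count} == 3) {
        exit_loop
    }
}
print Loop finished after ${count} iterations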

Details

The environment statement selects the predefined environment to use for global variable lookup. It is an error to specify an environment that is not defined in the global database.

If no environment is specified, the default environment (the one specified as default in the global database) is assumed.

The environment can be changed any number of times, and a change only affects global variables that have not yet been referenced in the script; global variables that have already been resolved (copied to local variables) retain their values.
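
A minimal sketch, assuming a hypothetical environment named production exists in the global database and defines a global variable called api_endpoint:

# Resolve global variables against the "production" environment from here on
environment production
print Using endpoint ${api_endpoint}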

gosub check_dateargument ()
subroutine check_dateargument {
	# Validate that amount of input arguments is as expected
	if (${ARGC} != 2) {
		if (${ARGC} == 1) {
			print "Running in 1 day mode"
			var firstday = ${ARG_1}
			var lastday = (@DATEADD(${firstday}, 1))
		} else {
			print "This requires 1 or 2 arguments, the day to collect usage for, and the date following that day, both in YYYYMMDD format"
			terminate with error
		}
	} else {
		var firstday = ${ARG_1}
		var lastday = ${ARG_2}
	}
	
	# Validate that to date is not before from date
	if (${firstday} > ${lastday}) {
		print "TO date cannot be a date that lies before FROM date"
		terminate with error
	}
	# Validate that to date is not the same as from date
	if (${firstday} == ${lastday}) {
		print "TO date cannot be the same as FROM date"
		terminate with error
	}
}
gosub format_date ("YYYYMMDD")
subroutine format_date {
    match day "^[0-9]{6}([0-9]{2})" ${SUBARG_1}
    if (${day.STATUS} != MATCH) {
        terminate with error
    } else {
        var day = ${day.RESULT}
    }
    match month "^[0-9]{4}([0-9]{2})[0-9]{2}" ${SUBARG_1}
    if (${month.STATUS} != MATCH) {
        terminate with error
    } else {
        var month = ${month.RESULT}
    }
    match year "^([0-9]{4})[0-9]{4}" ${SUBARG_1}
    if (${year.STATUS} != MATCH) {
        terminate with error
    } else {
        var year = ${year.RESULT}
    }
}

gosub validate_response({my_buffer})
subroutine validate_response {
    if (${HTTP_STATUS_CODE} != 200) {
        print Got HTTP status ${HTTP_STATUS_CODE}, expected a status of 200
        print The server response was:
        json format ${SUBARG_1}
        print ${SUBARG_1}
        terminate with error
    }
}

The home directory should preferably be located on a dedicated volume (e.g. D:\exivity\home), and it is recommended that this volume be an SSD drive.

Program directory

The main program directory as it should be installed by the Exivity installer:

root
├─── bin                        Backend binaries
|    ├─── exivityd.exe
|    ├─── chronos.exe
|    ├─── horizon.exe
|    ├─── edify.exe
|    ├─── transcript.exe
|    └─── use.exe
├─── server                     Frontend / API dependencies
|    ├─── nginx
|    ├─── php
|    ├─── rabbitmq
|    ├─── pgsql
|    └─── redis
├─── web                        Compiled frontend repositories
|    ├─── glass
|    └─── proximity
├─── *.bat
└─── uninstall.exe
Details

The clear statement will remove all the headers currently defined, after which a new set of headers can be specified using set http_header.

Example

set http_header "Accept: application/json"
set http_header "Authorization: FFDC-4567-AE53-1234"    
set http_savefile "d:\exivity\customers.json"
buffer customers = http GET "https://demo.server.com:4444/v1/customers"

clear http_headers   # Clear headers in order to use a different Authorization: value
set http_header "Accept: application/json"
set http_header "Authorization: ABCD-EFGH-8888-1234"    
set http_savefile "d:\exivity\addresses.json"
buffer customers = http GET "https://demo.server.com:4444/v1/addresses"

Details

The decimal_to_ipv4 statement will convert a decimal value to an IPv4 address in conventional dotted-quad notation (such as 192.168.0.10 ).

The statement will verify that the format of the value to be converted is a valid decimal integer. If not, then the Extractor script will stop with an error.

The statement operates on the value of a variable and can be used in either of the ways illustrated in the Syntax section above. In the first case, the value to be converted is replaced with an ASCII representation of the decimal value and in the second case the value to be converted remains unmodified, the result being placed into the variable named after the 'as' keyword.

If the first variable does not exist, this will cause an error and the Extractor will terminate. If the second variable does not exist then it will be created automatically. If the second variable does exist then its value will be overwritten with the converted value.

The textual IP address generated by the decimal_to_ipv4 statement is identical to that represented by the same decimal value in the protocol headers of a network packet.

Example

This Extractor script snippet ...

... will produce the following output:

var x = 3232235530
print Example 1 Original: ${x}
decimal_to_ipv4 x
print Example 1 as dotted-quad: ${x}

var y = 3232235530
decimal_to_ipv4 y as converted
print Example 2 Original: ${y}
print Example 2 as dotted-quad: ${converted}
Details

The discard statement will delete the named buffer and free the memory used to store its contents. The statement takes immediate effect and any attempt to reference the buffer afterwards (at least until such time as another buffer with the same name is created) will cause the USE script to log an error and fail.

Example

var server = "https://my_json_server.com"
buffer response = http GET ${server}/generatetoken        

# Create a variable called ${secret_token} from the 'access_token'
# string in the JSON in the {response} buffer
var secret_token = $JSON{response}.[access_token]

# We no longer need the {response} buffer as the value extracted
# from it is stored in a variable
discard {response}
Example 1 Original: 3232235530
Example 1 as dotted-quad: 192.168.0.10
Example 2 Original: 3232235530
Example 2 as dotted-quad: 192.168.0.10

3. By clicking the + button, you can add fields. Add your preferred fields with their corresponding type. For example:

Adding a Metadata definition for contract information

4. Click Create to save your Metadata.

5. Now, we need to associate this Metadata definition with a report. Go to Data Pipelines > Reports and choose your preferred report. Click on the Configuration tab and you will notice that for each Key Column you have the option to add Metadata. In this example, we associated the "Contract Information" Metadata with the Customer level Account:

Associating Metadata with a report

6. Click Update to apply your changes.

7. To fill in the contract details you just created, go to the Accounts menu and select Overview. In the list of Accounts, you will notice that you can add values at the Account level associated with the Metadata:

Adding Metadata values to an Account

8. Click Update to save. Finally, the contract information for the configured Account(s) will be visible in the report.

Additional filters available on the Services report include:
  • Category - to only view certain Service Categories

  • Account - to limit your view to all services belonging to a certain account

When filtering the services for a specific account, it is recommended to start from the Accounts Report where you can drill down into a specific account. Once the account has been picked, switch the view to the services associated with that account using the buttons in the detailed report:

Account Report to Services Report drilldown

Grouping and display of Instances

The Summary report has a few options that you can turn on and off. These options allow you to tune the amount of detail shown and the grouping applied to it:

If you want to include a detailed grouping of Services and Service Categories, then ensure that the Services checkbox is enabled. The same goes for including Instance level information. The latter enables you to view resource level consumption data, such as the Virtual Machine hostname, Container or User Name.


How to configure receiving a monthly billing report

Learn how to configure receiving your monthly report by email

Exivity has a powerful Workflow engine and Notification engine. These two combined allow users to receive their billing reports monthly by email or other notification channels.

These are the steps to configure receiving your monthly billing report by email:

  1. Navigate to the Notifications menu. Click on your username at the top right corner of the screen, then select My Notifications.

This tutorial teaches you how to create a notification that sends a report to yourself, but you can also create them for other users if you have admin rights.

2. Fill in the details (Name, Title and Description) and make sure the Trigger is set to Workflow Ended.

3. Select the workflow created for publishing your report monthly.

4. Apply the trigger of the Workflow Status to Successful workflows:

5. In the Filenames section, it is possible to use regular expressions to select which files to export.

For example, if you configured the publishing of a monthly report (on the 15th day of every month) for your Departments account, the file name will be similar to report-Departments-account-range-20220115-20220315. You can select your file with a regular expression that matches names starting with the string "report-Departments" and ending with "15":

/report-Departments(.*?)15/

In this example, the (.*?) in the regular expression has the role of matching any character 0 or more times, meaning that any string can be between "report-Departments" and "15".

You may also be interested in selecting only the files that start with a certain String. For example to select the files starting with "report-Departments", you may use the following regular expression:

/^report-Departments/

Or you may choose to select the files that end with a certain string, for example, ending with "15":

/&15/

6. In this scenario, you want to send the file along with the notification only if the file was created/modified after the start time of that workflow. To achieve this, tick the Enabled box next to Since workflow start time.

7. It is possible to compress the files by ticking the Enabled box next to Compress Attachments.

The files will be sent in a PDF/CSV format.

8. Select the notification channel. You may want to send the report to your email address.

9. Finally, click the Create button.

How to automatically trigger a monthly billing report

Learn how to configure publishing a report on a monthly basis

Exivity enables customers to generate summaries of their cloud spend on a schedule. You may choose to publish a report of your resource usage and costs quarterly, monthly or daily. This article guides you through the steps of publishing a monthly report. In order to trigger the publishing automatically, you must use the Workflow engine.

1. Navigate to the Data Pipelines > Workflow menu and select Workflows.

2. Click Create Workflow.

3. Provide a meaningful name and description.

4. Click Add Schedule.

4.1. The Type selection should be monthly.

4.2. Furthermore, the interval field Run Every should be set to 1 Month.

4.3. Select the start date (Effective from), Start time and Time zone.

5. Click the Add Step button and set the type to Publish report.

6. Choose your report from the drop-down list.

7. The Timeout setting allows you to choose the interval to wait before trying to execute this step again in the event of a failure.

8. Finally, click Create.

In combination with the Notification engine, you can receive your monthly report on a notification channel (for example: email).

How to update your license

Learn how to add a new license when it has expired

In order to have a fully functional solution, you need a valid license.

When the product is installed without a proper license, a warning is shown to all users in the GUI, and the Report functionality is disabled:

These are the steps to reset your license in case it has expired:

1. Navigate to the Administration > Settings menu.

2. Select the License tab and paste the new license serial key under the New license box.

3. Click the Update button. Afterwards, you should see the Valid status along with the expiry date.

Accounts

The Accounts report provides the ability to drill down into your metered-based IT consumption costs. The way you've created your Report determines which values you can zoom into.

Once you've logged into the system, navigate to the Reports > Accounts menu. Here are a few key parameters you can use to define how your report is generated:

Date Selection

The date selector is important to limit the scope of data you're focusing on.

First, select the date range you are interested in. This can be a single month, a 3-month time period, half a year, a full year, or a custom date range.

Report

Your Exivity solution can have more than one report definition created. If this is the case, you will need to select the appropriate report containing the data you wish to examine. An end user will only see reports listed that they have permissions to, and the first of those is automatically selected.

Drill Down, Services & Report Depth

After selecting a date range and report you can start drilling down into your data in a number of ways:

  1. Moving your mouse over one of the accounts will reveal a toolbar as shown above. For each account, you have the option to click on the Drilldown control in that toolbar. This will do the following:

    1. Descends one level deeper into the report

    2. Updates your view of the data to reflect that deeper level

    3. Sets the 'Parent' filter to the account you selected to drill down on

Adjustments

Adjustments allow a user to create account-specific rate adjustment policies. An Adjustment policy supports applying a discount or a premium using one of these modifiers:

  1. a certain amount of money (e.g. $100)

  2. a certain quantity (e.g. 100 GB/hours)

  3. a percentage (e.g. 10%)

This Adjustment can then be applied to a single service, multiple different services, or one or more service categories.

Create an Adjustment Policy

To create a new Adjustment policy for an account, follow these steps:

  1. From the menu on the left, select Services > Adjustments

  2. Then select the Account from the list of accounts for which you want to create an adjustment policy

  3. After selecting the account, click Add Policy, and provide a meaningful name for your policy in the right screen where it says Adjustment name

Extract

Introduction

Extraction is the process by which USE (Unified Scriptable Extractor) retrieves data from external locations. The following types of data sources are supported:

APIs

Typically, usage data is retrieved from the API or APIs provided by the cloud (or clouds) for which reports need to be generated. This is usually a REST API accessed via HTTP/S.

USE script

A USE script is required for USE to operate. Further information can be found via the links below:

An introductory overview of the scripting language: Script basics

A reference guide for the USE scripting language: Language

How to parse XML and JSON data: Parslets

Template scripts that can be used as starting points for common data sources: Extractor templates
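
To illustrate how these pieces fit together, below is a minimal sketch of a USE Extractor; the endpoint URL, token and output path are hypothetical and only statements documented elsewhere in this guide are used:

# Request usage data for the day passed as the first script parameter
set http_header "Accept: application/json"
set http_header "Authorization: Bearer my-api-token"
buffer usage = http GET "https://api.example.com/v1/usage?date=${ARG_1}"

if (${HTTP_STATUS_CODE} != 200) {
    print Got HTTP status ${HTTP_STATUS_CODE}, expected a status of 200
    terminate with error
}

# Pretty-print the JSON and write it to the extracted folder for the Transform step
json format {usage}
save {usage} as system/extracted/example/${ARG_1}_usage.json
discard {usage}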

basename

The basename statement is used to extract the filename portion of a path + filename string.

Syntax

basename varName

basename string as varName

Details

Given a string describing the full path of a file, such as /extracted/test/mydata.csv the basename statement is used to identify the filename (including the file extension, if any) portion of that string only. If there are no path delimiters in the string then the original string is returned.

The basename statement supports both UNIX-style (forward slash) and Windows-style (backslash) delimiters.

When invoked as basename varName, the varName parameter must be the name of the variable containing the string to analyse. The value of the variable will be updated with the result so care should be taken to copy the original value to a new variable beforehand if the full path may be required later in the script.

As a convenience in cases where the full path needs to be retained, the result of the operation can be placed into a separate variable by using the form basename string as varName where string is the value containing the full path + filename and varName is the name of the variable to set as the result.

When invoked using basename string as varName if a variable called varName does not exist then it will be created, else its value will be updated.

Examples

Example 1

The following script ...

... will produce the following output:

Example 2

The following script ...

... will produce the following output:

encode

The encode statement is used to base16 or base64 encode the contents of a variable or a named buffer.

Syntax

encode base16|base64 varName|{buffer_name}

Details

The encode statement will encode the contents of an existing variable or named buffer, replacing those contents with the encoded version.

The result of encoding the contents will increase their length. With base16 encoding the new length will be exactly double the original, while the exact size increase for base64 encoding depends on the length of the contents being encoded.

When encoding a variable, if the size of the result after encoding exceeds the maximum allowable length for a variable value (8095 characters) then the USE script will fail and an error will be returned.

Encoding an empty variable or buffer will produce an empty result.

Example

The following script ...

... produces the following output:

escape

The escape statement is used to escape quotes in a variable value or the contents of a named buffer.

Syntax

escape quotes in varName|{bufferName} [using escape_char]

Details

If a variable value or named buffer contains quotes then it may be desirable to escape them, either for display purposes (to prevent USE from removing them before rendering the data as output) or in order to satisfy the requirements of an external API.

The escape statement will precede all occurrences of the character " with a specified escape character (backslash by default) as shown in the example below. This operation is not just temporary - it will update the actual contents of the variable or named buffer.

The escape statement does not take into account the context of existing quote characters in the data. Running it multiple times against the same data will add an additional escape character each time to each occurrence of a quote.

Example

Given an input file called 'escapeme.txt' containing the following data:

The following script:

... will produce the following output:

gosub

The gosub keyword is used to run a named subroutine.

Syntax

gosub subroutineName([argument1, ... argumentN])

The argument list may span multiple lines, so long as any given argument is contained on a single line and ends with a comma, e.g.:

Details

The subroutineName provided to the gosub statement must be that of a subroutine defined elsewhere in the script using the subroutine statement.

If any argument contains white-space or a comma then it must be quoted:

gosub getfile("directory with spaces/filename.txt")

It is permitted to call a subroutine from within another subroutine, therefore gosub can be used within the body of a subroutine. This may be done up to 256 levels in depth.

The opening bracket after subroutineName may or may not be preceded with a space:

gosub getfile ("filename.txt")

To call a subroutine with no parameters, use empty brackets:

gosub dosomething()

Example

Please refer to the example in the documentation for the subroutine statement.

unzip

The unzip statement is used to unzip the data in a named buffer.

Syntax

unzip {buffer_name}

Details

The unzip statement will extract a single file from a ZIP archive stored in a named buffer. In order for this to succeed, the buffer must have been previously populated using the buffer statement, and the data within the buffer must be a valid ZIP file.

Only ZIP files are supported. To extract GZIP files, use gunzip.

A warning will be logged, the buffer left intact and the script will continue to execute if any of the following conditions arise:

  • The buffer is empty or does not contain a valid ZIP archive

  • The ZIP archive is damaged or otherwise corrupted

  • More than 1 file is present within the archive

After the unzip statement completes, the buffer will contain the unzipped data (the original ZIP archive is discarded during this process).

The filename of the unpacked file is also discarded, as the resulting data is stored in the buffer and can subsequently be saved using an explicit filename as shown in the example below.

Example

print

The print statement is used to display text to standard output while a USE script is executing.

Syntax

print [-n] word|{buffer_name} [... word|{buffer_name}]

Details

The print statement enables user-defined output to be generated during the execution of a USE script. When retrieving data from external sources it may take some time for a lengthy series of operations to complete, so one use of the print statement is to provide periodic status updates during this time.

The print statement will process as many arguments as it is given, but at least one argument is required. If the first argument is -n then no newline will be output after the last argument has been echoed to standard output, else a newline is output after the last argument.

Arguments that are normal words will be sent to standard output followed by a space. Arguments referencing a named buffer will result in the contents of the buffer being displayed.

Note that print will stop output of data from a named buffer as soon as a NUL (ASCII value 0) character is encountered

Binary data

It is not recommended that print is given a buffer containing binary data to display, as when echoed to a console on screen this is likely to result in various control codes and other sequences to be sent to the console which may have undesired side effects.

Example
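
A minimal sketch, assuming a hypothetical script parameter and a named buffer called {response} that was populated earlier in the script:

print Collecting data for ${ARG_1}
print -n The server response was: 
print {response}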

User interface

This article describes the functionalities available in the User Interface

The graphical user interface of Exivity is a purely client-side application, which means it runs inside your web browser. It communicates with the Exivity REST API to obtain data records, report data and general configuration. This means all functionality available in the GUI can also be accessed programmatically.

Throughout this documentation, the graphical user interface will be referred to as the Glass Interface.

The interface allows you to do the following:

  • Develop extractors

Rates

The 'Rates' screen allows you to configure manual rates for services that do not have a rate provided with their data source. Services must at least have a default Global Rate and may also have optional customer-specific rates.

Rate types can be configured as either manual or automatic. In both cases the rate is applied per unit of consumption.

Automatic services obtain the rate (and/or interval value) from a column you specify, whereas manual services allow a rate to be explicitly specified. If proration is enabled, the charge on the cost reports is reduced based on the number of days in the month that the service was used.

By default each service has a global rate configured. This will be applicable to all accounts that consume this service unless a customer-specific rate has been configured for that account, in which case it will take precedence over the global rate.

generate_jwt

The generate_jwt statement is used to generate an RFC 7515-compliant JWT (JSON Web Token) which can be used, for example, for Google Cloud OAuth 2.0 Server to Server Authentication.

Syntax

generate_jwt key key component1 [... componentN] as result

json

The json statement is used to format JSON in a named buffer.

Syntax

json format {buffername}

ipv4_to_decimal

Syntax

ipv4_to_decimal variable_name

ipv4_to_decimal source_variable_name as destination_variable_name

pause

The pause statement is used to suspend execution of a USE script for a specified time.

Syntax

pause delaytime

gunzip

The functionality described in this article is not yet available. This notice will be removed when the appropriate release is made.

The gunzip statement is used to inflate a GZIP file

save

The save statement is used to write the contents of a named buffer to disk.

Syntax

save {buffer_name} as filename

loglevel

While executing a USE script, various messages are written to a logfile. The loglevel option determines the amount of detail recorded in that logfile.

Syntax

loglevel loglevel

terminate

The terminate statement will exit the USE script immediately.

Syntax

terminate [with error]

return

The return statement is used to exit a subroutine at an arbitrary point and return to the calling location.

Syntax

return

Files

A file on the local file system or on a shared volume. This is usually a CSV, JSON or XML file.

Exivity

In some cases it is useful to retrieve information from Exivity itself, such that accounts and usage data that were created historically can be incorporated into the daily processing.

Database

Arbitrary SQL queries can be executed against an SQL server either via a direct connection string or via an ODBC DSN.

Web

Arbitrary HTTP queries can be invoked in order to retrieve information from any web page accessible from the Exivity server.

Encoding a variable ...
Encoded base16 result is: 5465787420746F20626520656E636F646564
Encoded base64 result is: VGV4dCB0byBiZSBlbmNvZGVk
Encoding a buffer ...
Encoded base16 result is: 5465787420746F20626520656E636F646564
Encoded base64 result is: VGV4dCB0byBiZSBlbmNvZGVk
"this "is some text" with
some "quotes" in it"

Details

The generate_jwt statement performs the following actions:

  • encodes all components as Base64URL

  • concatenates all components using a dot separator (.)

  • hashes the concatenated result using SHA256

  • signs the hash with a provided PEM-encoded key using the RSA algorithm

  • encodes the resulting signature as Base64URL

  • builds JWT by concatenating the two results using a dot separator (.)

  • stores the final result in the variable specified by the result parameter

The RSA key needs to be in PEM format. PEM format requires the header and footer to be on separate lines, so it is important to separate the key contents with ${NEWLINE} as shown below:

var key = "-----BEGIN PRIVATE KEY-----${NEWLINE}Key-data-goes-here{$NEWLINE}-----END PRIVATE KEY-----"

Example

To acquire a Google Cloud OAuth 2.0 access token:

Details

In many cases an API or other external source will return JSON in a densely packed format which is not easy for the human eye to read. The json statement is used to re-format JSON data that has been previously loaded into a named buffer (via the buffer statement) into a form that is friendlier to human eyes.

After the JSON has been formatted, the buffer can be saved or printed for subsequent inspection

Example

Given the following single packed line of JSON in a named buffer called myJSON:

The following USE script fragment:

will result in the following output:

json format {myJSON}
print {myJSON}

Details

The ipv4_to_decimal statement will convert an IPv4 address in conventional dotted-quad notation (such as 192.168.0.10 ) to a decimal value.

The statement will verify that the format of the value to be converted is a valid IPv4 address. If not, then the Extractor script will stop with an error.

The statement operates on the value of a variable and can be used in either of the ways illustrated in the Syntax section above. In the first case, the value to be converted is replaced with an ASCII representation of the decimal value and in the second case the value to be converted remains unmodified, the result being placed into the variable named after the 'as' keyword.

If the first variable does not exist, this will cause an error and the Extractor will terminate. If the second variable does not exist then it will be created automatically. If the second variable does exist then its value will be overwritten with the converted value.

The decimal value generated by the ipv4_to_decimal statement is identical to the value that represents the IP address in the protocol headers of a network packet.

Example

This Extractor script snippet ...

... will produce the following output:

var x = 192.168.0.10
print Example 1 Original: ${x}
ipv4_to_decimal x
print Example 1 as decimal: ${x}

var y = 192.168.0.10
ipv4_to_decimal y as converted
print Example 2 Original: ${y}
print Example 2 as decimal: ${converted}
Details

The delaytime parameter is the number of milliseconds to wait before continuing. A value of 0 is allowed, in which case no delay will occur.

The pause statement may be useful in cases where an external data source imposes some form of rate limiting on the number of queries that can be serviced in a given time-frame, or to slow down execution at critical points when debugging a long or complex script.

Example

This example makes use of script parameters which are provided when USE is executed. For more information on script parameters please refer to the Extract introduction.

Syntax

gunzip filename as filename

gunzip {bufferName} as filename

Details

The gunzip statement can be used to extract the contents of a GZIP archive containing a single file. The GZIP archive may be a file on disk or may be the contents of a named buffer.

It is not possible to inflate GZIP data directly in memory, but the same effect can be achieved by extracting GZIP data in a named buffer to disk, and then loading the extracted data back into the named buffer as shown in the example below.

All paths and filenames are treated as relative to the Exivity home directory

Example

Details

The save statement will write the contents of a named buffer to filename. As well as providing a means of direct-to-disk downloading this can be useful for retrieving server responses and capturing them for later examination, whether it be for analysis, debugging or audit purposes.

If the destination file already exists then it will be overwritten.

If the filename argument contains a path component, then any directories not present in the path will be created. If the path or the destination file cannot be created then an error will be logged and the USE script will fail.

The save statement is similar in effect to the http_savefile option supported by set, in that data from a server is written to disk. There is one important distinction however:

  • When set http_savefile has been used to specify a file to save, the next HTTP request will stream data to the file as it is received from the server

  • When a buffer statement is used to capture the server response, and a subsequent save statement is used to write it to disk, all the buffered data will be written to the file immediately

Example

Details

Normally a USE script will finish execution when an error is encountered or when the end of the script file is reached, whichever comes first.

When the terminate statement is encountered, the script will finish at that point. No statements after the terminate statement will be executed.

By default, the script will exit with a success status, however it may be useful to exit deliberately when an error such as an invalid or unexpected response from an HTTP session is detected. Adding the keywords with error to the statement will cause it to exit with an error status.

Example

set http_savefile extracted/serverdata.txt
buffer serverdata = http GET "https://server.com/uri"
if (${HTTP_STATUS_CODE} != 200) {
    print Got HTTP status ${HTTP_STATUS_CODE}, expected a status of 200
    print The server response was:
    print {serverdata} 
    terminate with error
} else {
    print Received data from server successfully
}
Details

A subroutine will automatically return to the location it was called from when the end of its body is reached. However, it may be desirable to explicitly exit the subroutine at some other point in which case the return statement is used.

The return statement cannot be used to return a value to the calling code (this should be done via the use of variables as described in the subroutine statement documentation)

Example
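
A minimal sketch of an early return, assuming a hypothetical subroutine that loads and prints a file passed as its single argument:

subroutine show_file {
    if (${SUBARG.COUNT} != 1) {
        print This subroutine expects a single filename argument
        return
    }
    buffer contents = FILE ${SUBARG_1}
    print {contents}
    discard {contents}
}

gosub show_file ("system/extracted/example/usage.csv")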

var path = "extracted/test/testdata.csv"

# Copy the path as we'll need it later
var file = ${path}

# Note: use the NAME of the variable, not the value
basename file

# The variable called 'file' now contains the result
print The basename of the path '${path}' is '${file}'

var path = "testdata.csv"
var file = ${path}
basename file

print The basename of the path '${path}' is '${file}'
The basename of the path 'extracted/test/testdata.csv' is 'testdata.csv'
The basename of the path 'testdata.csv' is 'testdata.csv'
var path = "extracted/test/testdata.csv"
basename ${path} as file
print The basename of the path '${path}' is '${file}'
The basename of the path 'extracted/test/testdata.csv' is 'testdata.csv'
var testdata = "Text to be encoded"

print Encoding a variable ...
# Base16-encode a variable
var encode_me = ${testdata}
encode base16 encode_me
print Encoded base16 result is: ${encode_me}

# Base64-encode a variable
var encode_me = ${testdata}
encode base64 encode_me
print Encoded base64 result is: ${encode_me}

print Encoding a buffer ...
# Base16-encode a buffer
buffer encode_buf = data ${testdata}
encode base16 {encode_buf}
print Encoded base16 result is: {encode_buf}

# Base64-encode a buffer
buffer encode_buf = data ${testdata}
encode base64 {encode_buf}
print Encoded base64 result is: {encode_buf}
buffer test = FILE system/extracted/escapeme.txt
escape quotes in {test}
print {test}

var testvar = "\"This is \"a test\" string\""
escape quotes in testvar using \"
print ${testvar}
\"this \"is some text\" with
some \"quotes\" in it\"

""This is ""a test"" string""
gosub subroutineName (argument1,
      argument2,
      argument3,
      )
buffer zippedData = FILE system/extracted/my_source/${dataDate}_usage.zip
unzip {zippedData}
save {zippedData} as system/extracted/my_source/${dataDate}_usage.csv
discard {zippedData}
var server = "https://my_json_server.com"
print Obtaining token from server
buffer response = http GET ${server}/generatetoken        
print Token received:
print {response}

# Create a variable called ${secret_token} from the
# 'access_token' string in the JSON in the {response} buffer
var secret_token = $JSON{response}.[access_token]

# We no longer need the {response} buffer as the value
# extracted from it is stored in a variable
discard {response}
print Original server response now discarded
var private = "-----BEGIN PRIVATE KEY-----${NEWLINE}key goes here${NEWLINE}-----END PRIVATE KEY-----"
var email = "[email protected]"
var url = "https://www.googleapis.com/oauth2/v4/token"
var scope = "https://www.googleapis.com/auth/cloud-platform"

var now = ${UNIX_UTC}
var expiry = (${now} + 3600)

var header = "{\"alg\":\"RS256\",\"typ\":\"JWT\"}"
var payload = "{\"iss\":\"${email}\",\"scope\":\"${scope}\",\"aud\":\"${url}\",\"iat\":\"${now}\",\"exp\":\"${expiry}\"}"

generate_jwt key ${private} ${header} ${payload} as JWT

# Make HTTP request according to https://developers.google.com/identity/protocols/OAuth2ServiceAccount
set http_header "Content-Type: application/x-www-form-urlencoded"
set http_body data "grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=${JWT}"
buffer token = HTTP POST "${url}"

if (${HTTP_STATUS_CODE} != 200) {
	print Got HTTP status ${HTTP_STATUS_CODE}, expected a status of 200
	print The server response was:
	json format {token} 
	print {token}
	terminate with error
}

var access_token = $JSON{token}.[access_token]
print Access token: ${access_token}
{"title":"Example JSON data","heading":{"category":"Documentation","finalised":true},"items":[{"id":"01","name": "Item number one","subvalues":{"0":1,"10":42,"100":73,"1000":100},"category":"Example data","subcategory":"First array"},{"id":"02","name":"Item number two","subvalues":{"0":10,"10":442,"100":783,"1000":1009},"category":"Example data","subcategory":"First array"}]}
{
  "title": "Example JSON data",
  "heading": {
    "category": "Documentation",
    "finalised": true
  },
  "items": [
    {
      "id": "01",
      "name": "Item number one",
      "subvalues": {
        "0": 1,
        "10": 42,
        "100": 73,
        "1000": 100
      },
      "category": "Example data",
      "subcategory": "First array"
    },
    {
      "id": "02",
      "name": "Item number two",
      "subvalues": {
        "0": 10,
        "10": 442,
        "100": 783,
        "1000": 1009
      },
      "category": "Example data",
      "subcategory": "First array"
    }
  ]
}
Example 1 Original: 192.168.0.10
Example 1 as decimal: 3232235530
Example 2 Original: 192.168.0.10
Example 2 as decimal: 3232235530
var first = ${ARG_1}
var last = ${ARG_2}
var last += 1
var x = ${first}

# Retrieve a number of files from http://server.local/?.dat where ? is a number
# Wait for 1 second between each file
loop slurp {
    var url = http://server.local/datafiles/${x}.dat
    set http_savefile data/${x}.dat
    print Getting datafile ${x}
    http GET ${url}
    if (${HTTP_STATUS_CODE} == 200) {
        print 200 OK
    }
    if (${HTTP_STATUS_CODE} == 404) {
        print Data file ${x} missing on server
    }
    var x += 1
    if (${x} == ${last}) {
        exit_loop
    }
    pause 1000   # Wait for 1 second
}
print ${x} files were downloaded
terminate
# Download an archive and extract it into a named buffer
buffer archivedata = http GET http://server/archived.csv.gz
gunzip {archivedata} as system/extracted/extracted.csv
buffer archivedata = FILE system/extracted/extracted.csv

# Download an archive and extract it to disk, automatically deriving the
# output filename from the input filename based on the .gz extension
var save_path = system/extracted
var archivefile = extracted.csv.gz
set http_savefile ${save_path}/${archivefile}
http GET http://server/archived.csv.gz

match csv_name "(.*)\.gz$" ${archivefile}
if (${csv_name.STATUS} != MATCH) {
    print WARNING: Downloaded file does not end in .gz and will not be extracted
} else {
    gunzip "${save_path}/${archivefile}" as "${save_path}/${csv_name.RESULT}"
    print Extracted file: "${save_path}/${csv_name.RESULT}"
}
var server = "https://my_json_server.com"
buffer response = http GET ${server}/generatetoken        

# Save a copy of the original server response for diagnostic purposes
save {response} as "${baseDir}\diagnostics\token.json"

# Create a variable called ${secret_token} from the 'access_token'
# string in the JSON in the {response} buffer
var secret_token = $JSON{response}.[access_token]

# We no longer need the {response} buffer as the value extracted
# from it is stored in a variable
discard {response}
#
# Download two files into named buffers
# using a subroutine to do so
#
gosub getfile(data1, "http://intranet/datadump1.json")
gosub getfile(data2, "http://someotherserver/anotherfile.xml")

# (Script to do something with the data goes here)

#
# Argument 1: the name of the buffer to store the data
# Argument 2: the URL of the file to download
#
subroutine getfile {
    if (${SUBARG.COUNT} != 2) {
        print "Error: This subroutine requires two arguments
        return
    } 

    buffer ${SUBARG_1} = http GET "${SUBARG_2}"
    # There is an implicit 'return' here
}
  • As well as the ability to drill down you can also view the Services associated with an account on any level of your report. Note that this will change your view from the Accounts report to the Services report.

  • Date selector for reports
    Select a report definition
    Drilldown into your accounts report
  • Provide the Start date, by selecting the initial month when this adjustment policy is applied.
  • Provide the End date, by selecting the month when this adjustment policy will be discontinued. This is optional since an adjustment policy can be applied permanently.

  • Select which Service or Service Category this policy is applied to. You can select multiple using the check-boxes that are provided. It is also possible to select all available service categories, which effectively applies the discount to all possible services.

  • Select a Type for this adjustment. This can be either a Discount or a Premium

  • Select the Target, meaning: is this Adjustment targeting the total Charge or the total Quantity of the selected service(s)?

  • Select the Difference setting, to indicate an Absolute value (i.e. 100 units, or 100 dollars) or a Relative value (such as 10%)

  • Lastly, provide the Adjustment value. In the example shown in the image above, a value of '10' is provided in the Amount field, which will adjust the total charge by -10% given the provided parameters.

  • When you're done, click the Add Policy button. Your changes are now applied to all charge-related reports.

  • Create transformer (ETL) tasks

  • Configure report definitions

  • Run graphical usage & costs reports

  • Run textual usage & costs reports

  • View a detailed breakdown of costs in an invoice-like format in the Summary report

  • Schedule various tasks and execute them at a specific date by creating workflows

  • Map missing data by creating Lookups

  • Store information related to a specific account or service by adding Metadata

  • Create notifications for certain events, like for example the publishing of a report

  • Access and manage your Datasets

  • General configuration

  • Manage users & roles

  • White labeling

  • More features are added on a regular basis.

    Edit global rate for a manual service

    A manual service can have up to 3 rate-related values that can be changed: the unit rate, the charge interval value and the COGS rate. To change these values, go to the Services > Rates screen and click on the service name for which you want to change the global rate value:

    To change the rate values of this service, consider the following:

    1. Effective date is the date from when this rate is applied to the service. A service can have one or multiple revisions. You may add new rate revisions by using the Add Revision button. Existing rate revision dates can be changed using the Change Date option

    2. The Per Unit rate value is the amount charged for the service per (portion of the) configured charge interval. In this example, if this were a daily service charged at 1 euro per Gigabyte of database usage, and each day a 100 GB database is consumed, a value of € 100 will be charged per day (and € 3100 if used for the entire month of December)

    3. It is possible to configure a COGS rate for this service. This is applied the same way as the Per Unit rate

    4. To delete an invalid or wrong revision, use the Remove Revision button. Do bear in mind you cannot delete the last rate revision for a service

    5. To save your changes, which will also initiate a re-preparation of the data, click the Save Revision button (see the documentation on report preparation to learn more)

    6. If you are planning to make more changes to other services in the same report definition, use the Save Revision > Without Preparing option. This will avoid running the re-preparation several times, and allows you to start the re-preparation only after you've made all of the required rate changes.

    Details

    The table below shows the valid values for the loglevel argument. Either the numeric level or the label can be specified. If the label is used then it must be specified in CAPITAL LETTERS.

    The log levels are cumulative, in that higher log-level values include lower level messages. For example, a level of INFO will cause FATAL, ERROR, WARN and INFO level messages to be written to the log.

    Level	Label	Meaning
    0	DEBUGX	Extended debugging information
    1	DEBUG	Debugging information
    2	INFO	Standard informational messages
    3

    The loglevel statement takes immediate effect and may be used multiple times within a USE script in order to increase or decrease the logging level at any time.
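
    A minimal sketch, assuming the default level is in effect and extra detail is only wanted around a specific, hypothetical request:

    loglevel DEBUG
    buffer usage = http GET "https://api.example.com/v1/usage"
    loglevel INFO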

    Notification for sending a monthly report by using the Successful Workflow status
    Adding a regex for selecting files that match
    Scheduling a monthly report
    Publishing the selected report monthly
    Updating the license
    Valid license

    Single-node

    Learn how to deploy Exivity on a single-node architecture

    Exivity can be easily deployed on any system running Windows Server 2016 or later.

    Single Node System Architecture

    In order to deploy a single node Exivity system, be aware of the following system architecture:

    All components can be installed on a single node. However, even in a single node deployment, it might be desirable to run the PostgreSQL database and RabbitMQ system on a different system. Apart from installing PostgreSQL from the Exivity installer, the Exivity software is also compatible with any PostgreSQL-compatible database engine (PostgreSQL on Linux, Amazon RDS, CockroachDB, etc.) in order to achieve High Availability or Load Balancing. The same goes for the RabbitMQ component.

    Deploying Exivity on a Single Node

    After starting the Exivity installer and providing a valid license, ensure to have all components selected and click Next:

    Then make sure to provide a valid path for the Exivity program files:

    And select a folder for the Exivity home files:

    It is recommended to configure a dedicated volume with SSD performance for the Exivity home folder location

    Provide a custom administrator username and password, or leave the default:

    Then provide the details for the PostgreSQL database. The installer will configure a local PostgreSQL database by default:

When installing PostgreSQL on a single node host, it is recommended to use a dedicated volume for the PostgreSQL database folder (referred to as 'PGDATA')

    In case you prefer to use a remote PostgreSQL database system, ensure to deselect 'Install Local PSQL Engine' and provide your PSQL server credentials:

When using a PostgreSQL database on a remote host, the database and user must have been created beforehand. To create the database, ask your database administrator to execute a database create statement similar to the one below:

    CREATE DATABASE exdb WITH OWNER = exadmin TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1;
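-- Note (illustrative): exdb, exadmin and the locale settings above are examples only; adjust them
-- to match your environment. The owner role must already exist and be able to log in.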

    When you are finished configuring your PostgreSQL database settings, click the Next button to configure RabbitMQ. To use a remote RabbitMQ instance, deselect the 'Install Local RabbitMQ Engine' and provide the appropriate hostname, username, password, vhost and TCP port. In case you require TLS/SSL towards your RabbitMQ instance, select that checkbox as well:

    Once the installation is finished, ensure to check 'Start the Exivity Windows Services' to start the Exivity services after clicking Finish.

    Executing a Single Node Silent Installation

Follow the steps below to execute an unattended setup using the silent installer flags supported by the setup program.

    1 - installing Exivity on a single node with a custom PostgreSQL data path

    The below example will silently install Exivity with mostly default settings, except having the Exivity program, home, and PGDATA in separate locations:

    2 - installing Exivity on a single node using a remote PostgreSQL database and RabbitMQ

    The below example will silently install Exivity while using a remote PostgreSQL database:

    Azure Market Place

    Learn how to deploy Exivity on Azure Market Place

    Introduction

    Apart from installing Exivity in any on-premises environment, Exivity can also be deployed from the Azure Market Place (AMP). Deploying Exivity on AMP is straightforward, and can be finished within a few minutes via your Azure Portal.

    Azure Marketplace Offering

Log in to your Azure Portal and then go to the Marketplace to search for the Exivity offer:

    Once you've selected the Exivity offering, you should be presented with the following screen:

    After clicking the Create button, you will be redirected to the VM deployment wizard

    Deployment Wizard

    1. Fill in a Windows user/pass and pick your deployment Resource Group:

    Make sure to write down this username and password, as you will need these when connecting to the Exivity Windows server using the Remote Desktop Protocol.

2. Try to pick a recommended VM size that has enough CPUs and memory (see the Server requirements section for general system requirements). Smaller machines are possible, but will affect performance:

3. You may select any additional options, but none are required for running Exivity successfully, so you may skip this page simply by clicking the OK button:

4. Review the summary and click Create to deploy your Exivity VM:

    This may take a few minutes. You may review the status of the Virtual Machine in your VM list:

    Write down the Public IP address once it is available. Optionally you may configure a custom DNS name to have an easy way to connect.

    Connecting to your Exivity instance

You can log in to your Exivity instance with RDP, but after deployment you should also be able to connect using the public IP address or DNS name of your Exivity instance at the following default URL:

    • https://<Your_Public_IP>:8001

    The default admin username is admin with password exivity.

By default no data is loaded into the system, so you'll have to create a new Extractor for obtaining consumption data and a Transformer to process that data. A Report Definition is then created to be able to report on your consumption metrics and costs.

    Next steps

A couple of getting started guides are provided in this documentation, but feel free to drop us an e-mail or create a ticket in our support portal. We will then assist you in getting started for your specific use case.

    AWS Market Place

    Learn how to deploy Exivity on AWS Market Place

    Introduction

    Exivity can be deployed from the AWS Market Place allowing you to have a functional Exivity solution in a matter of minutes. This tutorial will get you up and running.

    AWS Marketplace Offering

Log in to your AWS portal and access the Exivity offering.

    1. Click on Continue to Subscribe.

    2. Read our Terms and Conditions, and when ready, click on Continue to Configuration.

3. Select the Region where you want to deploy Exivity and click on Continue to Launch.

4. In Choose Action, select Launch through EC2 and click on Launch to access the Deployment Wizard.

    Deployment Wizard

1. In the first screen, try to pick a recommended VM size that has enough CPUs and memory (see the Server requirements section for general system requirements). When you are done with your selection, click on Next: Configure Instance Details.

2. In this section, you can select your VPC configuration, or leave the default values. When you are done with your configuration, click on Next: Add Storage.

3. In the Storage section, two drives are recommended; you can adjust the Volume Type and the Size (GiB) parameters. When ready, click on Next: Add Tags.

4. In the Tags section, include the tags that are meaningful to you; as a minimum, a Name tag is recommended. Click on Next: Configure Security Group.

5. In the Security Group section, you can leave the default recommended security group or add more rules if needed. Click on Review and Launch.

6. Review the details and click on Launch, then select your preferred Key Pair to connect to the instance.

In a few minutes your instance will be deployed; you can track the progress in your EC2 Dashboard:

    Write down the Public IP address / Public DNS and the Instance ID once they are available.

    Connecting to your Exivity instance

You can log on to your Exivity instance with RDP, but after deployment you should also be able to connect using the public IP address or DNS name of your Exivity instance at the following default URL:

    • https://<Your_Public_DNS>:8001

The default admin username is admin and the password is the Instance ID of your EC2 instance.

By default no data is loaded into the system, so you'll have to create a new Extractor for obtaining consumption data and a Transformer to process that data. A Report Definition is then created to be able to report on your consumption metrics and costs.

    Next Steps

A couple of getting started guides are provided in this documentation, but feel free to drop us an e-mail or create a ticket in our support portal. We will then assist you in getting started for your specific use case.

    VMware vCloud

This article describes how to report on VMware vCloud consumption with Exivity

    Introduction

    When deploying the vCloud Extractor template for Exivity, some input and configurations are required from your vCloud environment. The following process must be completed in order to report on vCloud consumption:

    1. Create Exivity vCloud user (vCloud < 9.1)

    2. Create Exivity vCloud user (vCloud >= 9.1)

    3. Configure an Extractor

    4. Configure a Transformer

    5. Create your Report

    Create Exivity vCloud user (vCloud < 9.1)

For environments with a vCloud version prior to 9.1, Exivity needs a user with the sysadmin role on the system ORG. Please follow this procedure:

    • Click the Administration tab and click Users in the left panel.

    • Click New, fill in the required details.

    • Note the username and password.

    • Finally, click OK

    Create Exivity vCloud user (vCloud >= 9.1)

    For environments with vCloud version 9.1 or higher, you can create a user with more fine-grained permissions. Exivity needs a user with a reader role on the system ORG. Please follow this procedure:

    First you will need to create a new custom role for the Exivity user:

    • From the main menu, select Administration.

    • In the left panel, under Access Control, click Roles

    • Click New.

    • Click Save.

    Once you have the Role created, you need to set up a new user and assign the Role previously created:

    • On the VMware Cloud Service toolbar, click the VMware Cloud Services icon and select Identity & Access Management.

    • Click Add Users.

    • On the Active Users tab, fill in the details of the user you want to add to the system organization.

    Configure an Extractor

    To create the Extractor, browse to Data Sources > Extractors in the Exivity GUI and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For vCloud:

    • Pick vCloud_Extractor_AdminVM from the list

    • Provide a name for the Extractor in the name field

    • Click the Create button.

    Once you've created the Extractor, next go to the Variables tab:

    Fill in all required variables with the values that you gathered in the previous step. You have the option to encrypt them. Click on Update.

Once you've filled in all details, go to the Run tab to execute the Extractor by clicking Run Now:

    Configure a Transformer

    Once you have successfully run your vCloud Extractor, you can create a Transformer template via Data Sources > Transformers in the Exivity GUI. Browse to this location and click the Create Transformer button. Make any changes that you feel necessary and then select the run tab to execute it for a single day (today) as a test.

    Create a Report

Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu option Reports > Definitions.

    Select your vCloud dataset, and your preferred Reporting Columns to break down the report (we recommend only Org_name and VDC for the default report). When you are ready, click on Create.

    Once you have created the report, you should then click the Prepare Report button after first making sure you have selected a valid date range from the date selector shown when preparing the report.

    VMware vCenter

    This article describes how to report on VMware vCenter consumption with Exivity

    Introduction

    When deploying the vCenter Extractor template for Exivity, it is required to configure a user with appropriate permissions. Additionally, the following process must be completed in order to report on vCenter consumption:

    1. Create Exivity vCenter user

    2. Configure an Extractor

    3. Configure a Transformer

    4. Create your Report

    Exivity supports Out of the Box integration with vCenter version 6.5 and higher. For versions before 6.5, please contact [email protected].

    Create Exivity vCenter user

    Exivity needs a user with a reader role in order to retrieve consumption data. Please follow this procedure:

    • In your vCenter Configuration Manager go to Configuration > Local Users and Groups > Users.

    • Create a new User, insert username and password.

    • Take note of the username and password, they will be used later on to configure Exivity

    Configure an Extractor

    To create the Extractor, browse to Data Sources > Extractors in the Exivity GUI and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. To create a new Extractor based on the vCenter template follow these steps:

    • Provide a name for the Extractor in the Name field above

    • Pick vCenter 6.5 (VM Inventory REST API) template from the list

    • Click the Create button.

    Once you've created the Extractor, go to the Variables tab:

    Fill in all required variables with the values that you gathered in the previous step. You have the option to encrypt a variable in case it contains sensitive information (i.e. password) by clicking the lock icon on the right of each variable field. When finished, click Update.

Once you've filled in all details, go to the Run tab to execute the Extractor by clicking Run Now:

    If the variables are correct and your vCenter is reachable for Exivity, you should get a successful result.

    Configure a Transformer

    Once you have successfully run your vCenter Extractor, you can create a Transformer template via Data Sources > Transformers in the Exivity GUI. Browse to this location and click the Create Transformer button. Make any changes that you feel necessary and then select the run tab to execute it for a single day (today) as a test.

    Create a Report

Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu option Reports > Definitions:

    Select your vCenter dataset, and your preferred Reporting Columns to break down the report (we recommend only cluster_name for the default report). When you are ready, click on Create.

    Once you have created the report, you should then click the Prepare Report button after first making sure you have selected a valid date range from the date selector shown when preparing the report.

    Once this is done you should be able to run any of the Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Budget management

    It is possible to define a Budget on any level within your organization. This enables different audiences to monitor costs across different clouds. When combining this feature with Notifications, a business owner can now set budget thresholds to inform customers, departments, or project owners when they are reaching their configured budget.

    Create a budget

    In order to create a budget, navigate to Accounts > Budgets. Then click the Create button to create a new budget. In this menu, a couple of items are presented:

    Global options

    Global options apply to the entire budget, and apply to the following items:

    • Interval: determine whether the budget is applied Monthly, Quarterly or Yearly

• Apply to: a configured budget is by default applied to the total Charge of the configured Interval. It is however also possible to create a budget that is applied to the Cost of Goods Sold (COGS) instead

    Revisions

    A budget configuration can potentially change year over year, and therefore it is possible to create different budget revisions. Each revision can have the following settings applied:

    • Revision start date: the start Month, Quarter or Year for this budget revision

    • Filter by: typically a budget is applied to a single or multiple Accounts. It is however possible to add additional filtering on the Service or Service Category. By applying Service based filtering, it is possible to limit the scope for a configured budget

    Accounts

When a budget is created, it is possible to set a budget amount for one or multiple accounts. In case each account for which a budget is set also has one or more levels of child accounts, it is possible to control how the budget 'trickles down' the organizational structure:

    • Account selection: it is required to select an Account from any level in your Report Definition. It is then possible to set a budget value (i.e. $100000) in the grey box next to the Account.

  • The same applies to any Child Accounts for which you may set/overwrite a budget. You may add a Child Account to the list by clicking the green button left of the account name. NOTE: a child account may also be excluded from a budget by clicking the Exclude checkbox right of the name of the Account.

• Remainder: using the Remainder drop-down it is possible to control the distribution of the budget towards child accounts. The options are:

  • even: each child account will get an even amount of budget. Example: consider a top-level account 'ACME Corp' with a monthly budget of $100.000. When 'ACME Corp' has 10 child 'Business Unit' accounts, each of these 'Business Units' will get an even share of the budget: in this case, $10.000.

  • shared: when the distribution of the remainder is set to shared, the consumption of individual child accounts is ignored, as long as the total spending of all child accounts does not go beyond the configured budget. Example: consider a top-level account 'ACME Corp' with a monthly budget of $100.000. When 'ACME Corp' has 10 child 'Business Unit' accounts, these 'Business Units' combined should not use more than $100.000.

  • none: not distributing any remainder is only applicable when overriding the budget percentage for each child account. This means it is required to set a distinct budget percentage manually for each child account.

To create the Budget, click the Create button. It is now possible to view the spending under budget via the Budget Report.

    Changing a budget

    An existing budget can be changed by navigating to Accounts > Budgets and then clicking the budget which you want to change.

    Changing an existing budget revision

    Once a budget has been saved, you will be unable to change the start date unless you edit the Budget Revision:

    Once you have enabled edit mode for an existing Budget Revision, you will be able to change the start date:

    After making these changes you will need to save these by clicking the blue checkbox on the right. You can also cancel your change using the blue x-sign, or delete the revision by clicking the red recycle bin.

    Adding a new budget revision

    Similar to changing a Budget Revision, it is possible to add a new revision that holds a different start date with the revised budget plan. To do this, you will need to click the green + sign:

    encrypt

    This article assumes knowledge of variables.

    The encrypt statement is used to conceal the value of a variable, such that it does not appear in plain text in a USE script.

    Syntax

    encrypt varname = value_to_be_encrypted

    Details

    The encrypt statement differs from other statements in that it takes effect before the execution of a USE script begins. In this regard, it is effectively a directive to the internal script pre-processor which prepares a script for execution.

    Comments, quotes and escapes in the value to be encrypted are treated as literal text up until the end of the line.

    White-space following the value to be encrypted will therefore be included in the encrypted result.

    White-space preceding the value to be encrypted will be ignored and will not be included in the encrypted result.

    Encrypting one or more variables

    Any variable prefixed with the word encrypt will be encrypted by the pre-processor and the script file itself will be modified as follows:

    • All text (including trailing white-space) from the word following the = character up to the end of the line is encrypted

    • The encrypted value is base64 encoded

• The original variable value in the USE script is substituted with the result

• The encrypt keyword for that variable is changed to encrypted

• The USE script is overwritten on disk in this new form

    This process is repeated for all variables preceded by the encrypt keyword.

    As a side effect of the encryption process, it is not currently possible to encrypt a value that begins with a space or a tab. This functionality will be implemented in due course.

    Using encrypted variables

Once encrypted, a variable can be used just like any other; the only requirement is that the encrypted keyword preceding its declaration is not removed or modified.

    To change the value of an encrypted variable simply replace the declaration altogether and precede the new declaration with encrypt. Upon first execution, the USE script will be updated with an encrypted version of the variable as described above.
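For instance (the variable name and values below are illustrative), to change an already-encrypted password you would replace the whole declaration with a new plain-text one preceded by encrypt:

# Existing, already-encrypted declaration
encrypted var password = b0Sa29tyL+M8wix/+JokjMCdeMwiY9n5
# Replace it with a new declaration; the pre-processor encrypts it again on the next run
encrypt var password = my_new_secret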

    Encrypted values can only be used on the system that they were created on. If an encrypted value is moved or copied to a different installation of Exivity then any attempt to reference or decrypt it will result in something other than the original value.

    Example

    Firstly, create the script as usual, with encrypt preceding any variables that are to be encrypted:

    Secondly, run the script. Prior to execution, the script will be automatically modified as shown below:

    hash

    The hash statement is used to generate a base-16 or base-64 encoded hash of data stored in a variable or named buffer.

    Syntax

hash sha256 [HMAC [b16|b64] key] target|{target} as result [b16|b64]

hash md5 target|{target} as result [b16|b64]

    Details

    The hash statement uses the contents of target as its input and places the final result into result. The SHA256 and MD5 hash algorithms are supported.

    If target is surrounded with curly braces like {this} then it is taken to be the name of a memory buffer and the contents of the buffer will be used as input. Otherwise, it is treated as the name of the variable, the value of which will be hashed.

    By default, the resulting hash is base-16 encoded and the result placed into the variable specified by the result argument.

    result is the name of the variable to put the output into, and not a reference to the contents of that variable. This is why it is not ${result}

If the optional HMAC key arguments are provided when the hash type is sha256 then the secret in key will be used to generate an HMAC-SHA-256 result. The optional b64 or b16 argument following the HMAC option indicates that the key is base-64 or base-16 encoded. By default, a clear-text key is assumed.

    If the optional b64 argument is used (base64 may also be specified) after the result variable, then the result will be base-64 encoded.

    The optional b16 argument (base16 may also be used) after the result variable is provided for completeness, but need not be specified as this is the default encoding to use.
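As a brief sketch of the named-buffer form (the file path and names are illustrative), the following hashes the contents of a buffer and base-64 encodes the result:

buffer payload = FILE "system\extracted\example\data.json"
hash sha256 {payload} as payload_hash b64
print Base-64 SHA256 of the buffer contents: ${payload_hash}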

    Example

    Running the script:

    results in the following output:

    loop

    The loop statement executes one or more statements multiple times.

    Syntax

loop label [count] [timeout timelimit] {
    # Statements
}

The opening { may be placed on a line of its own if preferred but the closing } must be on a line of its own.

    Details

The loop statement will loop indefinitely unless one of three exit conditions causes it to stop. These are as follows:

    1. The number of loops specified by the count parameter are completed

    2. At least as many milliseconds as are specified by the timelimit parameter elapse

3. An exit_loop statement explicitly exits the loop

    In all three cases when the loop exits, execution of the script will continue from the first statement after the closing } marking the end of the loop.

    In the event that both count and timelimit parameters are specified, the loop will exit as soon as one or other of the limits have been reached, whichever comes first.

    Both the count and timeout parameters are optional. If omitted then the default for both of them will be infinite.

    The loop statement will automatically create and update a variable called loop_label.COUNT which can be referenced to determine how many times the loop has executed (as shown in the example below). This variable is not deleted when the loop exits which means that it is possible to know how many times any given loop executed, even after the loop has exited.

    Any specified timeout value is evaluated at the end of each execution of the loop and as such the actual time before the loop exits is likely to be a short time (typically a few milliseconds) greater than the specified value. In practice this should be of no consequence.

    Example

    The loop shown above will result in the following output:

    lowercase

    Overview

    The lowercase statement is used to set all letters in a variable or named buffer to lower case.

    Syntax

lowercase var_name

lowercase {buf_name}

    Details

    The single parameter to the lowercase statement determines whether or not the text to be normalised to lower case is located in a variable value or in a named buffer.

    If the parameter starts and ends with { and } respectively then the text to be processed is in the named buffer identified by the contents of the curly braces.

    If not, then the text to be processed is the value of the variable named by the parameter.

    Examples

    The following script:

    ... produces the following output:

    uppercase

    Overview

    The uppercase statement is used to set all letters in a variable or named buffer to upper case.

    Syntax

uppercase var_name

uppercase {buf_name}

    Details

    The single parameter to the uppercase statement determines whether or not the text to be normalised to upper case is located in a variable value or in a named buffer.

    If the parameter starts and ends with { and } respectively then the text to be processed is in the named buffer identified by the contents of the curly braces.

    If not, then the text to be processed is the value of the variable named by the parameter.

    Examples

    The following script:

    ... produces the following output:

    Server requirements

    Learn about the server requirements and browser support

    Server

Exivity can be installed on any Microsoft Windows Server 2016 or higher server in your on-premises data center or in the cloud. Depending on the amount of data, Exivity recommends the following system configuration:

    Azure Stack

This article describes how to report on Azure Stack consumption with Exivity

    Introduction

When deploying the Azure Stack Extractor template for Exivity, some configuration is required within your Azure Stack environment and a lookup file needs to be created. The following process must be completed in order to report on Azure Stack consumption:

1. Create an Exivity Enterprise Application in your Azure AD for authentication

2. Configure a rate card lookup file

3. Configure an Extractor

4. Configure a Transformer

5. Create your Report

    Azure EA

    Introduction

When deploying the Azure EA Extractor template for Exivity, some configuration is required within your Azure EA environment. The following process must be completed in order to report on Azure EA consumption:

1. Create an Access Key and Secret in your Azure EA portal

2. Configure the Azure EA Extractor

3. Configure your Azure EA Transformer

4. Create a Report definition

    How to automatically send workflow errors as webhooks to a monitoring system

    Learn how to automatically send workflow errors as webhooks

Exivity offers the possibility to create notifications for workflows. Users can get notified when a workflow has finished successfully or when it has failed. In some scenarios, you may want to get informed by a Monitoring System about any Failed Workflow alerts. This is possible through webhooks.

    What are Webhooks?

    Webhooks are automated messages sent from one app to another when something happens.

    In simple words

    Manage

The services screen gives a user the ability to view and change the available services in the service catalogue of the Exivity deployment. When creating new services, it is required to use a Transformer with the services statement.

    Obtaining details of a Service

    To view the details of a service that has already been created, click on one of the services listed in the Services > Overview screen:

    It is possible to add a new service or change an existing one using the following parameters:

    Subscriptions

    Subscriptions enable users to manage one-off and recurring transactions for charging non-metered services.

    • One-off Subscriptions may be used for managing server setup fees or applying a one-off correction.

    • Recurring Subscriptions are typically used to charge a specific service for a certain quantity every month or every year.

Subscriptions are always applied to a leaf account on the deepest level of a report definition.

    Configuration

    The Data pipelines menu allows an admin of the Exivity solution to manage USE 'Extractors'. USE has its own language reference, which is fully covered in a separate chapter of this documentation.

    As described in the , you are free to use your editor of choice to create and modify USE Extractors. However, the GUI also comes with a built-in USE Extractor-editor.

    Creating Extractors

To create a new USE Extractor, follow these steps:

    match

The match statement is used to search either a specified string or the contents of a named buffer using a regular expression.

    Syntax

match label expression target

    if

    The if statement is used to conditionally execute one or more statements. In conjunction with an optional else statement it can cause one or other of two blocks of statements to be executed depending on whether an expression is true or false.

    Syntax

if (expression) {
    # Statements
} [else {
    # Statements
}]


| Size | CUPR | Web & Backend | RabbitMQ | PSQL | CPU | RAM | Storage |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Small | < 5000 | 1 – 2 node(s) | 0 – 1 node | 0 – 1 node | 4 cores | 12 GB | 200 GB |
| Medium | < 10000 | 1 – 2 node(s) | 0 – 1 node | 0 – 1 node | 6 cores | 16 GB | 400 GB |
| Large | < 15000 | 2 – 4 nodes | 1 instance* | 1 instance* | 8 cores | 24 GB | 600 GB |
| X-Large | > 15000 | 4 or more nodes** | 1 instance* | 1 instance* | 8 cores | 32 GB | TBD** |

* a managed, highly available and high-performance cluster is recommended
** consult with Exivity for advice on appropriate sizing of extra-large environments

    Each node can be either a physical or virtual machine. The CPU and RAM requirements provided in the table above are defined per node whereas the Storage requirements are to be considered for the complete UMB deployment as a whole and should be hosted on shared storage (i.e. SMB/NFS). An operating system will need to be pre-installed on every node. Web and Backend nodes require a version of Windows 2016 Standard or higher to be installed. In scenarios where the RabbitMQ and/or PostgreSQL nodes are deployed on dedicated or clustered environments, the operating system guidelines specific to those applications are to be considered.

    Client

    The Exivity front-end supports the following desktop browsers:

    • Google Chrome v59+

    • Microsoft Edge v41+ (EdgeHTML 16+ / Blink 80+)

    • Opera v46+

    • Mozilla Firefox v65+ (support added in Exivity v2.10.0)

    • Apple Safari v10.1+ (support added in Exivity v3.2.7)

    We aim to provide the fastest metering and billing solution available today, and this means we have to rely on modern (web) technologies. Part of our speed comes from pre-processing the raw data, and part comes from having almost all processed data available right in the browser, and streaming the missing pieces on request.

    To efficiently and reliably achieve this we use some very specific technologies not yet available in all browsers. When they do catch up, we'll fully support those browsers.


    var test = "123HellO WOrlD!!"
    print ${test}
    
    uppercase test
    print Upper variable: ${test}
    
    lowercase test
    print Lower variable: ${test}
    
    buffer testbuf = DATA "123HellO WOrlD!!"
    
    uppercase {testbuf}
    print Upper buffer: {testbuf}
    
    lowercase {testbuf}
    print Lower buffer: {testbuf}
    var test = "123HellO WOrlD!!"
    print ${test}
    
    uppercase test
    print Upper variable: ${test}
    
    lowercase test
    print Lower variable: ${test}
    
    buffer testbuf = DATA "123HellO WOrlD!!"
    
    uppercase {testbuf}
    print Upper buffer: {testbuf}
    
    lowercase {testbuf}
    print Lower buffer: {testbuf}
    Details

    The three parameters serve the following purposes:

| Parameter | Value |
| --- | --- |
| label | A unique name to associate with this match |
| expression | The regular expression to apply to the target |
| target | The data to search using the expression |

    Label

    The label associates a meaningful name to the search. Once the match has been attempted, two variables will be created or updated as follows:

| Variable | Possible values | Notes |
| --- | --- | --- |
| label.STATUS | MATCH, NOMATCH, ERROR | The result of applying the expression (ERROR infers an invalid expression) |
| label.RESULT | (A string) (Empty value) | The text matched by the subgroup in the expression, if any |

    These variables can be checked after the match in order to determine the result status and access the results.

    Expression

The regular expression must contain one or more characters enclosed in brackets - ( ... ) - the contents of which are termed a subgroup. If a successful match is made then the portion of the target text that was matched by the subgroup will be returned in the label.RESULT variable.

    Target

    The target determines whether a supplied string or the contents of a named buffer are searched. By default the parameter will be treated as a string.

    If the string contains white-space then it must be enclosed in double quotes

    If the target argument is surrounded with curly braces - { ... } - then it is taken to be the name of a buffer and the expression will be applied to the contents of that buffer.

    Regular expressions are generally used for searching ASCII data. Searching binary data is possible but may be of limited usefulness.

    Examples

    Search the contents of a variable for the text following the word 'connection:' with or without a capital 'C':

    Search a text file previously retrieved from a HTTP request to locate the word 'Error' or 'error'


    Details

    If the condition evaluates to true, then the first block of statements is executed, and the second block (if present) is skipped over. If the condition evaluates to false then the first block of statements is skipped and the second block (if present) is executed.

    The opening { character at the start of each block may be placed on a line of its own if preferred but the closing } must be on a line of its own.

    Multiple conditions can be used in a single expression and combined with the Boolean operators && or || (for AND and OR respectively) so long as each condition is enclosed in braces. For example:
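(A minimal sketch; the variable names http_status and response_format are illustrative.)

if ((${http_status} == 200) && (${response_format} == "json")) {
    print The response can be processed
}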

    Example

    Given the source JSON in a file called example.json, the following USE script:

    will produce the following output:

    # ---- Start Config ----
    encrypt var username = admin
    encrypt var password = topsecret
    var server = "http://localhost"
    var port = 8080
    var api_method = getdetails
    # ---- End Config ----
    
    set http_authtype basic
    set http_username ${username}
    set http_password ${password}
    
    buffer {response} = http GET ${server}:${port}/rest/v2/${api_method}
    # ---- Start Config ----
    encrypted var username = AGF5dU0KJaB+NyHWu2lkhw==
    encrypted var password = b0Sa29tyL+M8wix/+JokjMCdeMwiY9n5
    var server = "http://localhost"
    var port = 8080
    var api_method = getdetails
    # ---- End Config ----
    
    set http_authtype basic
    set http_username ${username}
    set http_password ${password}
    
    buffer {response} = http GET ${server}:${port}/rest/v2/${api_method}
    var hash_me = "This is the data to hash"
    var my_secret = "This is my secret key"
    
    # SHA256
    hash sha256 hash_me as result
    print The SHA256 hash of '${hash_me}' in base-16 is:
    print ${result}${NEWLINE}
    
    hash sha256 hash_me as result b64
    print The SHA256 hash of '${hash_me}' in base-64 is:
    print ${result}${NEWLINE}
    
    # HMACSHA256
    hash sha256 hmac ${my_secret} hash_me as result
    print The HMACSHA256 hash of '${hash_me}' (using '${my_secret}') in base-16 is:
    print ${result}${NEWLINE}
    
    hash sha256 hmac ${my_secret} hash_me as result b64
    print The HMACSHA256 hash of '${hash_me}' (using '${my_secret}') in base-64 is:
    print ${result}${NEWLINE}
    The SHA256 hash of 'This is the data to hash' in base-16 is:
    1702c37675c14d0ea99b7c23ec29c36286d1769a9f65212218d4380534a53a7a
    
    The SHA256 hash of 'This is the data to hash' in base-64 is:
    FwLDdnXBTQ6pm3wj7CnDYobRdpqfZSEiGNQ4BTSlOno=
    
    The HMACSHA256 hash of 'This is the data to hash' (using 'This is my secret key') in base-16 is:
    cf854e99094ea5c2a88ee0901a305d5f25dfb5a0f0905eec703618080567b4b5
    
    The HMACSHA256 hash of 'This is the data to hash' (using 'This is my secret key') in base-64 is:
    z4VOmQlOpcKojuCQGjBdXyXftaDwkF7scDYYCAVntLU=
    loop example 10 {
        This is loop number ${example.COUNT}
    }
    This is loop number 1
    This is loop number 2
    This is loop number 3
    This is loop number 4
    This is loop number 5
    This is loop number 6
    This is loop number 7
    This is loop number 8
    This is loop number 9
    This is loop number 10
    =================================
    USE: Unified Scriptable Extractor
    =================================
    123HellO WOrlD!!
    Upper variable: 123HELLO WORLD!!
    Lower variable: 123hello world!!
    Upper buffer: 123HELLO WORLD!!
    Lower buffer: 123hello world!!
    
    USE script finished successfully
    
    
    match varsearch "[Cc]onnection: (.*)" ${variable}
    if (${varsearch.STATUS} = MATCH) {
        print Connection string is: ${varsearch.RESULT}
    } else {
        print No match found
    }
    match error_check "([Ee]rror)" {text_data}
    if (${error_check.STATUS} == MATCH) {
        print Found: ${error_check.RESULT}
    } else {
        print No error was found
    }
    if (($JSON{example}.[status] == "OK") || (${override} == "enabled")) { 
        # Execute if the status is "OK" or if we have set ${override} to "enabled"
    }
    var JSON_dir = "examples\json"
    buffer example = FILE "${JSON_dir}\doc.json"
    
    var title = 
    
    # For every element in the 'items' array ...
    foreach $JSON{example}.[items] as this_item
    {
        # Extract the item name and id
        var item_name = $JSON(this_item).[name]
        var sub_id = $JSON(this_item).[id]
    
        if (${sub_id} == 02) {
            # For every child of the 'subvalues' object ...
            foreach $JSON(this_item).[subvalues] as this_subvalue
            {
                # Get the subvalue name and value
                var sub_name = ${this_subvalue.NAME}
                var sub_value = ${this_subvalue.VALUE}
    
                # Render an output line
                print ${title} (id:${sub_id} -> Item: ${item_name} -> Subvalue:${sub_name} = ${sub_value} 
            }
        } else {
                print Skipping unwanted id: ${sub_id}
            }
    
    }
    discard {example}
    terminate
        Skipping unwanted id: 01
        Example JSON data (id: 02) -> Item: Item number two -> Subvalue:0 = 10
        Example JSON data (id: 02) -> Item: Item number two -> Subvalue:10 = 442
        Example JSON data (id: 02) -> Item: Item number two -> Subvalue:100 = 783
        Example JSON data (id: 02) -> Item: Item number two -> Subvalue:1000 = 1009


    Enter a name and, optionally, a description for the new role.
  • Select the rights that you want to associate with the role. Exivity needs all the View Rights in the different tabs and also Perform Administrator Queries under the General tab.

  • In the Role in organization text box, assign the role previously created.
    vCloud Extractor

    Click on Create.

  • Under Users and Groups, click Add, select the previously created user.

  • On the Select Users and Groups dialog, click Add, and then click OK.

  • Add a Reader role in the dropdown.

  • Finally, click on OK.

  • vCenter Extractor
    Prepare your Report



  • Creating an Enterprise Application

    For Exivity to authenticate with Azure Stack, you will need to create an application in the Azure AD where you have registered your Azure Stack management node:

    Example Exivity Azure AD Application

    Make sure to write down the Application ID and its corresponding Secret, since you will need these when configuring the Extractor later.

    When you create this application in your Azure AD make sure it has (at least) the Reader Role in your Default Provider Subscription:

    Providing Reader Role access to the Exivity Application

    Create a rate card

    As Microsoft does not provide rate card information via the Azure Stack Consumption API you will only obtain usage metrics from Azure Stack for all of the Meter IDs that are mentioned here by Microsoft.

Exivity provides a template rate card that you can use for creating your own rates. Please bear in mind that these rates are fictional, thus you should update them with your preferred values. However, to get started, you can use the file linked above by placing it as a CSV file in your Exivity home folder at the following location:

    %EXIVITY_HOME_PATH%\system\extracted\AzureStack\rates\azure_stack_example_rates.csv

    Once loaded into the system using a Transformer you will be able to change the rates easily through the GUI. This will also enable you to test any draft rates.

    Configure Extractor

    To create the Extractor, browse to Data Sources > Extractors in the Exivity GUI and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For Azure Stack:

    • Pick Azure_Stack_Extractor_(App+Secret) from the list

    • Provide a name for the Extractor in the name field

    • Click the Create button.

Once you've created the Extractor, go to the first tab: Variables

    Azure Stack Variables

Fill in all required variables marked within the red box in the above screenshot. If you don't know some of the required GUIDs, most of these can be obtained by browsing to the Azure Stack management node URL:

• https://adminmanagement.<your.domain.com>/metadata/endpoints?api-version=2015-01-01

    Another way to obtain some of this information is using the Diagnostics button in your management portal:

    Diagnostics file from Azure Stack management portal

    When you click the Show Diagnostics link, it should download a JSON file containing most of the parameters you'll need, such as Provider GUID, Audience GUID etc.

    Once you've filled in all details, go to the Run tab to execute the Extractor for a single day:

    Executing the Extractor manually for a single day

    The Extractor requires two parameters in yyyyMMdd format:

    • from_date is the date for which you wish to collect consumption data

    • to_date should be the date immediately following from_date

    These should be specified as shown in the screenshot above, separated with a space.
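For example, to extract consumption data for 1 March 2023 the arguments would be entered as follows (the dates are illustrative):

20230301 20230302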

    When you click the Run Now button, you should get a successful result.

    Configure Transformer

    Once you have successfully run your Azure Stack Extractor, you can create a Transformer template via Data Sources > Transformers in the Exivity GUI. Browse to this location and click the Create Transformer button. Make any changes that you feel necessary and then select the run tab to execute it for a single day as a test.

    Make sure that when running the Transformer you select custom range in the drop-down menu labelled Run for and select the same day as for which you have extracted consumption data in the previous step.

    Create a Report

Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu option Reports > Definitions:

    Creating a Report Definition

    Select the column(s) by which you would like to break down the costs. Once you have created the report, you should then click the Prepare Report button after first making sure you have selected a valid date range from the date selector shown when preparing the report.

    Prepare your Report

    Once this is done you should be able to run any of Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Extraction template


  • Creating an Access Key & Secret

    For Exivity to authenticate with Azure EA, you will need to create an access key and secret in the Azure EA Portal. Also, you will need to find your enrollment number. To do this, login to your Azure EA Portal on https://ea.azure.com and navigate to the Reports menu:

    Creating an API Access Key & Secret for your Azure EA environment

    Just under the Windows logo in the left menu section you will find your enrollment number which you will need to provide later when configuring the Extractor. To create the Access Key & Secret, click on Download Usage and then on API Access Key. This brings you to the menu where you can manage your Access Keys and corresponding secret which you will need to configure the data Extractor.

    Configure Extractor

    To create the Extractor, browse to Data Sources > Extractors in the Exivity GUI and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For Azure EA:

    • Pick Azure EA from the list

    • Provide a name for the Extractor in the name field

    • Click the Create button.

    Once you've created the Extractor, go to the first tab: Variables

    Azure EA Extraction Template

    Fill in all variables in the above screenshot and feel free to encrypt any sensitive data using the lock symbol on the right.

    Once you've filled in all details, go to the Run tab to execute the Extractor for a single day:

    Executing the Extractor manually for a single day

    The Extractor requires two parameters in yyyyMMdd format:

    • from_date is the date for which you wish to collect consumption data

    • to_date should be the date immediately following from_date

    These should be specified as shown in the screenshot above, separated with a space.

    When you click the Run Now button, you should get a successful result.

    Configure Transformer

    Once you have successfully run your Azure EA Blob Extractor, you can create a Transformer template via Data Sources > Transformers in the Exivity GUI. Browse to this location and click the Create Transformer button. Make any changes that you feel necessary and then select the run tab to execute it for a single day as a test.

    Make sure that when running the Transformer you select custom range in the drop-down menu labelled Run for and select the same day as for which you have extracted consumption data in the previous step.

    Create a Report

Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu option Reports > Definitions:

    Creating a Report Definition

    Select the column(s) by which you would like to break down the costs. Once you have created the report, you should then click the Prepare Report button after first making sure you have selected a valid date range from the date selector shown when preparing the report.

    Prepare your Report

    Once this is done you should be able to run any of Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Extraction template
: A webhook URL is provided by the receiving application (in this case, the Monitoring System), and acts like a phone number that the other application (in this case, Exivity) can "call" when an event happens. The data about the event is sent to the webhook URL in either JSON or XML format and is known as the Payload. As seen in the diagrams above, there is no need to have continuous polling between the Monitoring System and Exivity when using webhooks.

    The Monitoring System, receiving the notification through the chosen URL, not only knows an event occurred on another system, but it can also react to it.

    How to test your webhooks

    In a real case scenario, you would have a webhook endpoint provided by your Monitoring System of choice. For the purpose of this tutorial, we are going to simulate that by using a mock endpoint.

    One powerful tool that can get you started with the configuration of a webhook channel is Hookdeck. Hookdeck is a platform that helps you manage your webhooks and create mock endpoints for testing purposes.

    All webhook events received through Hookdeck are monitored and readily available to you via the Dashboard or API.

    Create a Hookdeck connection

    You can easily create your first webhook with their Get Started guide.

    During the setting up process, make sure the Destination is configured as a mock endpoint, typically https://mock.hookdeck.com

    After you have created your connection, it should look similar to this one:

    Example of connection between Exivity and Hookdeck

    Create a webhook channel

The next step is to create a notification channel that uses this webhook endpoint.

1. To achieve this, open the Exivity GUI, navigate to the Admin > My notifications menu and select the Channels tab.

    2. Provide a Channel name for your channel and select the Channel type as Webhook.

    3. Go to your Hookdeck Dashboard and retrieve the URL for the webhook. This URL can be easily found by navigating to the Connections menu, under the Source name:

    Copy the Webhook URL

    4. Fill in the Webhook URL with the URL provided by Hookdeck:

    Creating a Webhook Channel

    5. Click the Create button.

    Create a workflow notification

    Next, create a notification for your workflow and in the Channel section, add the Webhook Channel that you just created.

    Also, make sure you fill in a suggestive Name, Title and Description, since you want to identify the Error status easily in your Monitoring System (or in this case, in the Hookdeck events Dashboard):

    Example of notification for a workflow that fails

    Test case using Hookdeck

Now that you have created a webhook channel and a notification for your workflow using that channel, you can test it to see that your Hookdeck mock endpoint catches your notifications successfully.

1. Ideally, none of your workflows would fail, but for testing purposes, we are going to make one of them fail. Select one of your workflows, and add a faulty step. For example, add an Execute type of step and fill in the following command: ls -la

    Obviously, the ls -la command is destined for Linux/Unix operating systems, not for Windows (which is the default OS for your Exivity installation), therefore it is going to fail.

    Example of a command that fails in your workflow

    2. Click Update to add the new step, then you may also click the Run Now button if you don't want to wait until your Workflow is scheduled to execute.

    3. You should immediately see a new event in the Hookdeck events section in the Dashboard:

    Hookdeck events for your Connection

The 200 OK status code indicates that the connection between Exivity and Hookdeck was successful; it does not mean that the Workflow itself succeeded.

    4. Select the most recent event (which is at the top of the list) and on the right side of your screen, you will see a window containing the Response/Payload in a JSON format. In the body section you can see the message from the notification you created to let you know when your workflow has failed:

    The notification message got sent successfully to the Hookdeck endpoint

    Analogously, the same steps you have seen in this how-to guide will apply to sending your failed workflow notifications to a Monitoring System, except you will not use the mock endpoint provided by Hookdeck, but the webhook URL provided by your Monitoring System.

    Continuous Polling
    Webhook
    • The description contains the friendly name for this service

    • The unique key value of this service (see service)

    • The time-stamp when the service was created

    • The time-stamp when the service was updated

    • The DataSet where this service relates to

    • Where to obtain the service name from (in the header or in the data). The value will be used for the service description (see 1)

    • The source column that has the consumed quantity

    • The Instance column refers to the chargeable instance column value (i.e. VM ID) which is required for automatic Services

    • The interval defines the frequency of how often this service is being charged. Meaning: automatic (every occurrence/record/hour), daily or monthly

• When using proration, this checkbox will be enabled. Proration takes into account whether to charge for only a portion of a consumption interval. For example: 10 days of consumption for a monthly service with a configured rate of € 90 per unit and proration enabled will result in a line item of € 30 for that service's monthly charge

    • The Billing Type provides information whether this is a service that has manual (using manually provided, adjustable-rate value) or automatic (using rate column) rates configured

    • COGS (Cost of Goods) of a service will have its own rate configuration, which can be either manual/automatic per unit or manual/automatic per interval

    Changing or Deleting a Service

In case you need to change the configuration of an already populated service, the GUI enables you to do so. To change an existing service you will need to make sure that you have first selected the appropriate report from the Report Selector at the left top of the screen. To change the configuration or delete a service, you will need to follow these steps:

    • Navigate to the Services > Overview menu and click the white Edit button at the top of the service list. The system will warn you that any changes made to the existing service, may require you to re-Prepare the currently selected Report , found at Data pipelines > Reports.

    • If you have confirmed the warning message, you will be able to select one, multiple or all of the services within the currently selected Report. You can then select the Delete button next to the Edit button, to delete all selected services.

    • If you want to change the configuration of one of the services, you should first select the service which you'd like to change.

    • When you have the service that you want to change selected, you can change any of the available parameters such as the Instance Column, Interval, etc. Once you are satisfied with your changes, you may press the Update button.

    • Ensure to re-Prepare your Report in case you have made any changes.

    Creating a Service

    Although we recommend automatically creating services from the Transformer ETL process, it is also possible to manually create a service in the GUI. To create a new service, navigate to the Services > Overview menu and click the Edit button at the top of the service list. The system will warn you that any changes made to the existing service, may require you to re-Prepare the currently selected Report , found at Data pipelines > Reports.

    After confirmation of this warning message, the Create button is enabled:

    Now it is possible to create a new service. Ensure to fill in all fields, since all are mandatory:

    When your new service configuration satisfies your need, you may click the Create button. Ensure to add a rate revision afterward.

    Apply Metadata to Services

    Once a Metadata Definition has been applied to a Dataset, it is possible to apply tagging or other metadata keys/values to a service. This can be achieved by selecting a service and then selecting the Metadata tab:

    Configuring Metadata values for a Service

    In this tab it is possible to configure all metadata fields which are available in the parent Dataset. To save your changes, click the Update button.

    Creating a Subscription

    To create a subscription, first, browse to the Accounts > Subscriptions menu. In the Subscriptions menu, it is now first required to select a 'leaf account' (meaning: an account at the deepest level of a report definition). Once this account has been selected, a new Subscription can be created:

    Creating a new Subscription for a leaf account

    When creating a new subscription, it is mandatory to first give it a name and select the Service Category and Service to which this Subscription applies:

    Selecting the Service

    Once a Service has been selected, depending on the type of Service, you are free to fill in a custom Subscription Rate and an optional Cost. If this service already has rates configured in Exivity, it will automatically show and use those:

    Providing custom rates for a Subscription

    Next, the Interval section needs to be filled in. First provide the Subscription Type, which can be either a Recurring subscription or a One-off transaction. In the case of a Recurring subscription, it is required to provide an Interval of Daily, Monthly or Yearly. This determines how often the Subscription is charged. Then the Subscription Start date needs to be set, which determines the initial charge date. Optionally, an End date can be configured; if none is provided, the Subscription will be charged until it is removed or later changed to include an End date.

    When subscribing to a Service that has Manual rates configured, the Subscription inherits the rate from the Global rate revision of the subscribed Service. It is therefore required to have the start date of the Subscription set to an equal or more recent date than the oldest configured Global Rate Revision of the subscribed Service.

    When creating a Monthly or Yearly Subscription, it is also required to select a Charge day. This determines the day of the month when the Subscription is being charged. In case of a Yearly subscription, it will also be required to specify a Charge month, to indicate which month of the year the Subscription will be charged:

    Depending on the Interval, a Charge month and/or day are required

    The last step in creating the subscription is specifying a consumed Quantity, which should reflect the number of units being charged per interval. Optionally an Instance Identifier may be specified. If none is provided, it will fall back to the name of the Subscription.

    Click the Create button to create this subscription. In case you are creating a Subscription with a historical Start date, it will then be required to use the Create - with preparing button:

    In case you are creating multiple Subscriptions, it is advisable to stack these creations to avoid having to execute Create - With preparing multiple times in a row.

    To create a new USE Extractor, follow these steps:
    1. From the menu on the left, select Data pipelines > Extractors

    2. To create a new USE Extractor click the Create extractor button

    3. When your Exivity instance has access to the Internet, it will pull in the latest set of Extraction Templates from our Github account. These templates are then presented to you, and you can pick one from the list to start Extracting. If you don't have access to the internet, you can download them directly from Github. You are also free to start creating your own Extractor from scratch.

    4. Provide a meaningful name for your USE Extractor. In the above example, we're creating a USE Extractor for VMware vCenter 6.5 and higher. Therefore we call this USE Extractor: 'vCenter 6.5'

    5. When you're done creating your USE Extractor, click the Create button at the bottom of the screen

    Edit and Delete Extractors

    When you want to change or delete an existing USE Extractor, first select one from the list of USE Extractors that you want to change:

    1. After you select your USE Extractor, you can change its variable values at the Variables tab.

    2. At the Editor tab you can make more advanced changes or delete the original USE script, such as:

      • changing existing API calls

      • changing the csv output format

    Don't forget to save any changes with the "SAVE" button.

    Run and Schedule Extractors

    To test your USE Extractor, you can execute or schedule it directly from the Glass interface:

    Run USE Extractors
    1. After you have selected the USE Extractor that you would like to run, click on the Run tab next to the Editor tab

    2. Most Extractors require one or more parameters, usually in a date format such as 20171231. In this example, the USE Extractor requires two parameters: a from and to date

    3. When you've provided the required run parameters, click Run Now to execute the USE Extractor. After the USE Extractor has completed running, you will receive a success or failure message, after which you might need to make additional changes to your USE Extractor

    4. Once you're happy with your output, you can schedule the USE Extractor via the Schedule tab, which is located next to the Run tab at the top of the screen.

    5. USE Extractors can be scheduled to run once a day at a specific time. Also, you should provide a from and (optionally) to date, which are provided by using an offset value. For example, if you want to use the day before yesterday as a from date, you should use the down-pointing arrows on the right, to select a value of -2. If the to date should always correspond with yesterday's date, you should provide a value there of -1.

    6. If your USE Extractor requires additional parameters, you may provide these as well in the Schedule with these arguments text field.

    7. When you're done with the schedule configuration, you may click the Schedule button. In case you want to change or remove this schedule afterwards, click the Unschedule button.

    As of version 1.6, it is recommended to use the Workflow function instead of the Extractor schedule


    Amazon AWS CUR

    Prerequisites

    This tutorial assumes that you have CUR (Cost and Usage Report) set up in your AWS environment. If this is not the case please follow the steps in Turning on the AWS Cost and Usage Report before proceeding.

    Please note that we also need the following requisites:

    • S3 bucket where the CUR reports are placed

    • The AWS region of the S3 bucket

    • The path where the reports are placed inside the S3 bucket

    To get these requirements you can go to the Amazon S3 service - Buckets and note the name and the region of your CUR S3 bucket.

    Once inside the CUR bucket, you must find the path where the monthly reports reside. Navigate the bucket until you find a folder like the one below and note the path:

    Obtaining the API Credentials

    We have to create an IAM User that has read access to the CUR bucket. Please follow these steps:

    • Go to IAM - Policies and click on Create policy.

    • In the next screen: for Service select S3, for Actions select Read, for Resources select Specific, and under bucket click on Add ARN.

    • In the next screen, type the name of your bucket followed by /* in Bucket name (example: billing-bucket/*) and click on Add.

    • Click on Review Policy.

    • Give a Name and a Description to the policy and finally click on Create policy.

    Now, we will attach the policy to a user:

    • Go to IAM – Users – Add user.

    • Add a User name and select Access type: Programmatic access.

    • Click on Attach existing Policies directly and filter by the name of the policy you just created.

    • Select the policy and click on Next:Tags.

    • Add a Key-Value pair for the Tags and click on Next:Review.

    Configure Extractor

    To create the Extractor in Exivity, browse to Data Sources > Extractors and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For AWS, please click AWS_CUR_Extractor from the list. Provide a name for the Extractor in the name field, and click the Create button.

    Once you have created the Extractor, go to the first tab: Variables

    • Bucket: Type the name of your S3 CUR bucket.

    • AWS region: The region code of your S3 CUR bucket (the code can be found in the Region column).

    • Access key: You can find it in the csv that you have downloaded in the previous section.

    Once you have filled in all details, go to the Run tab to execute the Extractor for a single day:

    The Extractor requires one parameter in yyyyMMdd format:

    • from_date is the date for which you wish to collect consumption data.

    When you click the Run Now button, you should get a successful result.

    Configure Transformer

    Once you have successfully run your AWS CUR Extractor, you should be able to create a Transformer template via Data Sources > Transformers and click the Create Transformer button. Select the AWS CUR Transformer and run it for a single day as a test. Make sure that it is the same day as for which you extracted consumption data in the previous step.

    Create Report

    Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu Reports > Definitions:

    Select the column(s) by which you would like to break down the costs. Once you have created the report, click the Prepare Report button, after first making sure you have selected a valid date range in the date selector shown when preparing the report.

    Once this is done you should be able to run any of the Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Google Cloud

    This article describes how to set up Exivity to report on Google Cloud consumption.

    Prerequisites

    This tutorial assumes that you have a Billing Account for your Google Cloud Projects in place. In the event this is not the case please follow the steps in this tutorial before proceeding.

    Setting up BigQuery

    Exivity will leverage the GCP BigQuery service. This enables the export of detailed Google Cloud billing data to a BigQuery dataset, giving Exivity the capability to query the billing data in a more granular manner than with the standard file export (which is going to be deprecated by Google).

    Before proceeding further, make sure your user has the Billing Account Administrator and the BigQuery User roles associated

    First and foremost, you will create a dataset containing the billing table which Exivity will periodically query. Please follow these instructions:

    1. Go to the BigQuery service page.

    2. Select the project that will contain your dataset in the project drop down.

    3. Click CREATE DATASET.

    Once you have the dataset created in BigQuery you need to enable Cloud Billing export to BigQuery:

    1. Sign in to the Cloud Billing Console, then select your Organization and main Billing Account.

    2. Go to the Billing Export tab.

    3. Click Edit Settings to enable the export.

    Creating Exivity Service Account

    Exivity requires a GCP Service Account with the BigQuery User role associated to retrieve the billing data. Please follow the Google Cloud documentation to create a Service Account, to associate a Private Key with the service account, and finally to associate the role with the service account.

    Make sure to write down the Mail address/service account and its associated Private Key, as these parameters will be required for the Exivity Data Extractor

    Configuring the Extractor

    To create the Extractor in Exivity, browse to Data Sources > Extractors and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For Google Cloud, please click Google_Cloud from the list. Provide a name for the Extractor in the name field, and click the Create button.

    Once you have created the Extractor, go to the Variables tab and fill in the parameters:

    • Hostname: Default endpoint for Google BigQuery with version

    • Private: Provide the RSA private key in PEM format

    IMPORTANT: make sure to replace \n with ${NEWLINE} in the RSA Private key using a text editor, before pasting the key in the Exivity Extractor field

    • Email: Mail address/service account associated with the private key, obtained in the Creating Exivity Service Account section.

    • Project: Main GCP Project name.

    • BigQuery project: GCP BigQuery Billing Project, obtained in the Setting up BigQuery section.

    Finally, click on Update.

    Once you have filled in all details, go to the Run tab to execute the Extractor for a single day:

    The Extractor requires two parameters in yyyyMMdd format:

    • from_date is the date for which you wish to start collecting consumption data.

    • to_date is the end date for which you wish to collect consumption data.

    When you click the Run Now button, you should get a successful result.

    Configure a Transformer

    Once you have successfully run your Google Cloud Extractor, you should be able to create a Transformer template via Data Sources > Transformers and click the Create Transformer button.

    Select the Google Cloud Transformer that you just created, go to the Run Tab and run it for a single day as a test. Make sure that it is the same day as for which you extracted consumption data in the previous step.

    When you click the Run Now button, you should get a successful result.

    Create your Report

    Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu Reports > Definitions:

    Select the column(s) by which you would like to break down the costs (you can start with only project_name as a test). Once you have created the report, click the Prepare Report button, after first making sure you have selected a valid date range in the date selector shown when preparing the report.

    Once this is done you should be able to run any of the Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    buffer

    The buffer command is used to create and/or populate one of these named buffers with data.

    Syntax

    buffer buffername = protocol protocol_parameter(s)

    Details

    The first argument to the buffer statement is the name of the buffer to create. If a buffer with this name already exists then any data it contains will be overwritten.

    There must be a whitespace on both sides of the 'equals' symbol following the buffer name.

    The following protocols are supported:

    file

    buffer buffername = file filename

    The file protocol imports a file directly into a buffer. This can be very useful when developing USE scripts, as the USE script logic for processing a JSON file (for example) can be implemented without requiring access to a server.

    If the specified buffer name already exists, then a warning will be logged and any data in it will be cleared before importing the file.
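
    As a minimal sketch (the file path and save target are hypothetical):

    # Import a local JSON file into a buffer named 'mydata'
    buffer mydata = file "samples/json/mydata.json"
    
    # The buffer can now be parsed with parslets or saved to disk
    save {mydata} as "exported/mydata.json"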

    data

    buffer buffername = data string

    The data protocol populates the buffer with the literal text specified in string. This is useful when extracting embedded JSON. For example, the JSON snippet below contains embedded JSON in the instanceData field:

    In this case, the instanceData field can be extracted using a parslet, placed into a new buffer and re-parsed to extract the values within it. Assuming the snippet is in a file called my_data.json this would be done as follows:

    http

    buffer buffername = http method url

    For full details on the HTTP protocol and its parameters please refer to the http article.

    Once the HTTP request has been executed, any data it returned will be contained in the named buffer, even if the data is binary in format (eg: images, audio files or anything else non-human readable).

    If the HTTP request returned no data, one of the following will apply:

    • If the buffer does not already exist then the buffer will not be created

    • If the buffer already exists then it will be deleted altogether

    For details of how to access the data in a named buffer, please refer to the USE script basics article.

    odbc

    buffer buffername = odbc dsn [username password] query

    username and password are optional, but either both or neither must be specified

    where:

    • dsn is the ODBC Data Source Name (this should be configured at the OS level)

    • username and password are the credentials required by the DSN

    • query is an SQL query

    Once the query has been executed, the resulting data is located in the named buffer. It can subsequently be saved as a CSV file to disk using:

    save {buffername} as filename.csv

    The resulting CSV uses a comma (,) as the separator and double quotes (") as the quoting character. Any fields in the data which contain a comma will be quoted.

    odbc_direct

    buffer buffername = odbc_direct query

    where query is an SQL query.

    Executes the SQL query against the ODBC data source described by the odbc_connect parameter.

    Once the query has been executed, the resulting data is located in the named buffer. It can subsequently be saved as a CSV file to disk using:

    save {buffername} as filename.csv

    The resulting CSV uses a comma (,) as the separator and double quotes (") as the quoting character. Any fields in the data which contain a comma will be quoted.
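
    As a minimal sketch of the above (assuming the connection string is supplied via set odbc_connect, and using hypothetical connection details):

    # Describe the ODBC data source with a connection string (hypothetical values)
    set odbc_connect "DSN=ExivityDB;Uid=admin;Pwd=secret"
    
    # Execute the query and capture the result in a named buffer
    buffer usage_direct = odbc_direct "select * from usage"
    
    # Save the result as CSV and release the buffer
    save {usage_direct} as "usage_direct.csv"
    discard {usage_direct}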

    Examples

    The following examples retrieve data from ODBC and HTTP sources:

    foreach

    The foreach statement defines a block of zero or more statements and associates this block with multiple values. The block of statements is executed repeatedly, once for each value.

    Syntax

    foreach parslet as loop_label {

    }

    The opening { may be placed on a line of its own if preferred

    Details

    The foreach statement is used to iterate over the values in an array or object (identified by a parslet) within the data in a named buffer.

    The loop will execute for as many elements as there are in the array, or for as many members there are in the object. For the purposes of this documentation, the term child will be used to refer to a single array element or object member.

    If the array or object is empty, then the body of the loop will be skipped and execution will continue at the statement following the closing }.

    The loop_label can be any string, but must not be the same as any other loop_label values in the same scope (ie: when nesting foreach loops, each loop must have a unique label). This label is used to uniquely identify any given loop level when loops are nested.

    The foreach statement will execute the statements in the body of the loop once for every child. foreach loops can be nested, and at each iteration the loop_label can be used to extract values from an array or object in the current child using a dynamic parslet. See the examples at the end of this article for a sample implementation showing this in action.

    As the foreach loop iterates over the children, a number of variables are automatically created or updated as follows:

    Examples

    Basic looping

    Consider the following JSON in a file called samples/json/array.json:

    To generate a list of IDs and names from the items array, the following would be used:

    Nested looping

    To extract values from an array using nested loops:

    Given the source JSON in a file called example.json, the following USE script:

    will produce the following output:

    get_last_day_of

    The get_last_day_of statement sets a variable to contain the number of days in the specified month.

    Syntax

    get_last_day_of yyyyMM as varName

    Details

    The get_last_day_of statement will set the value of the variable called varName to contain the number of days in the month specified by yyyyMM where yyyy is a four-digit year and MM is a 2-digit month.

    The statement will take leap years into account.

    Example
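
    A minimal sketch of its use:

    # February 2024 is in a leap year, so this sets last_day to 29
    get_last_day_of 202402 as last_day
    print February 2024 has ${last_day} days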

    subroutine

    The subroutine keyword is used to define a named subroutine.

    Syntax

    Details

    Overview

    A subroutine is a named section of code that can be executed multiple times on demand from anywhere in the script. When called (via the gosub statement), the execution of the script jumps to the start of the specified subroutine. When the end of the code in the subroutine body is reached or a return statement is encountered (whichever comes first), execution resumes at the statement following the most recent gosub statement that was executed.

    The code in the body of a subroutine statement is never executed unless the subroutine is explicitly called using gosub. If a subroutine is encountered during normal linear execution of the script then the code in it will be ignored.

    Subroutines in USE do not return any values, but any variables that are set within the subroutine can be accessed from anywhere in the script and as such, they should be used for returning values as needed.

    Subroutine Arguments

    When invoked via the gosub statement, arguments can be passed to the subroutine. These arguments are read-only but may be copied to normal variables if required.

    Arguments are accessed using the same syntax as is used for variables as follows:

    ${SUBARG.COUNT} contains the number of arguments that were passed to the subroutine

    ${SUBARG_N} is the value of any given argument, where N is the number of the argument starting at 1

    Every time a subroutine is called, any number of arguments may be passed to it. These arguments are local to the subroutine and will be destroyed when the subroutine returns. However, copying an argument to a standard variable will preserve the original value as follows:

    After the subroutine above has been executed the return_value variable will retain the value it was set to.

    It is not permitted to nest subroutine statements. If used within the body of a subroutine statement, a subroutine statement will cause the script to terminate with an error.

    Example

    The following demonstrates using a subroutine to detect when another subroutine has been provided with an incorrect number of arguments:

    http

    The http statement initiates an HTTP session using any settings previously configured using the set statement. It can also be used for querying response headers.

    Syntax

    http method url

    http dump_headers

    http get_header headerName as varName

    Details

    Executing an HTTP request

    The http statement performs an HTTP request against the server and resource specified in the url parameter. Any http-related settings previously configured using set will be applied to the request.

    The method argument determines the HTTP method to use for the request and must be one of GET, PUT, POST or DELETE.

    The url argument must start with either http: or https:. If https: is used then SSL will be used for the request.

    The url argument must also contain a valid IP address or hostname. Optionally, it may also contain a port number (preceded by a colon and appended to the IP address or hostname) and a resource.

    The following defaults apply if no port or resource is specified:

    The format of the http statement is identical when used in conjunction with the buffer statement.

    Querying response headers

    To dump a list of all the response headers returned by the server in the most recent session use the following statement:

    http dump_headers

    This will render a list of the headers to standard output, and is useful when implementing and debugging USE scripts. This statement intends to provide a tool to assist in script development, and as such, it would normally be removed or suppressed with a debug mode switch in production environments.

    To retrieve the value of a specific header, use the following statement:

    http get_header headerName as varName

    This will set the variable varName to be the value of the header headerName.

    If headerName was not found in the response, then a warning will be written to the log file. In this case varName will not be created, but if it already exists then its original value will remain unmodified.

    Examples

    Example 1

    Example 2

    The following shows the process of retrieving a header. The output of:

    Takes the following form:

    Known issues

    This page lists known issues with the latest version of Exivity. All issues that are resolved and released are mentioned under the Releases section.

    Stacking of Quantity and Charge Adjustments Policies can cause negative charges

    Affected versions

    Script basics

    USE scripts are stored in <basedir>/system/config/use, and are ASCII files that can be created with any editor. Both UNIX and Windows end-of-line formats are supported but in certain circumstances, they may be automatically converted to UNIX end-of-line format.

    Statements

    Each statement in a USE script must be contained on a single line. Statements consist of a keyword followed by zero or more parameters separated by whitespace. The USE Reference Guide contains documentation for each statement.

    Exivity_{version}_setup.exe /S /EXIVITY_PROGRAM_PATH=C:\Exivity\Program /EXIVITY_HOME_PATH=D:\Exivity\home /PGDATA=E:\PostgreSQL\PGDATA 
    Exivity_{version}_setup.exe /S /EXIVITY_PROGRAM_PATH=C:\Exivity\Program /EXIVITY_HOME_PATH=D:\Exivity\home /PGUSER=exivityadmin /PGPASSWORD=S3cret!123 /PGHOST=db.exivity.local /PGPORT=5432 /PGDB=exivitydb /PSQL_INSTALLED=0 /MQ_INSTALLED=0 /MQHOST=mq.exivity.local /MQUSER=mqexivity /MQPASSWORD=mqpass /MQVHOST=/exvt /MQSSL=1 
    subroutine subroutine_name {
       # Statements
    }

    • loop_label.COUNT: The number of times the loop has executed. If the object or array is empty then this variable will have a value of 0.

    • loop_label.NAME: The name of the current child

    • loop_label.VALUE: The value of the current child

    • loop_label.TYPE: The type of the current child


    • port: 80 if using http, 443 if using https

    • resource: /

    Quotes and escapes

    By default, a space, tab or newline will mark the end of a word in a USE script. To include whitespace in a word (for example to create a variable with a space in it) then double quotes - " - or an escape - \ - must be used to prevent the parser from interpreting the space as an end of word marker. Unless within double quotes, to specify a literal tab or space character it must be escaped by preceding it with a backslash character - \.

    Examples:

    The following table summarises the behaviour:

    • " ... " : Anything inside the quotes, except for a newline, is treated as literal text

    • \" : Whether within quotes or not, this is expanded to a double quote - " - character

    • \t : When used outside quotes, this is expanded to a TAB character

    • \ : When used outside quotes, a space following the \ is treated as a literal character

    • \\ : When used outside quotes, this is expanded to a backslash - \ - character

    Comments

    Comments in a USE script start with a # character that is either:

    • the first character of a line

    • the first character in a word

    Comments always end at the end of the line they were started on

    Currently, comments should not be used on the same line as the encrypt statement as it will consider the comment as part of the value to encrypt

    Variables

    Overview

    USE scripts often make use of variables. Variables have a name and a value. When a variable name is encountered on any given line during the execution of the script, the name is replaced with the value before the line is executed.

    To reference a variable, the name should be preceded with ${ and followed by }. For example, to access the value of a variable called username, it should be written as ${username}.

    The length (in characters) of a variable can be determined by appending LENGTH to the variable name when referencing it. Thus if a variable called result has a value of success then ${result.LENGTH} will be replaced with 7.
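
    For instance, a minimal sketch:

    # The LENGTH suffix returns the length of a variable's value
    var result = success
    print The value of result is ${result} and it is ${result.LENGTH} characters long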

    Creation

    Variables may be explicitly declared using the var statement, or may be automatically created as a consequence of actions performed in the script. Additionally, a number of variables are automatically created before a script is executed.

    For a list of variables created automatically please consult the article on the var statement

    Encryption

    It may be desirable to conceal the value of some variables (such as passwords) rather than have them represented as plain text in a USE script. This can be accomplished via the encrypt statement.
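
    A minimal sketch of the syntax (with a placeholder value):

    # Conceal a password value rather than storing it as plain text
    encrypt var password = something_secret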

    Publishing to the user interface

    Variables may be exposed in the GUI by prefixing their declaration with the word public as follows:

    Any variable so marked may be edited using a form in the GUI before the script is executed. If a public variable is followed by a comment on the same line, then the GUI will display that comment for reference. If there is no comment on the same line, then the line before the variable declaration is checked, and if it starts with a comment then this is used. Both variants are shown in the example below:

    If a variable declaration has both kinds of comment associated with it then the comment on the same line as the variable declaration will be used

    Named buffers

    A named buffer (also termed a response buffer) contains data retrieved from an external source, such as an HTTP or ODBC request. Buffers are created with the buffer statement.

    Once created, a buffer can be referenced by enclosing its name in { and } as follows:

    • Buffer names may be up to 31 characters in length

    • Up to 128 buffers may exist simultaneously

    • Up to 2Gb of data can be stored in any given buffer (memory permitting)

    Extracting data with Parslets

    Parslets are used to extract data from the contents of a named buffer.

    Please refer to the full article on parslets for more information on parslets and their use.

    "properties": {
    "subscriptionId":"sub1.1",
    "usageStartTime": "2015-03-03T00:00:00+00:00",
    "usageEndTime": "2015-03-04T00:00:00+00:00",
    "instanceData":"{\"Microsoft.Resources\":{\"resourceUri\":\"resourceUri1\",\"location\":\"Alaska\",\"tags\":null,\"additionalInfo\":null}}",
    "quantity":2.4000000000,
    "meterId":"meterID1"
    
    }
    buffer properties = file my_data.json
    var instanceData = $JSON{my_data}.[properties].[instanceData]
    
    buffer embedded = data ${instanceData}
    print The embedded resourceUri is $JSON{embedded}.[Microsoft.Resources].[resourceUri]
    # Typical usage in USE script to retrieve all data from the usage table
    
    buffer odbc_csv = odbc ExivityDB admin secret "select * from usage"
    save {odbc_csv} as "odbc.csv"
    discard {odbc_csv}
    
    # Retrieve the service summary from a local CloudCruiser 4 server and place it in a buffer
    set http_username admin
    set http_password admin
    set http_authtype basic
    
    buffer services = http GET "http://localhost:8080/rest/v2/serviceCatalog/summaries"
    # The 'services' buffer now contains the HTTP response data
    # Zero or more USE script statements go here
    {
        "totalCount" : 2,
        "items" : [
            {
                "name" : "Item number one",
                "id" : "12345678"
            },
            {
                "name" : "Item number two",
                "id" : "ABCDEFGH"
            }
        ]
    }
    buffer data = file "samples/json/array.json"
    
    foreach $JSON{data}.[items] as this_item
    {
        print Customer ${this_item.COUNT}: $JSON(this_item).[id] $JSON(this_item).[name]
    }
    
    discard {data}
    var JSON_dir = "examples\json"
    buffer example = FILE "${JSON_dir}\doc.json"
    
    var title = $JSON{example}.[title]
    
    # For every element in the 'items' array ...
    foreach $JSON{example}.[items] as this_item
    {
        var item_name = $JSON(this_item).[name]
    
        # For every child of the 'subvalues' object ...
        foreach $JSON(this_item).[subvalues] as this_subvalue
        {
            var sub_name = ${this_subvalue.NAME}
            var sub_value = ${this_subvalue.VALUE}
    
            # Render an output line
            print ${title} -> Item: ${item_name} -> Subvalue:${sub_name} = ${sub_value} 
        }
    }
    discard {example}
    Example JSON data -> Item: Item number one -> Subvalue:0 = 1
    Example JSON data -> Item: Item number one -> Subvalue:10 = 42
    Example JSON data -> Item: Item number one -> Subvalue:100 = 73
    Example JSON data -> Item: Item number one -> Subvalue:1000 = 100
    Example JSON data -> Item: Item number two -> Subvalue:0 = 10
    Example JSON data -> Item: Item number two -> Subvalue:10 = 442
    Example JSON data -> Item: Item number two -> Subvalue:100 = 783
    Example JSON data -> Item: Item number two -> Subvalue:1000 = 1009
    #
    # Check a specific date to see if it is the last day of a month
    #
    var somedate = 20180228
    gosub detect_end_of_month(${somedate})
    
    if (${is_last_day} == TRUE) {
        print ${somedate} is the last day of a month
    } else {
        print ${somedate} is not the last day of a month
    }
    
    #
    # Check todays date to see if it is the last day of the month
    #
    gosub detect_end_of_month()
    if (${is_last_day} == TRUE) {
        print Today is the last day of the month
    } else {
        print Today is not the last day of the month
    }
    
    # This subroutine determines whether a date is the last
    # day of a month or not
    #
    # If no argument is provided it defaults to the current system
    # time, else it uses the supplied yyyyMMdd format argument
    #
    # It sets a variable called 'is_last_day' to TRUE or FALSE
    
    subroutine detect_end_of_month {
    
        if (${SUBARG.COUNT} == 0) {
            get_last_day_of ${YEAR}${MONTH} as last_day
    
            if (${last_day} == ${DAY}) {
                var is_last_day = TRUE
            } else {
                var is_last_day = FALSE
            }
            return
        }
    
        # Verify argument format
        match date "^([0-9]{8})$" ${SUBARG_1}
        if (${date.STATUS} != MATCH) {
            print Error: the provided argument is not in yyyyMMdd format
            terminate with error
        }
    
        # Get the day portion of the argument    
        match day "^[0-9]{6}([0-9]{2})$" ${SUBARG_1}
        var day_to_check = ${day.RESULT}
    
        # Get the yyyyMM portion of the argument
        match yyyyMM "^([0-9]{6})" ${SUBARG_1}
        var month = ${yyyyMM.RESULT}
    
        get_last_day_of ${month} as last_day
    
        if (${last_day} == ${day_to_check}) {
            var is_last_day = TRUE
        } else {
            var is_last_day = FALSE
        }
    }
    subroutine example {
        if (${SUBARG.COUNT} == 0) {
            var return_value = "NULL"
        } else {
            var return_value = ${SUBARG_1}
        }
    }
    if (${ARGC} == 0) {
        print This script requires a yyyyMMdd parameter
        terminate with error
    } 
    
    # Ensure the parameter is an 8 digit number
    gosub check_date(${ARG_1})
    #
    # (script to make use of the argument goes here)
    #
    terminate
    
    # ----
    #     This subroutine checks that its argument
    #     is an 8 digit decimal number
    # ----
    subroutine check_date {
        # Ensure this subroutine was called with one argument
        gosub check_subargs("check_date", ${SUBARG.COUNT}, 1)
    
        # Validate the format
        match date "^([0-9]{8})$" ${SUBARG_1}
        if (${date.STATUS} != MATCH) {
            print Error: the provided argument is not in yyyyMMdd format
            terminate with error
        }
    }
    
    # ----
    #     This subroutine generates an error message for
    #     other subroutines if they do not have the correct
    #     number of arguments
    #
    #     It is provided as a useful method for detecting internal
    #     script errors whereby a subroutine is called with the
    #     wrong number of arguments
    #
    #     Parameters:
    #        1: The name of the calling subroutine
    #        2: The number of arguments provided
    #        3: The minimum number of arguments permitted
    #        4: OPTIONAL: The maximum number of arguments permitted
    # ----
    subroutine check_subargs {
        # A check specific to this subroutine as it can't sanely call itself
        if ( (${SUBARG.COUNT} < 3) || (${SUBARG.COUNT} > 4) ) {
            print Error: check_subargs() requires 3 or 4 arguments but got ${SUBARG.COUNT}
            terminate with error
        }
    
        # A generic check
        var SCS_arg_count = ${SUBARG_2}
        var SCS_min_args = ${SUBARG_3}
        if (${SUBARG.COUNT} == 3) {
           var SCS_max_args = ${SUBARG_3}
        } else {
           var SCS_max_args = ${SUBARG_4}
        }
    
        if ( (${SCS_arg_count} < ${SCS_min_args}) || (${SCS_arg_count} > ${SCS_max_args}) ) {
            if (${SCS_min_args} == ${SCS_max_args}) {
                print Error in script: the ${SUBARG_1}() subroutine requires ${SCS_min_args} arguments but was given ${SCS_arg_count}
            } else {
                print Error in script: the ${SUBARG_1}() subroutine requires from ${SCS_min_args} to ${SCS_max_args} arguments but was given ${SCS_arg_count}
            }
            terminate with error
        }
    }
    # A simple request using the default port and no SSL
    set http_savefile "/extracted/http/customers.json"
    http GET "http://localhost/v1/customers"
    
    # A more complex request requiring setup and a custom port
    clear http_headers
    set http_header "Accept: application/json"
    set http_header "Authorization: FFDC-4567-AE53-1234"    
    set http_savefile "extracted/http/customers.json"
    buffer customers = http GET "https://demo.server.com:4444/v1/customers"
    buffer temp = http GET https://www.google.com
    http dump_headers
    http get_header Date as responseDate
    print The Date header from google.com was: ${responseDate}
    Last response headers:
    HTTP/1.1 200 OK
    Cache-Control: private, max-age=0
    Date: Mon, 26 Mar 2018 13:50:39 GMT
    Transfer-Encoding: chunked
    Content-Type: text/html; charset=ISO-8859-1
    Expires: -1
    Accept-Ranges: none
    P3P: CP="This is not a P3P policy! See g.co/p3phelp for more info."
    Server: gws
    Set-Cookie: 1P_JAR=2018-03-26-13; expires=Wed, 25-Apr-2018 13:50:39 GMT; path=/; domain=.google.co.uk
    Set-Cookie: [redacted]; expires=Tue, 25-Sep-2018 13:50:39 GMT; path=/; domain=.google.co.uk; HttpOnly
    Vary: Accept-Encoding
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    Alt-Svc: hq=":443"; ma=2592000; quic=51303432; quic=51303431; quic=51303339; quic=51303335,quic=":443"; ma=2592000; v="42,41,39,35"
    
    The Date header from google.com was: Mon, 26 Mar 2018 13:50:39 GMT
    "This quoted string is treated as a single word"  
    var myname = "Eddy Deegan"  
    This\ is\ treated\ as\ a\ single\ word
    "The character \" is used for quoting"
    # This is a comment
    set http_header "Content-Type: application/x-www-form-urlencoded"     # This is a comment
    var usage#1 = Usage1   # The '#' in 'usage#1' does not start a comment
    public var username = username
    public encrypt var password = something_secret
    public var username = login_user  # Set this to your username
    # Set this to your password
    public var password = "<please fill this in>"
    # Example of buffer creation
    buffer token = http POST "https://login.windows.net/acme/oauth2/token"
    
    # Examples of referencing a buffer
    save {token} as "extracted\token.data"
    discard {token}
    Click on Create User.
  • On the next screen click on Download .csv. We will use these credentials to populate our Exivity Extractor in the next section.

  • Finally, click on Close.

  • Secret key: You can find it in the csv that you have downloaded in the previous section.
  • Cur report: Type the name of your CUR report.

  • Bucket Directory: Type the path that you got in the previous section (if there is more than one folder separate them with /).

  • Regions, Availability Zones, and Local Zones Documentation
    In this case, cur is the path of the report (exivity_cur_report is the name of the report itself)
    Enter a Dataset ID
  • Select Data location (region)

  • Set Default table expiration to Never

  • Set Encryption to Google-managed key

  • Finally, click Create dataset

  • Note the Project, BigQueryProject and Table; these parameters will be used by Exivity at a later stage.

  • Select the project where you previously created the BigQuery dataset.

  • Select the specific dataset from the Billing export dataset list.

  • Click Save.

  • BigQuery table: GCP BigQuery Billing Table, obtained in the Setting up BigQuery section.
    Example of GCP Extractor Variables
    All versions of Exivity

    Description

    When creating two Absolute Discount Adjustment Policies, where both Policies apply to the same Account, Service and date Period, there is a chance of generating negative charges for the services to which both Adjustment Policies apply, as can be seen in the following screenshot:

    Workaround

    The current workaround is to avoid stacking multiple Absolute Adjustments for each unique Account + Service combination. Instead, try to combine them within a single Adjustment Policy

    Status

    This issue is currently being resolved (internal reference: EXVT-<TBD>) and will be included with the release after version 3.5.7.

    Installing or Upgrading to version 3.5.6 populates invalid RabbitMQ settings

    Affected versions

    Exivity versions 3.5.0 through 3.5.6

    Description

    When installing or upgrading to versions 3.5.0 - 3.5.6 with RabbitMQ installed on the local system, the initial configuration values are incorrectly populated:

    Workaround

    To work around this issue, you may uncheck the checkbox Install Local RabbitMQ Engine and then check the same checkbox again. The values will now be correctly populated.

    In case you have already executed the upgrade process and your system is broken, you will need to change the config.json to align the mq settings:

    Status

    This issue is currently being resolved (internal reference: EXVT-5042) and will be included with the 3.5.7 release.

    Discount Adjustments are created as Premium

    Affected versions

    Exivity versions 3.5.0 through 3.5.5

    Description

    When a new Discount Adjustment policy is created, it will be created as a Premium policy instead. To confirm this behavior, refresh the screen: the policy will show up as Premium:

    Workaround

    In order to create a discount policy in version 3.5.5, use the API interface to create one.

    Status

    This issue is resolved with the 3.5.6 release.

    Installer upgrade generates an error: Error opening file for writing

    Affected versions

    Exivity versions 3.5.0 and higher

    Description

    When running the installer, an error could pop up when the installer is unable to update certain files such as the erlsrv.exe executable:

    Resolution

    This issue occurs when Windows or a 3rd party application locks this file. This can happen with certain monitoring and anti-virus software. In the example given above, the solution would be to temporarily stop the Windows Event Log service, as shown in the screenshot below:

    After stopping the service, click the Retry button in the installer.

    Relative quantity discounts can cause negative charges with prorated monthly services

    Affected versions

    All versions of Exivity

    Description

    When configuring a prorated monthly service, and then applying a relative quantity discount (adjustment), in certain cases the total charge can be less than 0 resulting in a credit. Although this might be an unusual configuration and might not always happen, it is important to be aware of this behavior.

    Status

    This issue is pending. (internal reference: EXVT-1337)

    Exivity Backend Service does not start / access to merlin.exe has been denied

    Affected versions

    Exivity version 3.5.0 and higher

    Description

    Some Anti-Virus software vendors (specifically: McAfee and APEX) incorrectly flag the Exivity binary merlin.exe as a backdoor/trojan horse. As a result, the Exivity Backend Service might not be able to start as shown in the example below:

    Backend Service might not be able to start

    The Windows event viewer may display the following error (depending on your A/V vendor):

    A/V software may deny access to merlin.exe

    Resolution

    Response by certain A/V vendors:

    [merlin.exe] is detected by [a couple of] AV's as a BackDoor as its functionality is very similar and malware authors can use this to their own advantage to compromise systems and gather sensitive system information.

    We also understand that [merlin.exe] can be used for legitimate purposes and therefore we will change the classification of the sample submitted to PUP. This way you will be able to exclude detection through our scanner configuration, but at the same time we are still protecting our customers who may not be aware of this software running on their systems.

    Hence if you want to allow this application in your environment kindly exclude it by detection name.

    Multi-node

    Learn how to deploy Exivity on a multi-node architecture

    Exivity can be deployed on a single node, or on multiple nodes for HA and load balancing. This guide walks you through the steps to install some of the Exivity components on different nodes.

    Multi-Node System Architecture

    The following diagram outlines the various components that can be deployed on separate nodes:

    For larger environments, it is recommended to deploy each component on separate virtual machine nodes. As displayed in the diagram above, a typical multi-node deployment will consist of the following elements:

    • Load Balancer (optional)

      • An optional but recommended 3rd party load balancer may be used to ensure high availability of the Exivity front end web application.

    • Exivity Web/UI node(s)

    The platform is designed in such a way that adding and removing Exivity nodes should be relatively straightforward, which complements the potential growth or shrinkage of data processing needs.

    Deploying a PostgreSQL node

    Exivity highly recommends deploying your own PostgreSQL database cluster on Linux (or using a managed PostgreSQL service from a cloud vendor). To achieve High Availability, any PostgreSQL compatible cluster manager software may be used. At Exivity we have had good experiences with a PostgreSQL-on-Linux cluster manager backed by Microsoft, in case you prefer to self-manage the Exivity PostgreSQL database.

    When using a PostgreSQL database on a remote host, the database and user must have been created beforehand. To create the database, ask your database administrator to execute a database create statement similar to the one below:

    CREATE DATABASE exdb WITH OWNER = exadmin TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1;

    In addition, make sure to set the following minimal (or higher) PostgreSQL configuration parameters:

    In case you prefer to install PostgreSQL on Windows, Exivity recommends installing the PostgreSQL role together with the API/Scheduler backend components during the installation wizard.

    Deploying a RabbitMQ node

    Exivity highly recommends deploying your own RabbitMQ instance on Linux (or using a managed RabbitMQ service from a cloud vendor).

    Deploying a Backend node

    In order to deploy a backend node, the following steps need to be executed manually, or automatically through the silent installer CLI interface.

    After starting the installer, click Next and provide a valid license key when asked. After clicking Next again, the component screen will be shown:

    Ensure to deselect the Web Service component. The API Service component can also be excluded, although in some cases it is recommended to have the API and Backend Services running on the same system. Please consult with Exivity in case you are not certain what to select.

    Click Next to continue. Then provide a folder for the Exivity program files, and afterward select a folder for the Exivity home files.

    Provide a custom administrator username and password, or leave the default:

    Now specify a remote PostgreSQL database instance, or select to install the PostgreSQL database locally on the API/backend node:

    When using a PostgreSQL database on a remote host, the database and user must have been created beforehand. To create the database, ask your database administrator to execute a database create statement similar to the one below:

    CREATE DATABASE exdb WITH OWNER = exadmin TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1;

    When you are finished configuring your PostgreSQL database settings, click the Next button to configure RabbitMQ. To use a remote RabbitMQ instance, deselect the 'Install Local RabbitMQ Engine' and provide the appropriate hostname, username, password, vhost and TCP port. In case you require TLS/SSL towards your RabbitMQ instance, select that checkbox as well:

    Once the installation is finished, ensure to check 'Start the Exivity Windows Services' to start the Exivity services after clicking Finish.

    In order to achieve High Availability for the Exivity API/backend node, it is advisable to leverage the HA capabilities of your hypervisor (i.e. vSphere HA). In case you want to achieve OS level HA, you may consider implementing a Windows Failover Cluster. Please reach out to Exivity to learn more about this kind of configuration.

    Silently install Backend node

    The below example will silently install an Exivity API/Backend node while using a remote PostgreSQL database instance as well as a remote RabbitMQ instance using SSL:

    Deploying a Web/API node

    In order to deploy a Web/UI only node (with optional API), the following steps need to be executed manually, or automatically through the silent installer CLI interface.

    After starting the installer, click Next and provide a valid license key when asked. After clicking Next again, the component screen will be shown:

    Select the Web Service and optionally also the API Service. In some cases, the API can also be deployed on the front-end node, but if you are not certain, install only the Web Service on the front-end node, and the API Service on the backend node.

    Click Next to continue. Then choose a folder for the Exivity program files, and afterward select a folder for the Exivity home files.

    In the following screen, it will be required to specify the remote host and port of your Exivity API node:

    Ensure that the Exivity API host is active and accepting connections. After clicking the Next button, the installer will issue a connection attempt to the Exivity API host

    Once the installation is finished, ensure to check 'Start the Exivity Windows Services' to start the Exivity services after clicking Finish.

    Silent install Web/UI node

    The below example will silently install an Exivity Web/UI node:

    uri

    The uri statement is used to encode the contents of a variable such that it does not contain any illegal or ambiguous characters when used in an HTTP request.

    Syntax

    uri encode varname

    uri component-encode varname

    uri aws-object-encode varname

    As well as uri component-encode you can use uri encode-component (the two are identical in operation). Similarly, uri aws-object-encode and aws-encode-object are aliases for each other.

    Details

    When sending a request to an HTTP server it is necessary to encode certain characters such that the server can accurately determine their meaning in context. The encoding involves replacing those characters with a percent symbol - % - followed by two hexadecimal digits representing the ASCII value of that character.

    Note that the last parameter to the uri statement is a variable name, so to encode the contents of a variable called my_query the correct statement would be uri encode my_query and not uri encode ${my_query} (The latter would only be correct if the value of my_query was the name of the actual variable to encode)

    USE script provides the following methods for encoding the contents of a variable:

    encode

    uri encode varname

    This method will encode all characters except for the following:

    This is typically used to encode a URI which contains spaces (spaces encode to %20) but doesn't contain any query parameters.
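
    For instance, a minimal sketch (with a hypothetical variable value):

    # Spaces are encoded as %20; letters and digits are left untouched
    var report_path = "monthly usage report"
    uri encode report_path
    
    # ${report_path} now contains: monthly%20usage%20report
    print ${report_path}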

    encode-component

    uri encode-component varname

    This method will encode all characters except for the following:

    This is typically used to encode query components of a URI, such as usernames and other parameters. Note that this method will encode the symbols =, & and ? and as such a URL of the form:

    server.com/resource?name=name_value&domain=domain_value

    is usually constructed from its various components using the values of the parameters as shown in the example below.
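
    A minimal sketch along these lines (the parameter values and the https scheme are hypothetical):

    # Values that will be placed into the query string
    var name_value = "John Doe"
    var domain_value = "example.com"
    
    # Encode each component separately so that any =, & or ? characters are escaped
    uri encode-component name_value
    uri encode-component domain_value
    
    # Assemble the full URL from its components
    buffer response = http GET "https://server.com/resource?name=${name_value}&domain=${domain_value}"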

    aws-object-encode

    uri aws-object-encode varname

    This method is specifically implemented to support the encoding of object names when downloading from Amazon S3 buckets. Amazon S3 buckets appear much like shared directories, but they do not have a hierarchical filesystem.

    The 'files' in buckets are termed objects and to assist in organizing the contents of a bucket, object prefixes may be used to logically group objects together.

    These prefixes may include the forward-slash character, making the resulting object name appear identical to a conventional pathname (an example might be billing_data/20180116_usage.csv). When downloading an object from S3 the object name is provided as part of the HTTP query string.

    When referencing an S3 object name there is an explicit requirement not to encode any forward slashes in the object name. USE therefore provides the aws-object-encode method to ensure that any S3 object names are correctly encoded. This method will encode all characters except for the following:

    More information may be found in the AWS documentation, where it states:

    URI encode every byte. UriEncode() must enforce the following rules:

    URI encode every byte except the unreserved characters: 'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~'.

    The space character is a reserved character and must be encoded as "%20" (and not as "+").

    Each URI encoded byte is formed by a '%' and the two-digit hexadecimal value of the byte.

    Letters in the hexadecimal value must be uppercase, for example "%1A".

    Encode the forward slash character, '/', everywhere except in the object key name. For example, if the object key name is photos/Jan/sample.jpg, the forward slash in the key name is not encoded.

    The aws-object-encode method is compliant with the above requirements. For most trivial cases it should not be necessary to encode the AWS object name as it is relatively straightforward to do it by hand. However, using uri aws-object-encode to URI-encode the object name may be useful for object names that contain a number of characters not listed above, or for cases where the object name is provided as a parameter to the USE script.

Example

var name = "example/name"
var domain = "example@domain.com"

uri encode-component name
uri encode-component domain

var URL = "http://server.com/resource?name=${name}&domain=${domain}"
print URL is now: ${URL}

The above script will output:

URL is now: http://server.com/resource?name=example%2Fname&domain=example%40domain.com

    Configuration

    The 'Data Pipelines' menu allows an admin of the Exivity solution to manage Transcript 'Transformer' scripts. Transcript has its own language reference, which is fully covered in a separate chapter of this documentation.

    As described in the Transcript Documentation, you are free to use your editor of choice to create and modify Transformers. However, the GUI also comes with a built-in Transformers-editor.

    Creating Transformers

    To create a new Transformer for Transcript, follow these steps:

    1. From the menu on the left, select Data Pipelines > Transformer

    2. To create a new Transformer to normalise and enrich USE Extractor consumption and lookup data, click the 'Create' button

3. When your Exivity instance has access to the Internet, it will pull in the latest set of Transformer Templates from our Github account. These templates are then presented to you, and you can pick one from the list to start transforming. If you don't have access to the internet, you can download them directly from Github. You are also free to start creating your own Transformer from scratch.

    4. Provide a meaningful name for your Transformer. When we create a Transformer for a consolidated bill of various IT resources we would, for example, name it: 'IT Services Consumption'.

5. When you're done creating your Transformer, click the Create button at the bottom of the screen.

    The Transformer editor has syntax highlighting and auto-completion, to simplify the development of your scripts

    Edit and Delete Transformers

When you want to change or delete an existing Transformer, first select the Transformer that you want to change from the list:

    1. When you've selected your Transformer from the Data Pipelines > Transformers list, you can change the Transformer script in the editor

    2. In this example, we're adding a 'services' statement using auto-completion, to simplify the creation of services

3. In case you want to save your changes, click the Save button at the bottom of the Editor screen. To delete this Transformer, click the Remove button, after which you'll receive a confirmation pop-up where you'll have to click OK.

    Run and Schedule Transformers

    To test your Transformer, you can execute or schedule it directly from the interface:

    1. After you have selected the Transformer that you would like to run, click on the Run tab next to the Editor tab

    2. Manual execution of a Transformer can only be done for a single day. Provide the date you want to run this transformer for in dd-MM-yyyy format. You can also use the date picker, by clicking on the down facing arrow, on the right side of the date field

3. When you've provided the required date, click 'Run Now' to execute the Transformer. After the Transformer has completed running, you will receive a success or failure message, after which you might need to make additional changes to your Transformer. For further investigation or troubleshooting, consult the "Log Viewer" found under the administration drop-down menu at the top right of the screen

4. Once you're happy with your output, you can schedule the Transformer via the Schedule tab, which is located next to the Run tab at the top of the screen

5. A Transformer can be scheduled to run once a day at a specific time. You should also provide a date, using an offset value. For example, if you want to execute this Transformer against yesterday's date with every scheduled run, you should provide a value there of -1

6. When you're done with the schedule configuration, you may click the 'Schedule' button. In case you want to change or remove this schedule afterwards, click the Unschedule button.

As of version 1.6, it is recommended to use the Workflow function instead of the Schedule tab to schedule Transformers.

    Azure CSP

    Introduction

    When deploying the Azure CSP for Exivity, some configuration is required within your Microsoft Cloud Solution Provider Portal. The following process must be completed in order to report on Azure CSP consumption:

    1. Create a Partner Center Web Application

    csv

The csv statement is used to create and populate CSV files. It is typically combined with foreach loops to write values extracted from an array in a JSON and/or XML document stored in a named buffer.

    Details

    CSV files are produced via the use of multiple csv statements which perform the following functions:

    "mq": {
            "servers": [
                {
                    "host": "localhost",
                    "port": 5672,
                    "secure": false
                }
            ],
            "user": "guest",
            "password": "guest",
            "vhost": "/",
            "nodeID": "TR2021",
            "redialPeriod": 5
        }

• Create a new empty CSV file

  • Define the headers

  • Finalise the headers

  • Write data to one or more rows of the file

  • Close the file

All CSV files created by the csv command use a comma as the separator character and a double quote as the quote character. Headers and data fields are automatically separated and quoted.

    Create a new CSV file

    The following is used to create a new, empty CSV file:

csv label = filename

    The label must not be associated with any other open CSV file. Any number of CSV files may be open simultaneously and the label is used by subsequent csv statements to determine which of the open files the statement should operate on. Labels are case sensitive and may be from 1 to 15 characters in length.

    The specified filename is created immediately, and if it is the name of an existing file then it will be truncated to 0 bytes when opened.

    The filename argument may contain a path component but the csv statement does not create directories, so any path component in the filename must already exist. The path, if specified, will be local to the Exivity home directory.

    Example

    csv usage = "${exportdir}/azure_usage.csv"

    Define the headers

    This section refers to add_headers as the action, but either add_header or add_headers may be used. Both variants work identically.

csv add_headers label header1 [header2 ... headerN]

    All CSV files created by USE script must start with a header row that names the columns in the file. The number of columns can vary from file to file, but in any given file every data row must have the same number of columns as there are headers.

    To create one or more columns in a newly created CSV file, the csv add_headers statement is used as shown above. The label must match the label previously associated with the file as described previously.

    One or more header names can be specified as arguments to csv add_headers. Multiple instances of the csv add_headers statement may reference the same CSV file, as each statement will append additional headers to any headers already defined for the file.

    No checks are done to ensure the uniqueness of the headers. It is therefore up to the script author to ensure that all the specified headers in any given file are unique.

    Example

    csv add_headers usage username user_id subscription_id

    Finalise the headers

This section refers to fix_headers as the action, but either fix_header or fix_headers may be used. Both variants work in an identical fashion.

csv fix_headers label

    After csv add_headers has been used to define at least one header, the headers are finalised using csv fix_headers statement. Once the headers have been fixed, no further headers can be added to the file and until the headers have been fixed, no data can be written to the file.

    Example

    csv fix_headers usage

    Write data

This section refers to write_fields as the action, but either write_field or write_fields may be used. Both variants work in an identical fashion.

csv write_fields label value1 [value2 ... valueN]

After the headers have been fixed, the csv write_fields statement is used to write one or more fields of data to the CSV file. Currently it is not possible to write a blank field using csv write_fields; however, when extracting data from a buffer using a parslet, if the extracted value is blank then it will automatically be expanded to the string (no value).

    USE keeps track of the rows and columns as they are populated using one or more csv write_fields statements, and will automatically write the fields from left to right starting at the first column in the first data row and will advance to the next row when the rightmost column has been written to.

    It is the responsibility of the script author to ensure that the number of fields written to a CSV file is such that when the file is closed, the last row is complete, in order to avoid malformed files with one or more fields missing from the last row.

Example

csv write_fields usage Eddy 47EF-26EA-AAF1-B199 SUB_2311_89EFAA1273
csv write_fields usage Tim 2492-ACC2-8829-4444 SUB_2991_BBAFE20BBA

    Close the file

csv close label

    Once all fields have been written to a CSV file, it must be closed using the csv close statement. This will ensure that all data is properly flushed to disk, and will free the label for re-use.

    Example

    csv close usage

    Example

    Consider the file "\examples\json\customers.json" representing two customers:

    Using a combination of foreach loops and parslets, the information in the above JSON can be converted to CSV format as follows:

    The resulting CSV file is as follows:

        {
          "totalCount": 2,
          "items": [
            {
              "id": "1234-4567",
              "companyProfile": {
                "tenantId": "xyz-abc",
                "domain": "example.domain.com",
                "companyName": "Example, Inc"
              }
            },
            {
              "id": "9876-6543",
              "companyProfile": {
                "tenantId": "stu-vwx",
                "domain": "another.domain.com",
                "companyName": "A Company, Inc"
              }
            }
          ]
        }
            # Load the file into a named buffer
            buffer customers = FILE "${baseDir}\examples\json\customers.json"
    
            # Create an export file
            csv "customers" = "${baseDir}\exported\customers.csv"
    
            # Initialise and fix the headers (using two 'add_headers' statements for illustration)
            csv add_headers "customers" id tenant_id 
            csv add_headers "customers" domain company_name
            csv fix_headers "customers"
    
            # Iterate over the 'items' array in the JSON
            foreach $JSON{customers}.[items] as this_item
            {
                csv write_field "customers" $JSON(this_item).[id]
                csv write_field "customers" $JSON(this_item).[companyProfile].[tenantId]
                csv write_field "customers" $JSON(this_item).[companyProfile].[domain]
                csv write_field "customers" $JSON(this_item).[companyProfile].[companyName]
            }
    
            # Tidy up
            csv close "customers"
            discard {customers}
        "id","tenant_id","domain","company_name"
        "1234-4567","xyz-abc","example.domain.com","Example, Inc"
        "9876-6543","stu-vwx","another.domain.com","A Company, Inc"
• Exivity Web/UI node(s)

  • 1 or more Exivity Web/UI nodes should be deployed to support the customer facing web application.

• Exivity API/Backend node(s)

  • 1 or more Exivity API/Backend nodes should be deployed to support the customer facing web application.

    • Storage

      • In multi-node environments, it is advisable to provide a shared storage device (i.e. SMB/NFS) that can be accessed by the backend nodes

  • PostgreSQL node or cluster

• The Exivity solution relies on a PostgreSQL version 10 (or higher) compliant database engine.

  • RabbitMQ node or cluster

    • Exivity relies on a RabbitMQ version 3.8 (or higher) message broker

  • shared_buffers = 2GB

    work_mem = 32MB

    wal_buffers = 64MB

    max_prepared_transactions = 16


    Transformer Templates available in the GUI

    Configure Extractors for Azure CSP Usage, Billing & Ratecard

  • Configure Transformers

  • Create your Report

  • Create your Workflows

• It is necessary to create independent Extractors/Transformers for Usage and Billing. The Usage Extractor will retrieve data daily, giving an estimation of your daily costs, while the Billing Extractor will consolidate the rates based on the blended costs per service for the billing period.

    Create a Partner Center Web Application

Perform the following steps to create the Azure AD application that is configured to access the Partner Center API:

    • Browse to Partner Center, https://partnercenter.microsoft.com, and log in using credentials that have admin agent and global admin privileges

• Navigate to Dashboard –> Account Settings –> App Management

    • Click Add key to create a new Application Key for your App ID that can be used with Exivity.

    Make sure to write down the App ID and its corresponding Key, since you will need these when configuring the Extractor later.

    • Go to the Billing section.

    • Open the last month's invoice in pdf format.

    • Take note of your billing period

    Make sure to write down the billing period, since you will need it when configuring the Extractor later.

    Configure Extractors for Azure CSP Usage, Billing & Ratecard

    Go into the Exivity GUI and browse to Data Sources -> Extractors. Then click on Create Extractor and you should get a list of templates. Unfold Azure CSP and pick the usage template:

    After selecting the template, click the green Create button on the bottom right. Now make sure to give the new Extractor a name in the field at the top:

    Provide a name for the new Extractor

    Now click again on the green Create button at the bottom right. Then click on the Variables menu item:

    Fill in your Microsoft CSP connection details

Now make sure to fill in your Client ID, Secret and your onmicrosoft.com domain. When required, you can encrypt security-sensitive fields using the lock button on the right of each field. Once you have filled in these details, click the Update button.

    Now test the Extractor by going into the Run tab and providing a from and to date like in this example:

    Provide a FROM and TO date in the Run tab

    Now click the Run Now button and confirm it works as expected:

    View Extractor execution results

    Create a second extractor using the template Azure_CSP_Invoice_Extractor and give it a name.

This extractor uses the same variables in the Variables menu item as the previous extractor. You can now test the extractor by going to the Run tab. This script uses 3 arguments:

• Positive offset: starting at 0, which retrieves the most recent invoice; a 1 will retrieve the previous invoice, and so on.

    • Year of the report: Year of the report you want to retrieve.

    • Starting day of the billing period: If your billing period goes from 22nd to 21st, the input will be 22.

After filling in the arguments you can test the extractor by clicking Run Now.

Finally, follow the same steps for the Azure Rate Card Extractor; this extractor does not need any arguments.

    Configure Transformers

Once you have successfully run your Azure CSP Usage, Billing & Rate Card Extractors, you can create the Transformers from templates via Data Sources -> Transformers in the Exivity GUI. Browse to this location and click the Create Transformer button. You will need to create two separate Transformers using these two templates:

    The Azure_CSP_Daily-Usage Transformer will transform the daily usage data and the Azure_CSP_End-of-Month Transformer will consolidate the usage with the final blended rates.

Make any changes that you feel necessary and then select the Run tab to execute it for a single day as a test. Make sure that when running the Transformer you select custom range in the drop-down menu labelled Run for and select the same day as for which you have extracted consumption data in the previous step.

    Create a Report

    Once you have run both your Extractor and Transformer successfully create a Report Definition via the menu option Reports > Definitions:

    Creating a Report Definition

    Select the column(s) by which you would like to break down the costs. Once you have created the report, you should then click the Prepare Report button after first making sure you have selected a valid date range from the date selector shown when preparing the report.

    Prepare your Report

    Once this is done you should be able to run any of Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Create your Workflows

You may want to automate the CSP ETL process; you can achieve this by leveraging Exivity's Workflow capabilities. You will create two Workflows: one will run on a daily basis, calculating the usage of your CSP subscriptions, and the other will run on a monthly basis to consolidate the service rates.

    Start by browsing to Administration -> Workflows in the Exivity GUI and click on the +Create button.

    Fill the Name and Description fields, in the SCHEDULES section configure the workflow to run on a daily basis at a convenient time.

    In the STEPS section you can create as many steps as needed by adding them with the + button. For the first daily workflow, a minimum of 4 steps are required, two steps for the Usage and Ratecard extractors, one for the Transformer and one for the Report. Make sure to input the right FROM and TO date offsets. Click on Update to finish the creation of your first Workflow.

Create a second Workflow, fill in the Name and Description fields and configure the SCHEDULES section to run on a monthly basis, preferably 2 to 3 days after your month's billing period has finished. For the monthly workflow, a minimum of 3 steps is required: one step for the Billing extractor, one for the Transformer and one for the Report. Make sure to input the right FROM and TO date offsets (to cover the entire billing period) and arguments. Click on Update to finish the creation of the second Workflow.

    Extraction template

    Aggregation Levels and the Account Hierarchy

    Tiered rate revisions, report drill-downs and the aggregation level of a tiering configuration

    This article describes the implications of tiered services in the context of Exivity's hierarchical account system.

    Tiered Rate Configurations and Owners

    Key point

    The parameters defining how a tiered service is to be charged, including such things as the bucket ranges and rates, are collectively termed a Tier Configuration

    As with non-tiered service rates, Tier Configurations may have revisions such that they change over time (for example the prices for one year may be different to those of the previous year). However in the case of tiered services these revisions can only be changed on the boundary of a calendar month.

    This is because tiering is always applied to a monthly quantity derived from summing the instance quantities seen on each day and this requires consistency of the configuration for all days of the month.

    Tier configurations can be Global or Custom. A tiered service must have one Global configuration and may have any number of Custom configurations.

    The Global configuration is the default which will be used for calculating charges for all accounts that don't have a Custom tier configuration of their own. Thus by extension, Custom configurations are explicitly associated with a specific account.

    Key point

    The account associated with a Custom configuration is termed the Owner Account of that configuration

    A tiering configuration contains the following information:

Item | Description
Bucket ranges | One or more ranges which collectively define how a monthly quantity is to be allocated to buckets
Tiering type | Whether Standard or Inherited tiering should be performed
Owner ID | The ID of the account associated with this tier configuration. Global configurations have an owner ID of 0
Aggregation Level | The level of the account hierarchy (with 1 being the highest level) to which quantities should be summed before tiering is applied to the resulting aggregate quantity

    The Aggregation Level

    The aggregation level is the level of the account hierarchy at which tiering is applied to the quantity. For aggregation levels higher than the lowest level account, the quantity to be tiered will be the sum of all the quantities at lower levels.

    When creating a custom tier configuration, the aggregation level must be at or below the level of the owner account.

    Aggregation with a simple 2-level account hierarchy

    Consider a Standard tier configuration where the following bucket ranges and rates apply:

Bucket | Range | Rate
1 | 0 + | 10.00
2 | > 5 | 5.00
3 | > 10 | 3.00

    The diagram below illustrates how tiering would be applied to a simple 2-level account hierarchy, where the aggregation level is set to level 1 (the highest level).

    In the above diagram the aggregation level is surrounded with an orange box and the total quantity of the lowest-level (Level 2) accounts is surrounded with a purple box.

It can be seen that the quantity consumed by the two child accounts Level2A and Level2B is summed (aggregated) to determine the quantity at the aggregation level account Level1A.

    It is this aggregated quantity that is allocated to the buckets (at the aggregation level) based on the ranges specified in the table above, thus with a total quantity of 40 the first two buckets are allocated 5 each and the rest is placed into Bucket 3.

    Once this allocation has been done the bucket quantities for the lower level accounts Level2A and Level2B are calculated as a proportion of the quantities in the buckets for the Level1A parent account.

    Key point

    When the aggregation level is higher than the lowest level of the account hierarchy, the consumed quantities of the lowest level accounts are still the same but the quantities allocated to the buckets at the lowest level are different than if the aggregation level was at the lowest level.

    The key point above is critical to a proper understanding of how tiered charges work in Exivity so let's review them once more in the context of the diagram above.

    With an aggregation level of 1, the quantities at level 2 are summed, the resulting total quantity is allocated to buckets at the aggregation level and then the buckets at level 2 are set proportionally to the quantities of the buckets at level 1.

In the example illustrated above this leads to bucket quantities of 2.5, 2.5 and 15 for each of the level 2 accounts. Because each level 2 account accounted for half of the consumed quantity, each of its buckets is half the quantity of the corresponding bucket at the level 1 account.
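As a worked illustration, applying the rates from the table above to these allocations gives a charge of 5 x 10.00 + 5 x 5.00 + 30 x 3.00 = 165.00 at the Level1A account, and 2.5 x 10.00 + 2.5 x 5.00 + 15 x 3.00 = 82.50 for each of Level2A and Level2B, i.e. half of the aggregated charge each.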

If the tier configuration specified an aggregation level of 2, the quantities of each level 2 account would have been allocated to buckets locally to that account with no consideration of any other account, such that both level 2 accounts would have had bucket quantities of 5, 5 and 10 as shown below.

    In the above diagram the aggregation level is surrounded with an orange box and the individual quantities of the lowest-level (level 2) accounts are surrounded with purple boxes.

    With the aggregation level set to 2, each of the purple boxes would therefore have been tiered independently.

    Note that when the aggregation is at level 2, no calculations or charges are explicitly performed for level 1. However a report at level 1 in Exivity would still show the charges as the sum of those at the lower levels.

    Aggregation with multiple top-level accounts

    Having discussed the 'parent and two child accounts' scenario above we can now extend that example such that we have more than one parent account.

    The following diagram illustrates how tiering is performed in a situation where there are two level 1 accounts, each of which has two child accounts.

    As before, the aggregation level is boxed in orange and the quantities to which tiering is applied are boxed in purple.

    The manner in which tiering is applied is no different to that described previously. The main difference in the illustration above is simply that each of the level 1 accounts had tiering applied to them independently.

    Key point

    When there are multiple accounts at the aggregation level, tiering is applied independently to each of them and the resulting bucket allocations for each are further distributed down across their child accounts.

    Looking at the diagram above it can be seen that there are actually two 'subtrees' of accounts in the scenario. The root of each of these trees is a Level 1 account and the children of that account are evaluated in order to perform the tiering at the top level.

    Key point

    The bucket allocations (and the resulting charge) for each of the child accounts of Level1B are different to one another. This is because the Total Monthly Quantity of each child account has been used to determine the percentage of the quantity at the aggregation level that the child represents.

    Thus each bucket of each child gets the same percentage allocated to it based on the tiered quantities at the aggregation level.

    Mixed level aggregations

    Developing further the scenario previously outlined, let's consider there are two configurations: the Standard configuration in the beginning of this article and another Standard configuration, which is applied at a level two account.

    The first Standard tier configuration:

Bucket | Range | Rate
1 | 0 + | 10.00
2 | > 5 | 5.00
3 | > 10 | 3.00

    The second Standard tier configuration:

Bucket | Range | Rate
1 | 0 + | 20.00
2 | > 10 | 10.00
3 | > 15 | 5.00

    In the diagram above, the orange box uses the first Standard Configuration, while the purple outlined account has been configured to use the second Standard config.

    As we can observe, it is possible for multiple configurations to co-exist: the Level1C account distributes its quantities in buckets by the rules of the second configuration.

    start /wait Exivity_{version}_setup.exe /S ^
    	/PSQL_INSTALLED=0 ^
    		/PGUSER=exivityadmin /PGPASSWORD=S3cret!123 /PGHOST=db.exivity.local /PGPORT=5432 /PGDB=exivitydb ^
    	/MQ_INSTALLED=0 ^
    		/MQHOST=mq.exivity.local /MQPORT=5671 /MQUSER=exivity /MQPASSWORD=My5pecia1pas9 /MQVHOST=exvt /MQSSL=1 ^
    	/WEB_INSTALLED=0 ^
    	/JOBMAN_INSTALLED=1 ^
    	/SCHEDULER_INSTALLED=1 ^
    	/API_INSTALLED=1 ^
    	/BACKEND_INSTALLED=1 
    Exivity_{version}_setup.exe /S ^
    	/EXIVITY_PROGRAM_PATH=C:\Exivity\Program /EXIVITY_HOME_PATH=D:\Exivity\home ^
    	/PSQL_INSTALLED=0 ^
    	/MQ_INSTALLED=0 ^
    		/MQHOST=mq.exivity.local /MQPORT=5671 /MQUSER=exivity /MQPASSWORD=My5pecia1pas9 /MQVHOST=exvt /MQSSL=1 ^
    	/JOBMAN_INSTALLED=0 ^
    	/SCHEDULER_INSTALLED=0 ^
    	/API_INSTALLED=0 ^
    		/PROXIMITYHOST=remote.api.local /PROXIMITYPORT=443 ^
    	/WEB_INSTALLED=1


    Aggregation at level 1 of a 2-level account hierarchy
    Aggregation at level 2 of a 2-level account hierarchy
    Aggregation at level 1 of a 2-level account hierarchy with multiple level 1 accounts
    Mixed level aggregation

    Services

    An introduction to Services

In Exivity, Services can be anything that corresponds to an SKU or sellable item from your Service Catalogue. A service should relate to a consumption record (or multiple records) from your extracted data sources.

    Basic Example Services

For example: with most public cloud providers, the provider defines the chargeable items that are shown on the end of month invoice. However, when working through a Managed Services Provider, a Cloud Services Provider, or a System Integrator, additional services can be sold on top of those. Potentially, you may want to apply an uplift to the rate or charge a fixed amount of money every month for a certain service. Different scenarios are possible here; it all depends on your business logic.

    Terminology

    A service is a named item with associated rates and/or costs used to calculate a charge that appears on a report, where rates represent revenue and costs represent overheads.

When discussing services and their related charges several terms are required. Exivity uses the following terminology in this regard:

Term | Synonym/Abbreviation | Meaning
service definition | service | A template defining how service instances should be charged
service instance | instance | Consumption of a service, associated with a unique value such as a VM ID, a VM hostname, a resource ID or any other distinguishing field in the usage data
unit of consumption | unit | The consumption of 1 quantity of a service instance
charge interval | interval | The period of time that a unit of consumption is charged over (additional units of the same service instance consumed within the charge interval do not increase the resulting charge)
unit rate | rate | The charge associated with 1 unit of consumption of a service instance in the charge interval
COGS rate | cogs | (Short for Cost Of Goods Sold) The cost (overhead) to the provider of a service for providing 1 unit of consumption of that service per charge interval
charge | | A generic term to indicate some money payable by the consumer of service instances to the provider of those instances

    Creating service definitions

When created during the ETL process, service definitions are created via the services statement in Transcript. During the execution of a Transcript task, service definitions created by these statements are cached in memory. Once the task has been completed successfully, the cached services are written to the global database where they remain indefinitely (or until they are manually deleted).

    If the task does not complete successfully then the service definitions cached in memory are discarded, the expectation being that the task will be re-run after the error condition that caused it to fail has been rectified and the services will be written to the global database at that time.

    Types of charges

    There are different types of charge that can be associated with a service. Collectively these influence the total charge(s) shown on the report and Exivity supports the following charge types as described in the Terminology table above:

    • unit rate

    • COGS rate

    At least one of these charge types must be associated with a service definition.

    Once the resulting charge has been calculated based on the charge types, it may be further modified through the application of adjustments, proration and minimum commit (all of which are detailed later in this article).

    Charge intervals

    In order to calculate the charge(s) associated with usage of a service Exivity needs to know the period for which each payment is valid. For example, a Virtual Machine may have a daily cost associated with it, in which case using it multiple times in a single day counts as a single unit of consumption whereas Network Bandwidth may be chargeable per Gigabyte and each gigabyte transferred is charged as it occurs.

    The charge interval (also termed simply interval) for a service can be one of the following:

    • individually - the charge for a service is applied every time a unit of the service is consumed, with no regard for a charging interval

    • daily - the charge is applied once per day

    • monthly - the charge is applied once per calendar month

Although hourly charge intervals are not currently supported directly, it is possible to charge per hour by aggregating hourly records and using the EXIVITY_AGGR_COUNT column created during the aggregation process to determine the units of hourly consumption as a result.

    Charge models

    Monthly services may be charged in different ways:

    Peak

    For each day of the month, a 'candidate' charge is calculated using Quantity * Unit Rate. The monthly charge will reflect the day of the month which resulted in the highest charge.

    If multiple days share the same highest charge then that charge will be associated with the first of those days seen, unless a subsequent day in that set has a higher quantity, in which case the charge will be associated with that subsequent day.

    Average

    The average unit rate for those days where usage was seen in the month is calculated and multiplied by the average quantity for each day in the month. When calculating the average quantity, any days for which there was no consumption are factored in as having a quantity of 0.
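As a worked illustration of the rule above (the numbers are hypothetical): for a 30-day month in which a quantity of 6 was consumed on 15 days at a unit rate of 10.00, the average unit rate is 10.00 and the average quantity is (15 x 6) / 30 = 3, giving a monthly charge of 10.00 x 3 = 30.00.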

    If Average Charging is applied in combination with proration, then the resulting average unit price as shown on reports may be less than you expect to see. This is because the average unit price as shown on the reports is calculated using charge / average_quantity and proration will reduce the charge if there was no consumption on one or more days of the month, resulting in a lower average unit price.

    Specific Day

    The charge is based on the quantity consumed on a specific day of the month.

    Last Day

    The charge is based on the quantity consumed on the last day of the month.

    Minimum commit

    The minimum commit is the minimum number of units of consumption that are charged every interval, or (in the case of services with an interval of individually) every time the service is used. If fewer units than the minimum commit are actually consumed then the service will be charged as if the minimum commit number of units had been used.
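For example (hypothetical numbers): if a service with a daily interval has a minimum commit of 10 and only 6 units are consumed on a given day, that day is charged as if 10 units had been consumed; if 14 units are consumed, the actual 14 units are charged.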

    Proration

    After the charge for usage of a monthly service has been determined, it may be prorated by modifying that charge based on the frequency of the usage.

    This process will reduce the charge based on the number of days within the month that the service was used. For example, if consumption of a service with a monthly charge interval was only seen for 15 days within a 30 day calendar month then the final charge will be 1/2 of the monthly charge.

    Service definitions

    A service definition comprises two categories of information:

    1. The service - Metadata describing fixed attributes of the service such as its name, description, group, interval, proration and charge type(s)

    2. The rate revision - Information detailing the charge type(s) associated with the service (the rate and COGS values) and additional information detailing the date(s) for which those values should be applied

A service definition is associated with a specific DSET, as the units of consumption are retrieved from a column (named in the service definition itself) in the usage data.

    The following tables summarize the data members that comprise each of these categories:

    Service attributes

RDF, rate_type and cogs_type are automatically derived from the parameters provided to the service and services statements

Attribute | Purpose
key | A unique key (as a textual string) used to identify the service
description | A user-defined description or label for the service
group or category | An arbitrary label used to group services together
unit label | A label for the units of measure, such as 'GB' for storage
RDF or DSET | The DSET ID of the usage data against which the service is reported
interval | The charging interval for the service, such as 'daily', 'monthly' etc.
proration or model | Whether the service is prorated or not
charge model | (Monthly services only) Whether to use Peak or Average charging for the service
rate type | Whether the rate is disabled, explicitly specified or derived from the usage data
cogs type | Whether the COGS rate is disabled, explicitly specified or derived from the usage data
usage_col | The name of the column in the usage data from which the number of units consumed can be derived
minimum commit | The minimum commit value for the service (if this is 0 then no minimum commit is applied)

    Rate revision attributes

The rate_col and cogs_col are used when the specific value to use is derived at report-time from the usage data, as opposed to explicitly being included in the rate revision itself.

Field | Description
rate | The cost per unit of consumption
rate_col | The name of a column containing the cost per unit of consumption
cogs | (Short for Cost Of Goods Sold) The cost per unit associated with delivery of the service
cogs_col | The name of a column containing the COGS cost per unit
effective_date | A date in yyyyMMdd format (stored internally as an integer) from which the rate is valid

    A service may have any number of associated rate revisions so long as they have different effective_date or minimum commit values. This means that a service can have different charges applied depending on the date that the report is to be generated for, or depending on the specific values in the columns used by a report.

    A service may use either or both of rate and cogs.

Either of rate or cogs may have a value of 0.0, in which case no charges will be levied against the service but the units of consumption will still be shown on reports.

    set

The set statement is used to configure a setting for use by subsequent http or buffer statements.

    Syntax

set setting value

    Details

    A protocol such as http offers several configuration options. Any given option is either persistent or transient:

    Type

    Meaning

    Persistent

    The setting remains active indefinitely and will be re-used over successive HTTP calls

    Transient

    The setting only applies to a single HTTP call, after which it is automatically reset

    The following settings can be configured using set:

    http_progress

    set http_progress yes|no

Persistent. If set to yes then dots will be sent to standard output to indicate that data is downloading while an HTTP session is in progress. When downloading large files, a lengthy delay with no output may be undesirable; the dots indicate that the session is still active.

    http_username

set http_username username

    Persistent. Specifies the username to be used to authenticate the session if the http_authtype setting is set to anything other than none. If the username contains any spaces then it should be enclosed in double quotes.

    http_password

set http_password password

    Persistent. Specifies the password to be used to authenticate the session if the http_authtype setting is set to anything other than none. If the password contains any spaces then it should be enclosed in double quotes.

    http_authtype

set http_authtype type

    Persistent. Specifies the type of authentication required when initiating a new connection. The type parameter can be any of the following:

    Value

    Meaning

    none (default)

    no authentication is required or should be used

    basic

    use basic authentication

    ntlm

    use NTLM authentication

    digest

    use digest authentication
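For example, basic authentication for subsequent requests could be configured as follows (the credentials shown are placeholders):

set http_authtype basic
set http_username "api_user"
set http_password "S3cret Passw0rd"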

    http_authtarget

set http_authtarget target

    Persistent. Specifies whether any authentication configured using the http_authtype setting should be performed against a proxy or the hostname specified in the http URL.

    Valid values for target are:

    • server (default) - authenticate against a hostname directly

    • proxy - authenticate against the proxy configured at the Operating System level

    http_header

    set http_header"name: value"

    Persistent. Used to specify a single HTTP header to be included in subsequent HTTP requests. If multiple headers are required, then multiple set http_header statements should be used.

    An HTTP header is a string of the form name: value.

    There must be a space between the colon at the end of the name and the value following it, so the header should be enclosed in quotes

    Example: set http_header "Accept: application/json"

    Headers configured using set http_header will be used for all subsequent HTTP connections. If a different set of headers is required during the course of a USE script then the clear statement can be used to remove all the configured headers, after which set http_header can be used to set up the new values.

    By default, no headers at all will be included with requests made by the http statement. For some cases this is acceptable, but often one or more headers need to be set for a request to be successful.

    Typically these will be an Accept: header for GET requests and an Accept: and a Content-Type: header for POST requests. However, there is no hard and fast standard so the documentation for any API or other external endpoint that is being queried should be consulted in order to determine the correct headers to use in any specific scenario.

    Headers are not verified as sane until the next HTTP connection is made

    http_body

set http_body data string - use the specified string as the body of the request

set http_body file filename - send the specified file as the body of the request

set http_body {named_buffer} - send the contents of the named buffer as the body of the request

    Transient. By default, no data other than the headers (if defined) is sent to the server when an HTTP request is made. The http_body setting is used to specify data that should be sent to the server in the body of the request.

    When using http_body a Content-Length: header will automatically be generated for the request. After the request this Content-Length: header is discarded (also automatically). This process does not affect any other defined HTTP headers.

    After the request has been made the http_body setting is re-initialised such that the next request will contain no body unless another set http_body statement is used.
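As a minimal sketch (the endpoint, variables and payload are hypothetical, and the request itself is issued here via the http protocol of the buffer statement), a POST request with a body could look like this:

set http_header "Content-Type: application/x-www-form-urlencoded"
set http_body data "grant_type=client_credentials&client_id=${client_id}&client_secret=${client_secret}"
buffer token_response = http POST "https://login.example.com/oauth/token"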

    http_savefile

set http_savefile filename

    Transient. If set, any response returned by the server after the next HTTP request will be saved to the specified filename. This can be used in conjunction with the buffer statement, in which case the response will both be cached in the named buffer and saved to disk.

    If no response is received from the next request after using set http_savefile then the setting will be ignored and no file will be created.

    Regardless of whether the server sent a response or not after the HTTP request has completed, the http_savefile setting is re-initialised such that the next request will not cause the response to be saved unless another set http_savefile statement is used.

    No directories will be created automatically when saving a file, so if there is a pathname component in the specified filename, that path must exist.

    http_savemode

set http_savemode mode

    Persistent.

    • If mode is overwrite (the default) then if the filename specified by the set http_savefile statement already exists it will be overwritten if the server returns any response data. If no response data is sent by the server, then the file will remain untouched.

    • If mode is append then if the filename specified by the set http_savefile statement already exists any data returned by the server will be appended to the end of the file.

    http_timeout

set http_timeout seconds

Persistent. After a connection has been made to a server it may take a while for a response to be received, especially on some older or slower APIs. By default, a timeout of 5 minutes (300 seconds) is applied before an error is generated.

This timeout may be increased (or decreased) by specifying a new timeout limit in seconds, for example:

set http_timeout 60    # Set timeout to 1 minute

The minimum allowable timeout is 1 second.

    http_retry_count

set http_retry_count count

    Persistent. Sets the number of retries that will be made in case of transport-level failures, such as an inaccessible server or a name resolution issue. Server responses with non-200 HTTP code are not considered transport-level failures.

    By default, this option has a value of 1, which means one initial request and one retry. To disable retrying set the value to 0.

    http_retry_delay

set http_retry_delay milliseconds

    Persistent. Set the pause between retries in milliseconds. The default value is 5000 milliseconds. Used only if http_retry_count is non-zero.

    http_redirect_count

set http_redirect_count count

Persistent. Sets the maximum number of HTTP redirects to follow. Valid values are in the range 0-32, where 0 disables redirects completely. By default, redirects are disabled.

    http_secure

    set http_secure yes|no

    Persistent. Switches on or off several server HTTPS certificate validation checks, such as:

    • certificate is issued by trusted CA (Certificate Authority) or certificate chain of trust can be traversed to trusted CA (list of trusted CAs is located in common/certificates/cacert.pem file within Exivity home directory)

    • server name matches the name in the certificate

    Other certificate checks, such as certificate expiration date, cannot be disabled.

    Starting from Exivity version 3 this option is switched on by default.

    odbc_connect

set odbc_connect connection_string

    Persistent. Sets the ODBC connection string for use by the buffer statement's odbc_direct protocol. The connection string may reference an ODBC DSN or contain full connection details, in which case a DSN doesn't need to be created.

    A DSN connection string must contain a DSN attribute and optional UID and PWD attributes. A non-DSN connection string must contain a DRIVER attribute, followed by driver-specific attributes.
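For example, a minimal DSN-based connection string (the DSN name and credentials are placeholders) could look as follows:

set odbc_connect "DSN=MyDatabase;UID=username;PWD=password"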

    Please refer to the documentation for the database to which you wish to connect to ensure that the connection string is well-formed.

    An example connection string for Microsoft SQL Server is:

    set odbc_connect "DRIVER=SQL Server;SERVER=Hostname;Database=DatabaseName;TrustServerCertificate=No;Trusted_Connection=No;UID=username;PWD=password"


    Upgrading to version 3

    Please ensure to read and understand all subjects that are mentioned here. Implement the suggested changes (where applicable) before upgrading to version 3 to avoid unexpected behavior.

Upgrading to v3.x.x requires upgrading to v2.10.x first. The installer will verify this and display a warning when this requirement is not satisfied.

    PostgreSQL

The biggest single change is the use of a new database engine powering all application state, audit logs and processed report data. Upgrading to this new database engine is transparent and the installer will take care of installing the database server as part of the regular installation process. After or during the upgrade, it is possible to leverage an external PostgreSQL database host.

    Windows Services

    In version 2.x.x there were only two Exivity Services installed:

    With Exivity version 3.0.0 up to version 3.4.3, assuming all components are installed on a single host system, there was a total of 4 different services:

As of version 3.5.0, assuming all components are deployed on a single host, there will be a total of 7 different services:

In case you were using a service account in Exivity version 2.x.x for the Exivity Scheduling Service and the Exivity Web Service, you will have to reconfigure this service account for both services, as well as the Exivity API Service. In most cases, the Exivity Database Service may continue to run under the Local System account.

    In case your current Exivity version 2.x.x installation runs inside an Active/Passive Windows Cluster, you will need to re-register the Cluster Roles for the Exivity Scheduling Service. Additionally, a new Cluster Role should be created for the Exivity Database Service, in case you decide to not use an external database host.

    Default TCP ports

    In v2.x.x, the default port for the Exivity GUI was 8001 and 8002 for the Proximity API. Both services were already available through port 443 (the default port used for HTTPS traffic, which means clients don't have to explicitly specify the port) and in v3.x.x this will be used by default:

This is achieved by shipping a web proxy configuration for Nginx, which routes all requests starting with /v1/ to port 8002 and all other requests to port 8001. The recommended configuration is to not expose port 8001 to the public and only accept incoming traffic on port 443. Port 8002 may still be opened to external hosts, typically in a configuration where the Web and API/Backend components are deployed on separate nodes. In such a scenario it is advisable to only allow communication from the Web node(s).

    Default Security Settings

    As of Exivity version 3, more strict security settings are applied by default. These can be found under Administration > Settings. One important item which should be considered when upgrading a multi-node environment is the use of CORS. It is required to list all possible front end UI nodes in the CORS origins field:

Multiple hosts including https:// may be added while separating each URL using a , (comma) symbol. Wildcards may also be used as part of the hostname to match multiple URLs in one go, such as: https://*.cors.exivity.io. An overview of all current security policies can be found in the online documentation.

    Transformer changes

    @FILE_EXISTS and @FILE_EMPTY

The @FILE_EXISTS and @FILE_EMPTY functions in Transcript have been modified in a manner that may require changes to scripts that use them.

Previously, these functions would only check for files in the system and exported folders within the Exivity home directory, and if a specified path + filename did not start with system/ or exported/ then these would be prepended automatically before the check was done.

    This behaviour has been changed in the following ways in v3.x.x:

    • Any path + filename within the Exivity home directory can now be checked

    • path + filenames are now accepted and treated as being relative to the Exivity home directory

    • The folders system/ and exported/ are no longer automatically prepended

Consider a file somefile.csv in the %EXIVITY_HOME_PATH%/exported folder. Previously, with version 2.x.x, a user could check for the existence of this file using just the filename. In version 3 it is required to include the entire path relative to the %EXIVITY_HOME_PATH%, as illustrated in the sketch below.
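A minimal sketch of the difference, assuming Transcript's if syntax and the example file above:

# Version 2.x.x - exported/ was prepended automatically
if (@FILE_EXISTS("somefile.csv")) {
    # file found
}

# Version 3.x.x - the path is relative to the Exivity home directory
if (@FILE_EXISTS("exported/somefile.csv")) {
    # file found
}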

This change may require modifications to existing Transformer scripts, because the system/ and exported/ folders will no longer be automatically prepended.

    Extractor changes

    HTTP server certificate validation

    The default behaviour of the HTTP subsystem in USE was changed to fully validate server SSL certificates, which may cause some USE scripts to fail. This typically applies to data Extractors, which are connecting to on-premises data sources that use self-signed SSL certificates. In version 2.x.x, the default behavior was to ignore these certificate errors. You can identify these errors in your Extractor logs:

It is highly recommended to use valid SSL certificates, properly signed by a trusted CA (Certificate Authority). However, it is still possible to switch certificate validation off by specifying set http_secure no in an Extractor script before executing an HTTP request.

    The above will apply to all of your Data Extractors where you are connecting to (most likely internal) data sources which are using self-signed certificates. Make sure to apply this change before upgrading to v3 to avoid any data extraction errors.

    Extractor & Transformer Schedules

In version 2.x.x it was still possible to schedule an Extractor or Transformer from the editor screen. This feature has been removed from version 3.x.x. In case you still have schedules that were configured through this interface, you should Unschedule these and create an appropriate Workflow instead.

    GUI changes

    There are some minor changes in the GUI that are not backwards-compatible:

    • We removed the Excel export options from the reports. They were using the CSV format under the hood (i.e. they never actually produced valid Excel worksheets). In the future, we plan to implement proper Excel export formats for the reports, including a full summary report Excel export.

• Removed functionality that would take a custom API port from the #port=xxx location hash parameter on the login screen. Specifying a custom API port (and hostname) is still possible by configuring it on your system.

    Changes to data processing for reports

Due to changes to the processing of reports, when making changes to either services, rates or adjustments, associated reports should be prepared again. In v2.x.x this was already required when making changes to services or rates. Since v3.x.x this is also required when making changes to adjustments. We've planned further improvements to make this more transparent (i.e. handling the preparation of reports automatically in the background).

    API changes

    Normalised date/time data in responses

    Some endpoints were returning dates and timestamps in different formats. This has been normalised in such a way that all responses use the same serialization for dates and timestamps:

    • A date is always represented as ISO-8601 string: "yyyy-mm-dd", e.g. "2020-01-29"

    • A date/time is always represented as ISO-8601 string in UTC: "yyyy-mm-ddThh:mm:ssZ", e.g. "2020-01-29T11:26:52Z". Note that the Z suffix denotes the UTC time standard.

    This affects the responses (attributes) for the following group of API endpoints:

    • /v1/audit

      • created_at

    • /v1/budgetrevisions

    Changed functionality

    /v1/reports/{id}/run endpoint

    • The JSON format in the response is simplified. See examples below.

    This is an example response from v2.x.x:

    This is an example response from v3.x.x:

    Remove deprecated functionality

    /v1/usergroups endpoint

    Removed deprecated permission aliases.

    • upload_files (use manage_files instead)

    • manage_configuration (use manage_settings instead)

    • manage_system

    /v1/reports/{id}/run endpoint

    Functionality deprecated in v2.x.x has been removed. If you were relying on any of the following functionality, please use the suggested replacement instead:

    • The pdf/invoice option for the format query parameter has been removed. Please use pdf/summary instead.

    • The invoice_options query parameter has been removed in favour of summary_options.

    /v1/configuration endpoint

    • Configuration keys prefixed with INVOICE_ are replaced by respective keys prefixed with SUMMARY_.

    /v1/workflowsteplogs endpoint

Also relevant to other endpoints that include workflowsteplogs.

    • Removed the last_log attribute. Include the last log by specifying the query parameters include=steplogs&related[steplogs][limit]=1&related[steplogs][sort]=-start_ts.

    • Removed the timestamp attribute. Use start_timestamp instead.

    • Removed the message attribute. Use a combination of status and output instead.

    /v1/file endpoint

    Only applicable to POST requests to this endpoint.

    • The filename in the response from this endpoint will no longer include the import/ prefix, to better align with other requests to this endpoint. See the example below:

    {
    -    "filename": "/import/generic/2020/01/31_uploaded_001.txt"
    +    "filename": "/generic/2020/01/31_uploaded_001.txt"
    }

    /v1/usergroups endpoint

    The following permissions have been removed in favour of their new counterparts:

    • UPLOAD_FILES has become MANAGE_FILES

    • MANAGE_CONFIGURATION has become MANAGE_SETTINGS

    • MANAGE_SYSTEM has become MANAGE_SETTINGS

    aws_sign_string

    The aws_sign_string statement is used to generate an AWS4-HMAC-SHA256 signature, used as the signature component of the Authorization HTTP header when calling the AWS API.

    Syntax

    aws_sign_string varName using secret_key date region service

    Details

    The authentication method used by AWS requires the generation of an authorization signature which is derived from a secret key known to the client along with specific elements of the query being made to the API.

    This is a fairly involved process and a full step-by-step walkthrough is provided by Amazon on the following pages (these should be read in the order listed below):

    • https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html

    • https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html

    • https://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html

    • https://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html

    The aws_sign_string statement is used to generate the final signature as detailed on the calculate signature page listed above.

    Note that in order to use this statement it is necessary to have the following strings available:

    1. A string to sign, obtained by following the process of creating a string to sign, containing meta-data about the request being made

    2. A secret_key, obtained from Amazon which is used by any client application authorizing against their API

    3. The date associated with the API request, in YYYYMMDD format

    4. The AWS region associated with the API request (for example eu-central-1)

    5. The AWS service being accessed (for example s3)

    The aws_sign_string statement will use these inputs to generate the HMAC-SHA256 signature which is a component of the Authorization header when connecting to the API itself.

    The varName parameter is the name of a variable containing the string to sign. After executing aws_sign_string the contents of this same variable will have been updated to the base-16 encoded signature value.

    If there are any errors in the string to sign, date, AWS region or AWS service strings used as input to aws_sign_string, then a signature will still be generated, but the AWS API will reject the request. In this case it is necessary to review the process by which these strings were created, as per the AWS guides listed above.

    Example

    The following is an example USE script that implements everything described above.

    #################################################################
    # This USE script will download a file from an S3 bucket        #
    #                                                               #
    # It takes three parameters:                                    #
    # 1) The name of the bucket                                     #
    # 2) The name of the object to download                         #
    # 3) The name of the file to save the downloaded object as      #
    #                                                               #
    # Created: 13th Jan 2018                                        #
    # Author: Eddy Deegan                                           #
    # --------------------------------------------------------------#
    # NOTES:                                                        #
    # - This script hardcodes the Region as eu-central-1 but this   #
    #   can easily be changed or made a parameter as required       #
    #################################################################
    
    if (${ARGC} != 3) {
        print This script requires the following parameters:
        print bucketName objectName saveFilename
        terminate
    }
    
    # Set this to 1 to enable a debug trace output when the script is run
    var DEBUG = 0
    
    # This is the text that appears to the left and right of debug headings 
    var banner = ________
    
    ######################################################################
    # Customer specific values here (these can be encrypted if required) #
    #                                                                    #
    var bucket = "${ARG_1}"
    var s3_object = "${ARG_2}"
    var AWS_Region = "eu-central-1"
    var AWS_Service = "s3"
    encrypt var access_key = <YOUR ACCESS KEY>
    encrypt var secret_key = <YOUR SECRET KEY>
    #                                                                    #
    # End customer specific values                                       #
    ######################################################################
    
    # This is the SHA256 hash of an empty string (required if making a request with no body)
    var hashed_empty_string = e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    
    #########################################################################################
    # SETUP                                                                                 #
    # Create a number of variables to represent the various components that the steps       #
    # below are going to use in order to construct a correct AWS request                    #
    #---------------------------------------------------------------------------------------#
    # This is the request syntax for retrieving an object from a bucket:                    #
    # GET /<ObjectName> HTTP/1.1                                                            #
    # Host: <BucketName>.s3.amazonaws.com                                                   #
    # Date: date                                                                            #
    # Authorization: authorization string                                                   #
    #########################################################################################
    
    var HTTP_Method = GET
    var URI = ${s3_object}
    var query_params                    # Must have an empty variable for 'no query parameters'
    var host = ${bucket}.s3-${AWS_Region}.amazonaws.com
    var date = ${OSI_TIME_UTC}
    
    # Initialise config variables specific to this script
    var save_path = "system/extracted"
    var save_file = ${ARG_3}
    
    #########################################################################################
    # STEP 1                                                                                #
    # Create a canonical request as documented at                                           #
    # at https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html  #
    #########################################################################################
    
    # 1a) Canonical Headers string
    #     - This is part of the Canonical Request string which will be generated below.
    #     - The Canonical Headers are a list of all HTTP headers (including values but
    #       with the header names in lowercase) separated by newline characters and in
    #       alphabetical order
    
    var canonical_headers = "date:${date}${NEWLINE}host:${host}${NEWLINE}x-amz-content-sha256:${hashed_empty_string}${NEWLINE}"
    if (${DEBUG} == 1) {
        print ${NEWLINE}${banner} Canonical Headers ${banner}${NEWLINE}${canonical_headers}
    }
    
    # 1b) Signed Headers string
    #     - This is a list of the header names that were used to create the Canonical Headers,
    #       separated by a semicolon
    #     - This list MUST be in alphabetical order
    #     - NOTE: There is no trailing newline on this variable (we need to use it both with and without
    #             a newline later so we explicitly add a ${NEWLINE} when we need to)
    
    var signed_headers = "date;host;x-amz-content-sha256"
    if (${DEBUG} == 1) {
        print ${banner} Signed Headers ${banner}${NEWLINE}${signed_headers}${NEWLINE}
    }
    
    # 1c) Canonical Request
    #     - The above are now combined to form a Canonical Request, which is created as follows:
    #     - HTTPRequestMethod + '\n' + URI + '\n' + QueryString + '\n' + CanonicalHeaders + '\n' +
    #       SignedHeaders + '\n' + Base16 encoded SHA256 Hash of any body content
    #     - Note that the Canonical Headers are followed by an extra newline (they have one already)
    
    var canonical_request = "${HTTP_Method}${NEWLINE}/${URI}${NEWLINE}${query_params}${NEWLINE}${canonical_headers}${NEWLINE}${signed_headers}${NEWLINE}${hashed_empty_string}"
    if (${DEBUG} == 1) {
        print ${banner} Canonical Request ${banner}${NEWLINE}${canonical_request}${NEWLINE}
    }
    
    # 1d) Hash of the Canonical Request
    #     - This is an SHA256 hash of the Canonical Request string
    
    hash sha256 canonical_request as hashed_canonical_request
    
    ######################################################################################
    # STEP 2                                                                             #
    # Create a 'string to sign' as documented at                                         #
    # at https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html  #
    #------------------------------------------------------------------------------------#
    # In a nutshell this is the following components separated by newlines:              #
    # 2a) Hash algorithm designation                                                     #
    # 2b) UTC date in YYYYMMDD'T'HHMMSS'Z' format                                        #
    # 2c) credential scope (date/region/service/"aws4_request")                          #
    # 2d) base16-encoded hashed canonical request                                        #
    ######################################################################################
    
    # Extract the yyyyMMdd from the UTC time
    match yyyyMMdd "(.{8})" ${date}
    var yyyyMMdd = ${yyyyMMdd.RESULT}
    
    var string_to_sign = AWS4-HMAC-SHA256${NEWLINE}${date}${NEWLINE}${yyyyMMdd}/${AWS_Region}/${AWS_Service}/aws4_request${NEWLINE}${hashed_canonical_request}
    if (${DEBUG} == 1) {
        print ${banner} String to sign ${banner}${NEWLINE}${string_to_sign}${NEWLINE}
    }
    
    ######################################################################################
    # STEP 3                                                                             #
    # Calculate the signature for AWS Signature Version 4 as documented at:              #
    # at https://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html    #
    #                                                                                    #
    ######################################################################################
    
    # 3a) Derive a signing key and apply it to the string to sign
    #     Use the secret access key to create the following hash-based auth codes:
    #     a) ksecret (our secret access key)
    #     b) kDate = HMAC("AWS4" + kSecret, Date) NOTE: yyyyMMdd only
    #     c) kRegion = HMAC(kDate, Region)
    #     d) kService = HMAC(kRegion, Service)
    #     e) kSigning = HMAC(kService, "aws4_request")
    #     f) HMAC the string_to_sign with the key derived using steps a - e
    
    var signature = ${string_to_sign}
    
    if (${DEBUG} == 1) {
        print ${banner}Deriving Signing Key using these parameters${banner}${NEWLINE}${secret_key} ${yyyyMMdd} ${AWS_Region} ${AWS_Service}${NEWLINE}${NEWLINE}
    }
    
    # The following statement takes care of all the details listed above
    # Notes:
    #      - The word 'signature' in the statement below is the NAME of a variable and
    #        NOT a reference to its contents
    #      - The contents of this variable are the string to sign, and after the statement
    #        has completed these contents will have been modified to be the authorization
    #        signature for that string
    
    aws_sign_string signature using ${secret_key} ${yyyyMMdd} ${AWS_Region} ${AWS_Service}
    
    ######################################################################################
    # STEP 4                                                                             #
    # Add the signing information to the request as documented at:                       #
    # https://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html  #
    #                                                                                    #
    ######################################################################################
    
    var credential_scope = "${yyyyMMdd}/${AWS_Region}/${AWS_Service}/aws4_request"
    if (${DEBUG} == 1) {
        print ${banner} Credential Scope ${banner}${NEWLINE}${credential_scope}${NEWLINE}${NEWLINE}
    }
    
    var auth_header = "Authorization: AWS4-HMAC-SHA256 Credential=${access_key}/${credential_scope}, SignedHeaders=${signed_headers}, Signature=${signature}"
    
    if (${DEBUG} == 1) {
        print ${banner} Authorization Header ${banner}${NEWLINE}${auth_header}${NEWLINE}
    }
    
    set http_header ${auth_header}
    
    ########################################################
    # STEP 5                                               #
    # Execute the query                                    #
    #------------------------------------------------------#
    # Note that all the headers that were included in the  #
    # signed_headers created in STEP 1 must be set before  #
    # the request is executed                              #
    ########################################################
    
    set http_header "Date: ${date}"
    set http_header "x-amz-content-sha256: ${hashed_empty_string}"
    set http_savefile ${save_path}/${save_file}
    
    set http_progress yes
    print "Downloading ${host}/${URI}:"
    http GET https://${host}/${URI}
    print ${NEWLINE}Done

    Transform

    Transcript executes user-definable scripts (termed tasks) in order to produce one or more Reporting Database Files (RDFs) from one or more input Dataset files in CSV format. These RDFs are later used by the reporting engine to generate results.

    Overview

    Transcript tasks are located in system/config/transcript/ and are ASCII files which can be created with any editor. Both UNIX and Windows end-of-line formats are supported.
    Statements

    Each statement in a Transcript task must be contained on a single line. Statements consist of a keyword indicating the action to perform, followed by zero or more parameters, separated by white-space, required by the statement. Documentation for all the possible statements can be found in the Transcript language reference guide.

    Quotes and escapes

    By default a space, tab or newline will mark the end of a word in a Transcript task. To include white space in a parameter (for example to reference a column name with a space in it) then this can be done by enclosing it in double quotes or escaping it by preceding it with \.

    Examples:

    create columns from "Meter Name" using Quantity
    create columns from Meter\ Name using Quantity

    The following table summarizes the behaviour of quotes and escapes:

    Characters

    Meaning

    " ... "

    Anything inside the quotes, except for a newline or an escape character is treated as literal text

    \"

    Whether within quotes or not, this is expanded to a double quote - " - character

    \t

    When used outside quotes, this is expanded to a TAB character

    \

    When used outside quotes, a space following the \ is treated as a literal character

    \\

    Whether within quotes or not, this is expanded to a backslash - \ - character
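
    A short illustration of these rules (the variable names and values are illustrative):

    # Outside quotes, '\t' would expand to a TAB character, so quote Windows paths
    var exportpath = "C:\temp\output.csv"

    # \" yields a literal double quote and \\ yields a literal backslash
    var message = "A \"quoted\" word and a \\ character"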

    Comments

    Comments in a Transcript task start with a # character that is either of:

    • the first character of a line in the Transcript task

    • the first character in a word

    Comments always end at the end of the line they are started on.

    # This is a comment
    import usage from Azure # The text from '#' onwards is a comment
    import usage#1 from Azure # The '#' in 'usage#1' does not start a comment

    Variables

    Transcript statements may contain variables. Variables have a name and a value. When a variable name is encountered during the execution of the task, the name is replaced with the value of the variable with that name.

    To separate them from normal statement words, variable names are always preceded with ${ and followed by }. Therefore the variable with the name dataDate is referenced as ${dataDate} in the transcript task. As well as user-defined variables (created using the var statement), the following default variables are supported by Exivity:

    Variable

    Meaning

    ${dataDate}

    The datadate currently in effect, in yyyyMMdd format

    ${dataDay}

    The day value in the dataDate variable, expressed as a 2 digit number padded with a leading zero if necessary

    ${dataMonth}

    The month value in the dataDate variable, expressed as a 2 digit number padded with a leading zero if necessary

    ${dataMonthDays}

    The number of days in the month in the dataMonth variable

    ${dataDateStart}

    00:00:00 on the day in the dataDate variable, expressed as a UNIX timestamp

    ${dataDateEnd}

    23:59:59 on the day in the dataDate variable, expressed as a UNIX timestamp

    ${dataYear}

    The year value in the dataDate variable, expressed as a 4 digit number

    ${homeDir}

    The base working directory currently in effect

    ${exportDir}

    This is the equivalent of ${baseDir}\exported

    Variable names ...

    • may be used multiple times in a single statement

    • are case sensitive - ${dataDate} is different to ${datadate}

    • may not be nested

    • may be embedded within surrounding text - xxx${dataDate}yyy

    • may be used within quotes: import "${baseDir}\to_import\AzureJuly${dataDate}.ccr" source AzureJuly

    • may appear as words of their own in a transcript statement - create column Date value ${dataDate}
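
    As an illustration, the following sketch combines several of these variables (the filename and source tag are hypothetical):

    # Import the usage file for the current data date
    import "${baseDir}\to_import\usage_${dataYear}${dataMonth}${dataDay}.csv" source MyCloud

    # Record the data date in a new column
    create column Date value ${dataDate}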

    Regular Expression variables

    A regular expression variable is a special type of variable used to match the name of a column in a DSET. It is enclosed by ${/ and /} and the text within this enclosure can take either of the following two forms:

    1. ${/expression/}

      • The regular expression described by expression will be applied to the default DSET

    2. ${/dset.id/expression/}

      • If the text preceding the central / character is a valid DSET ID then the expression after that / will be applied to the column names in that DSET

      • If the text preceding the / character is not a valid DSET ID then the entire text of the variable between the ${/ and /} enclosure is treated as a regular expression and will be applied to the default DSET

    Once the DSET ID and the expression have been established by the above, the expression is tested against each column name in the DSET and the first matching column name is returned. If no match is found, then an error is logged and the transcript task will fail.

    The regular expression may contain a subgroup, which is enclosed within parentheses - ( and ). If no subgroup is present, and a match is made, then the entire column name will be returned. If a subgroup is present and a match is made, then only the characters matching the portion of the expression within the parentheses are returned. For example:

    # Rename a column with 'Operations' in its name such that its
    # new name is whatever came before 'Operations' in the original name
    var prefix = ${/(.*)Operations/}
    rename column ${/.*Operations/} to ${prefix}

    The expression does not have to match the entire column name. Assuming no subgroup is specified, as long as a match is made then the variable will be expanded to the whole column name.

    Regular expression variables are powerful tools when combined with the rename statement, as they can be used to transform an uncertain column name into a known one.

    Examples:

    # Rename a column containing 'Transfer' or 'transfer' in
    # its name, such that it is called 'Transfer':
    rename column ${/.*[Tt]ransfer/} to Transfer

    # As above, but specifically for the 'Azure.usage' DSET
    rename column ${/Azure.usage/.*[Tt]ransfer/} to Transfer

    # Rename a column with 'Operations' in its name such that its
    # new name is whatever came before 'Operations' in the original name
    var prefix = ${/(.*)Operations/}
    rename column ${/.*Operations/} to ${prefix}

    Importing Data

    A Transcript task cannot manipulate data on disk directly, so it is necessary to import one or more Datasets in CSV format at runtime in order to process the data within them. When a Dataset is imported the following sequence of actions takes place:

    1. The Dataset (in CSV format) is read from disk

    2. A number of checks are done on the data to ensure it meets the requirements to qualify as a Dataset

    3. The data is converted into an internal format called a DSET

    4. The DSET is assigned two tags (source and alias) which when combined together form a unique ID to identify the DSET (see Core concepts for more information)

    5. An index is constructed, which facilitates high speed manipulation of the data in the DSET

    6. The DSET is added to the list of DSETs available for use by subsequent statements in the Transcript task

    Once these actions have been completed, a DSET can be identified through the unique combination of source.alias. This permits Transcript statements to specify which DSET to operate on.

    In addition, a default DSET can be specified, which will be used if no alternative DSET is specified. Full details of these mechanisms are detailed in the reference guide, specifically in the import and default articles.
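
    As a minimal sketch (the filename, source and alias names are illustrative; refer to the import and default articles for the exact syntax):

    # Import a Dataset; the resulting DSET is identified as MyCloud.usage
    import "system/extracted/MyCloud/${dataDate}.csv" source MyCloud alias usage

    # Make it the default DSET for subsequent statements
    default dset MyCloud.usage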

    Exporting Data

    Data can be exported in one of two ways during the execution of a Transcript task:

    Export on demand

    Many Transcript statements change the data in the DSET in some way. Columns may be created, renamed or deleted and rows may be added and removed for example.

    At any point in the Transcript process the current state of a DSET can be rendered to disk as an output CSV file. This is accomplished via use of the export statement. This permits snapshots of a DSET to be created for debugging or audit purposes, as well as the creation of partially processed CSV files for import into a later Transcript process.

    Finishing

    The finish statement creates a Reporting Database File (RDF) containing the data in a DSET. This RDF can then be used by the reporting engine.
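
    For example, a task might snapshot a DSET for auditing and then produce the RDF (a sketch; the DSET ID and filename are illustrative, and the full statement forms are described in the export and finish articles):

    # Write the current state of the DSET to a CSV file for audit purposes
    export MyCloud.usage as "audit/usage_${dataDate}.csv"

    # Create the Reporting Database File used by the reporting engine
    finish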

    Reporting Database Files

    On-premises

    Exivity can be installed in your on-premises data center using the provided installer. You can automatically deploy it using the silent installation command-line options or execute it as an interactive installer.

    To install Exivity, you'll need the following:

    1. A system that complies with the Exivity minimal system requirements

    2. The Exivity software installation executable

    3. A valid Exivity license key

    If you need help meeting one or more of the above requirements, please get in contact with our support department.

    Interactive installation

    To install Exivity interactively, execute the provided setup executable, then follow the instructions for your intended architecture (single-node or multi-node deployment).

    Silent Installation

    Silent installation is only recommended for experienced users requiring automated deployments

    To execute a silent installation the following command line parameters are supported:

    • /EXIVITY_PROGRAM_PATH: Path to Exivity program files (default: C:\Exivity\program)

    • /EXIVITY_HOME_PATH: Path to Exivity home files (default: C:\Exivity\home)

    • /ADMIN_USER: Exivity admin user (default: admin)

    • /ADMIN_PASSWORD: Exivity admin password (default: exivity)

    • /PGUSER: PostgreSQL user (default: postgres)

    • /PGPASSWORD: PostgreSQL password (default: randomized)

    • /PGHOST: Hostname for PostgreSQL server (default: localhost)

    • /PGPORT: Port of PostgreSQL server (default: 5432)

    • /PGSSLMODE: Disable or require PostgreSQL SSL communication (default: disable)

    • /PGDATABASE: Exivity database on PostgreSQL server (default: exdb)

    • /PGDATA: Location of PostgreSQL data files, only used when installing a local database instance (default: <exivity_home>\system\pgdata)

    • /PSQL_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /PROXIMITYHOST: Remote API hostname, required when installing the Web component only (default: localhost)

    • /PROXIMITYPORT: Remote API port, required when installing the Web component only (default: 8002)

    • /API_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /SCHEDULER_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /BACKEND_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /JOBMAN_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /MQ_INSTALLED: 1 (install locally) or 0 (do not install) (default: 1)

    • /MQHOST: RabbitMQ hostname (default: localhost)

    • /MQPORT: RabbitMQ port number (default: 5672)

    • /MQSSL: MQ SSL/TLS enabled (1) or disabled (0) (default: 0)

    • /MQUSER: RabbitMQ user account (default: guest)

    • /MQPASSWORD: RabbitMQ password (default: guest)

    • /MQVHOST: RabbitMQ virtual host (default: /)
    Example to install using all defaults:

    <setup>.exe /S

    Example to install using the custom path and Exivity admin user:

    <setup>.exe /S /EXIVITY_PROGRAM_PATH="C:\Program Files\Exivity\program" /EXIVITY_HOME_PATH=D:\Exivity\home /ADMIN_USER=master /ADMIN_PASSWORD=P@ssword

    Updating minor versions

    Updating your installation of Exivity to a more recent minor and/or bugfix release (i.e. 3.x.x --> 3.x.x) is straightforward and may be done by installing the new version over the top of the old.

    Manually update

    Execute the setup executable. It will detect the installed version of Exivity, and will automatically upgrade when you click Next

    Silent update

    When executing <setup>.exe /S, your existing installation will be automatically upgraded.

    Upgrading major versions

    When upgrading to a more recent major version (i.e. 2.10.2 --> 3.x.x), first consult the release notes to verify any breaking changes that may apply to your installation.

    Installing a valid SSL certificate

    Exivity comes as standard with an untrusted self-signed SSL certificate. It is therefore highly recommended to replace the default certificate with an official one, signed by your Certificate Authority. To install a signed certificate, follow this procedure:

    • Download the 32-bit version of openssl.exe from https://slproweb.com/products/Win32OpenSSL.html, and install this tool on the Exivity server

    • Use the openssl.exe executable to generate a valid key file on the Exivity server by executing the following command:

    openssl.exe genrsa -out exivity.key 4096

    • Run the following command to create a certificate signing request file:

    openssl.exe req -new -key exivity.key -out exivity.csr -addext "subjectAltName = DNS:example.com"

    Replace example.com in the command above with the FQDN of the Exivity server.

    • You will be asked to enter general information like company name, city, etc. It is important to include the FQDN of the Exivity server when asked for Common Name (e.g. server FQDN or YOUR name) []:

    NOTE: when asked, it is required to not provide a password (leave this field empty and press return), otherwise the Exivity application will not be able to use your certificate.

    • The generated CSR file should be sent to your Certificate Authority. After processing by your CA, you should receive back a .crt file. Rename this file to webcertificate.crt, rename your exivity.key to webcertificate.key, and copy both files to the directory %EXIVITY_PROGRAM_PATH%\server\nginx\conf. This should overwrite the existing .key and .crt files.

    • Restart the Exivity Web Service Windows service to activate your signed certificate.

    Configuring a separate web server portal

    In some environments it may be desirable to separate the webserver from the backend components. This can be achieved by installing two separate Exivity instances. One instance could be placed in a DMZ and the second instance would then typically be deployed within a local network as shown in the following diagram:

    To achieve this, first install Exivity on the backend node using the standard installation procedure described above. Afterwards, install the Exivity software on the system that should become the User Portal and only install the Web component. When asked, specify the API Backend hostname and port to finalize the installation.

    Make sure to replace HOSTNAME_BACKEND_PORTAL with the actual hostname or IP address of the system that serves as your Exivity Backend Portal.

    When using SAML2 as an authentication mechanism for Single Sign On, and users also connect to a User Portal, pay special attention to the X-Forwarded-Host and X-Forwarded-Port headers in the Nginx webproxy.conf. These are required when the User Portal is served on a different port (i.e. 443) compared to the backend portal API (i.e. 8002). When this is the case, the forwarded port needs to match the port number of the user portal.

    After completing your installation, you should now be able to access your Exivity User Portal.

    Using an Internet proxy when extracting data

    In cases where the Exivity instance requires internet connectivity (i.e. to obtain Azure or AWS consumption data) and your network infrastructure requires the use of a proxy server, it is necessary to configure a system environment variable.

    Right click on This PC in an Explorer Window and click on Properties:

    Then go to Advanced System Settings, then click the Environment Variables button:

    Now add a new System Variable with the name ALL_PROXY and fill in the address of your proxy server as the value for this variable:

    In case you do not want to use the proxy for certain addresses or domains, it is also possible to add an additional variable NO_PROXY:

    If a name in the NO_PROXY list has a leading period, it is treated as a domain match against the provided host name. This way ".example.com" will switch off proxy use for both "www.example.com" and "foo.example.com".
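
    For example (the proxy address and domains are illustrative):

    ALL_PROXY=http://proxy.example.com:8080
    NO_PROXY=localhost,127.0.0.1,.example.com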

    After confirming the change, make sure to restart both the Exivity Windows Services.

    Increase limit of filesize upload in the API

    By default, the Exivity API has a limit of 2048kb for file uploads. Should you require to increase it, please modify the php.ini file located at %EXIVITY_PROGRAM_PATH%\server\php\php.ini

    Adjust the variables post_max_size and upload_max_filesize to your desired value.
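
    For example, to allow uploads of up to 20 megabytes (the value is illustrative):

    post_max_size = 20M
    upload_max_filesize = 20M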

    Enable PDF report exports

    In order to generate PDF documents through the Exivity API, the Chrome browser needs to be installed and the directory where chrome.exe can be found should be added to the system's Path environment variable (see the verification example below).

    Depending on your installation method, chrome.exe should be installed in one of these directories:

    • C:\Program Files\Google\Chrome\Application

    • C:\Users\{username}\AppData\Local\Google\Chrome\Application
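
    After updating the Path environment variable, you can verify from a new command prompt that chrome.exe is found (a quick check, assuming a default Chrome installation):

    where chrome.exe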

    To prevent server overload, generating PDF reports through the API is limited to 1 minute. Very large reports (e.g. consolidated reports with instance details) may take more time and result in an API error. If that's the case, try generating reports for single accounts or without instance details.

    Restart the Exivity API service after making these changes.

    var

    Overview

    The var statement is used to create or update a variable which can subsequently be referenced by name in the USE script.


    Syntax

    [public] var varname [ = value]

    [public] var varname operator number

    [public] encrypt var varname = value

    For details on encrypted variables please refer to the encrypt article.

    Details

    Variables are created in one of two ways:

    1. Manually via the var command

    2. Automatically, as a consequence of other statements in the script

    If the word public precedes a variable declaration then the variable will be shown in, and its value can be updated from, the Exivity GUI. Only variables prefixed with the word public appear in the GUI (all others are only visible in the script itself). To make an automatic variable public, re-declare it with a value of itself as shown below:

    # Convert the automatic NEWLINE variable to be public
    public var NEWLINE = ${NEWLINE}

    Manually defined variables

    A variable is a named value. Once defined, the name can be used in place of the value for the rest of the script. Amongst other things this permits the configuration of various parameters at the top of a script, making configuration changes easier.

    The = value portion of the statement is optional, but if used there must be white space on each side of the = character. To use spaces in a variable value it should be quoted with double quotes.

    Once a variable has been defined it can be referenced by prefixing its name with ${ and post-fixing it with a }. For example, a variable called outputFile can be referenced using ${outputFile}. If no value is specified, then the variable will be empty, eg:

    var empty_var
    print Variable value: "${empty_var}"

    will result in the output:

    Variable value:

    Variable names are case sensitive, therefore ${variableName} and ${VariableName} are different variables.

    If there is already a variable called name then the var statement will update the value.

    There is no limit to the number of variables that can be created, but any given variable may not have a value longer than 8095 characters.

    Arithmetic

    Variables that contain a numeric value can have arithmetic operations performed on them in one of two ways.

    Method 1

    The first, and recommended way is to use an expression, as demonstrated in the example code below:

    var cpu_total = 5
    var cpu_total = (${cpu_total} + 2)   # Add 2 to cpu_total

    var ram = 8
    var ram_uplift = 4
    var ram = (${ram} + ${ram_uplift})  # Add ram_uplift to ram

    When using expressions in this manner after a var statement it is necessary to enclose the expression in parentheses as shown above but both integer and floating point arithmetic can be performed.

    Method 2

    If working with integer arithmetic then one of the operators += , -= , *= , /= or %= can be used which will perform addition, subtraction, multiplication, (integer) division or modulo operations respectively.

    For example the statement var x += 10 will add 10 to the value of x. Note that both the value in the variable and the value following the operator must be integers.

    When performing arithmetic operations on a variable using this second method, any leading zeros in the value of that variable will be respected:

    var x = 000
    var x += 5    # Result is 005
    var x += 10   # Result is 015
    var x += 100  # Result is 115
    var x += 1000 # Result is 1115

    Currently, only integer arithmetic is supported by the += , -= , *= , /= and %= operators.

    Automatic variables

    Automatic variables are referenced in exactly the same way as manually created ones; the only difference is in the manner of creation.

    The following variables are automatically created during the execution of a USE script:

    Variable

    Details

    ${ARGC}

    The number of parameters passed to the script

    ${ARG_N}

    For each parameter passed to the script a variable called ${ARG_N}, where N is a number greater than or equal to 1, will be created whose value is the argument value associated with that parameter

    ${DAY}

    The day of the current local date, padded to 2 digits if necessary

    ${DAY_NAME}

    The full English name of the current day of the week

    ${DAY_UTC}

    The day of the current date in UTC, padded to 2 digits if necessary

    ${DAY_NAME_UTC}

    The full English name of the current day of the week in UTC

    ${GET_TIME}

    The current local time in 'friendly' format, eg Tue Jan 16 14:04:32 2018

    ${loop_label.COUNT}

    A foreach loop creates this variable (where loop_label is the name of the loop). The value of the variable is updated every time the loop executes, with a value of 1 on the first loop. If no loops are performed, then the variable will have a value of 0

    ${loop_label.NAME} ${loop_label.VALUE}

    When iterating over the children of a JSON object (not an array) using foreach, these variables are updated with the name and value respectively of the current child every time the loop is executed (either may be blank if the child has no name or value respectively)

    ${loop_label.TYPE}

    When iterating over the children of a JSON object (not an array) using foreach, this variable is updated to reflect the type of the current child every time the loop is executed. The type will be one of boolean, number, string, array, object or null.

    ${HOUR}

    The hour of the current local time, padded to 2 digits if necessary

    ${HOUR_UTC}

    The hour of the current time in UTC, padded to 2 digits if necessary

    ${HTTP_STATUS_CODE}

    The HTTP status code returned by the server in response to the most recent http request executed. In case of transport-level failure contains value -1, HTTP_STATUS_TEXT variable contains error message

    ${HTTP_STATUS_TEXT}

    In case of transport-level failures, this variable will contain an error message intended to assist in identifying the issue

    ${MINUTE}

    The minute of the current local time, padded to 2 digits if necessary

    ${MINUTE_UTC}

    The minute of the current time in UTC, padded to 2 digits if necessary

    ${MONTH}

    The month of the current local date, padded to 2 digits if necessary

    ${MONTH_NAME}

    The full English name of the current month of the year

    ${MONTH_UTC}

    The month of the current date in UTC, padded to 2 digits if necessary

    ${MONTH_NAME_UTC}

    The full English name of the current month of the year in UTC

    ${NEWLINE}

    A newline (0x0A) character. Example use: var twolines = "This string${NEWLINE}contains two lines of text"

    ${SECOND}

    The second of the current local time, padded to 2 digits if necessary

    ${SECOND_UTC}

    The second of the current time in UTC, padded to 2 digits if necessary

    ${MSEC}

    The milliseconds of the current local time, padded to 3 digits if necessary

    ${MSEC_UTC}

    The milliseconds of the current time in UTC, padded to 3 digits if necessary

    ${SCRIPTNAME}

    The filename of the script being executed

    ${OSI_TIME_UTC}

    The current UTC time in YYYYMMDD'T'HHMMSS'Z' format, eg: 20180116T140432Z

    ${YEAR}

    The year of the current local date as a 4 digit number

    ${YEAR_UTC}

    The year of the current date in UTC as a 4 digit number

    ${UNIX_UTC}

    Current UNIX time (seconds since 1 January 1970 00:00:00 UTC)

    To derive the short versions of the day and month names, use a match statement to extract the first 3 characters as follows:

    match day "(...)" ${DAY_NAME_UTC}

    var short_day = ${day.RESULT}

    The .LENGTH suffix

    On occasion it may be useful to determine the length (in characters) of the value of a variable. This can be done by appending the suffix .LENGTH to the variable name when referencing it. For example, if a variable called result has a value of success then ${result.LENGTH} will be replaced with 7 (this being the number of characters in the word 'success').

    A variable with no value will have a length of 0, therefore using the .LENGTH suffix can also be used to check for empty variables as follows:

    var myvar
    if (${myvar.LENGTH} == 0) {
        print The variable 'myvar' is empty
    } else {
        print The variable 'myvar' has a value of ${myvar}
    }

    myvar.LENGTH is not a variable in its own right. The .LENGTH suffix merely modifies the manner in which the myvar variable is used.

    Examples

    Basic variable creation and use

    # Declare a variable
    var name = value

    # If the value contains whitespace then it must be quoted or escaped
    var sentence = "This sentence is contained in a variable"

    # Pathnames should be quoted to avoid any incidences of '\t' being expanded to tabs
    var exportfile = "C:\exivity\collected\Azure\customers.csv"

    Creating encrypted variables

    # ---- Start Config ----
    encrypt var username = admin
    encrypt var password = topsecret
    var server = "http://localhost"
    var port = 8080
    var api_method = getdetails
    # ---- End Config ----

    set http_authtype basic
    set http_username ${username}
    set http_password ${password}

    buffer {response} = http GET ${server}:${port}/rest/v2/${api_method}

    Amazon AWS CUR (Athena)

    This tutorial is for the AWS CUR Athena Extractor; if you want to use the standard AWS CUR Extractor, please refer to that tutorial instead.

    Prerequisites

    This tutorial assumes that you have CUR (Cost and Usage Report) set up in your AWS environment. If this is not the case, please follow the steps in Turning on the AWS Cost and Usage Report before proceeding.


    Please note that in order to deploy this solution the S3 bucket to which CUR reports are written must reside in one of the following AWS regions:

    • Northern Virginia

    • Ohio

    • Oregon

    • Mumbai

    • Seoul

    • Singapore

    • Sydney

    • Tokyo

    • Frankfurt

    • Ireland

    • London

    At this point in time, only the regions listed above have all the necessary services deployed.

    Introduction

    This tutorial shows how to build a serverless solution for querying the AWS CUR Report using Exivity. This solution makes use of AWS serverless services such as Lambda and Athena, as well as other commonly used services such as S3, CloudFormation, and API Gateway. The following topics will be covered:

    1. Solution Overview

    2. Launching the CloudFormation Template

    3. Creating the Lambda function and API Gateway

    4. Configuring an Extractor

    5. Configuring a Transformer

    6. Creating your Report

    Solution Overview

    The Billing and Cost Management service writes your AWS Cost and Usage report to the S3 bucket that you designated when setting up the service. These files can be written on either an hourly or daily basis.

    The CloudFormation template that accompanies this tutorial builds a Serverless environment containing a Lambda function that reads a CUR file, processes it and writes the resulting report to an output S3 bucket. The output data object has a prefix structure of "year=current-year" and "month=current-month". For example, if a file is written 13/09/2018 then the Lambda function outputs an object called "bucket-name/year=2018/month=09/file_name".

    The next step in the template is to translate this processed report into Athena so that it can be queried. The following diagram shows the steps involved in the process:

    Afterwards, we will create a Lambda function to query the Athena database, returning a URL with the results of the query in CSV format. We will also create an API EndPoint with the AWS API Gateway service, which is used by Exivity to retrieve the data.

    Launching the CloudFormation template

    To deploy this solution successfully the following information is required:

    1. The name of your AWS Cost and Usage report.

    2. The name of the S3 bucket in which the reports are currently stored.

    Firstly, launch the CloudFormation template that builds all the serverless components that facilitate running queries against your billing data. When doing this, ensure that you choose the same AWS Region within which your CUR S3 bucket is located.

    Click on the launch link for the region associated with the S3 bucket containing your CUR files (this tutorial uses Ireland (eu-west-1) for illustrative purposes, but all the supported regions work in the same way):

    • Ireland

    • Ohio

    • Oregon

    • Northern Virginia

    Now follow the instructions in the CloudFormation wizard, using the following options, and then choose Create.

    • For CostnUsageReport, type the name of your AWS Cost and Usage report.

    • For S3BucketName, type a unique name to be given to a new S3 bucket which will contain the processed reports.

    • For s3CURBucket, type the name of the bucket into which your current reports are written.

    While your stack is building, a page similar to the following is displayed.

    When the Status column shows CREATE_COMPLETE, you have successfully created four new Lambda functions and an S3 bucket into which your transformed bills will be stored.

    Once you have successfully built your CloudFormation stack, you can create a Lambda trigger that points to the new S3 bucket. This means that every time a new file is added to, or an existing file is modified in, the S3 bucket, the action will trigger the Lambda function.

    Create this trigger using the following steps:

    • Open the Lambda console.

    • Choose Functions, and select the aws-cost-n-usage-main-lambda-fn-A Lambda function (note: do not click the check box beside it).

    • There should be no existing triggers. Choose Trigger, Add trigger.

    • For Trigger type (the box with dotted lines), choose S3.

    • Select the S3 bucket within which your CUR reports are stored.

    • For Event type, choose Object Created (All) and check Enable trigger.

    • Click Submit.

    The database and table are not created until your function runs for the first time. Once this has been done, Athena will contain the database and table.

    Athena stores query results in S3 automatically. Each query that you run has a results file in CSV format and a metadata file (*.csv.metadata) that includes header information such as column type, etc.

    Testing (Optional)

    Once you have successfully added the trigger to the S3 bucket in which the Billing and Cost Management services writes your CUR reports, test the configuration using the following steps.

    • In the S3 path to which AWS writes your AWS Cost and Usage Billing reports, open the folder with your billing reports. There will be either a set of folders or a single folder with a date range naming format.

    • Open the folder with the date range for the current month. In this folder, there is a metadata file that can be found at the bottom of the folder. It has a JSON extension and holds the S3 key for the latest report.

    • Download the metadata file. Ensure that the name of the file on your machine is the same as the version stored on your S3 bucket.

    • Upload the metadata file to the same S3 path from which you downloaded it. This triggers the Lambda function aws-cost-n-usage-main-lambda-fn-A.

    • In the S3 bucket that you created to hold your processed files, choose the "year=" folder and then the "month=" folder that corresponds to the current month. You should see the transformed file there, with a time stamp indicating that it was just written.

    Creating the Lambda function and API Gateway

    To automate this process a CloudFormation template is provided. This template creates an IAM role and policy so that our API can invoke Lambda functions. It then creates a Lambda function capable of querying the previously created Athena serverless DB and saving the output to an S3 bucket in .csv format (this output will later be retrieved by Exivity). Finally, it deploys an API Gateway, allowing us to create an endpoint for our Lambda function; this is the endpoint that the Exivity Extractor will consume. Make sure to launch this CloudFormation template in the same region in which you deployed the previous one.

    The metamodel of the Implementation

    Let's start by downloading the CloudFormation template, ExivityCURSolutionFinal.json or ExivityCURSolutionFinal.yaml (you only need to choose one of the formats, both are supported by AWS):

    Then follow the next steps:

    • Go to the CloudFormation console.

    • Choose Create Stack.

    • Choose Upload a template to Amazon S3.

    • Select from your computer the template that you have downloaded.

    • Follow the CloudFormation wizard - Add a Name to the Stack and select I acknowledge that AWS CloudFormation might create IAM resources with custom names in the last step.

    • Once the stack is created you should see a CREATE_COMPLETE message.

    • Click on Output to take a note of your endpoint (you will need to input this in the Exivity extractor).

    Next, we will associate an API Gateway trigger to our Lambda function:

    • Go to the Lambda console.

    • Choose the QueryAthena2 function.

    • Under Add Triggers select API gateway. You should see an image like the following:

    • Click on API Gateway figure to configure it.

    • On API select QueryAthena2.

    • On Deployment Stage select v1.

    • On Security select Open.

    • Choose Add.

    • Choose Save.

    You should see a screen like this:

    Finally, we will deploy the API Gateway:

    • Go to the API Gateway console.

    • Choose QueryAthena2.

    • In the Resources section, click on the ANY method.

    • In Actions, choose Delete Method.

    • Click on Delete.

    • In the Resources section, choose Actions.

    • Click on Deploy API

    • In Deployment Stage select V1.

    • Add a Deployment Description.

    • Choose Deploy.

    Securing the API Gateway

    Initially, the created API endpoint is public and as such is vulnerable to the possibility of misuse or denial-of-service attacks. To prevent this, associate an API Key with the endpoint as per the following steps:

    • Inside the API Gateway dashboard, select the QueryAthena2 API

    • In Resources, select Method Request

    • In Settings, change API Key Required to True

    • Click on Actions and choose Deploy API to effect the change

    • In Deployment Stage, select v1 and click on Deploy

    • Go to the API Keys section

    • Click on Actions and select Create API Key

    • In Name write ExivityAPIKey

    • Click on Save

    • Copy the API Key, as this will be required by the Exivity configuration

    • Go to Usage Plan

    • Click on Create.

    • In Name write ExivityUsagePlan

    • In the Throttling Section, change Rate to 100 and Burst to 10

    • In the Quota Section, change it to 50000 requests per Month

    • Click on Next

    • Click on Add API Stage

    • In API, select QueryAthena2 and in Stage select v1

    • Confirm the changes and click on Next

    • Click on Add API Key to Usage Plan

    • Select ExivityAPIKey, confirm the changes

    • Click on Done

    The API Key is now required to access the API endpoint thus adding a layer of security to mitigate unauthorized access attempts.
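
    As an illustration, any client calling the endpoint must now send this key with every request. A minimal USE sketch (the ${Api_endpoint} and ${API_Key} names correspond to the Extractor variables described in the next section, but are otherwise illustrative):

    # Send the API Gateway key with the request
    set http_header "x-api-key: ${API_Key}"

    # Query the endpoint and store the response in a buffer
    buffer {response} = http GET ${Api_endpoint}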

    Configure Extractor

    To create the Extractor in Exivity, browse to Data Sources > Extractors and click the Create Extractor button. This will try to connect to the Exivity Github account to obtain a list of available templates. For AWS, please click AWS_CUR_Extractor from the list. Provide a name for the Extractor in the name field, and click the Create button.

    Once you have created the Extractor, go to the first tab: Variables

    • In the Bucket variable specify the name of the S3 bucket where the .csv with the output of the query will be saved (The S3BucketName previously specified when launching the CloudFormation template).

    • In the Api endpoint variable specify the API endpoint previously created plus the route /QueryAthena.

    • In the DBname variable specify the name of your DB, you can find it in the Athena main Dashboard.

    • In the Tablename variable specify the name of the table inside your DB, you can find it in the Athena main Dashboard.

    • In the API_Key variable specify the API Key that we have created in the Securing API Gateway Section.

    Once you have filled in all details, go to the Run tab to execute the Extractor for a single day:

    The Extractor requires two parameters in yyyyMMdd format:

    • from_date is the date for which you wish to collect consumption data.

    • to_date should be the date immediately following from_date.

    These should be specified as shown in the screenshot above, separated with a space.
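
    For example, to collect consumption data for 29 January 2020 (the dates are illustrative):

    20200129 20200130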

    When you click the Run Now button, you should get a successful result.

    Configure Transformer

    Once you have successfully run your AWS CUR Extractor, you should be able to create a Transformer template via Data Sources > Transformers and click the Create Transformer button. Select the AWS CUR Transformer and run it for a single day as a test. Make sure that it is the same day as for which you extracted consumption data in the previous step.

    Create Report

    Once you have run both your Extractor and Transformer successfully, create a Report Definition via the menu Reports > Definitions:

    Select the column(s) by which you would like to break down the costs. Once you have created the report, click the Prepare Report button, after first making sure you have selected a valid date range in the date selector shown when preparing the report.

    Once this is done you should be able to run any of Accounts, Instances, Services or Invoices report types located under the Report menu for the date range you prepared the report for.

    Related tutorial: Turning on the AWS Cost and Usage Report

    Attachments: ExivityCURSolutionFinal.json, ExivityCURSolutionFinal.yaml

    Parslets

  • Static parslets refer to a fixed location in XML or JSON data

  • Dynamic parslets are used in conjunction with foreach loops to retrieve values when iterating over arrays in XML or JSON data

  • Parslets can be used to query JSON or XML data. Although JSON is used for illustrative purposes, some additional notes specific to XML can be found further down in this article.

    A quick JSON primer

    Consider the example JSON shown below:

    The object containing all the data (known as the root node) contains the following children:

    Child      Type
    title      string
    heading    object
    items      array

    Objects and arrays can be nested to any depth in JSON. The children of nested objects and arrays are not considered as children of the object containing those objects and arrays, i.e. the children of the heading object are not considered as children of the root object.

    Every individual 'thing' in JSON data, regardless of its type, is termed a node.

    Although different systems return JSON in different forms, the JSON standard dictates that the same basic principles apply universally to all of them. Thus, any valid JSON may contain arrays, objects, strings, boolean values (true or false), numbers and null children.

    It is often the case that the number of elements in arrays is not known in advance, therefore a means of iterating over all the elements in an array is required to extract arbitrary data from JSON. This principle also applies to objects, in that an object may contain any number of children of any valid type. Valid types are:

    Type       Description
    object     A node encompassing zero or more child nodes (termed children) of any type
    array      A list of children, which may be of any type (but all children in any given array must be of the same type)
    string     Textual data
    number     Numeric data, may be integer or floating point
    boolean    A true or false value
    null       A null value
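
    As an analogy only (USE scripts do not use Python), the following sketch shows how these node types surface when a similar document, invented here purely for illustration, is parsed with a general-purpose JSON parser:

    import json

    node = json.loads('{"title": "Example", "finalised": true, "count": 3, "tags": ["a", "b"], "owner": null}')

    print(type(node))              # <class 'dict'>  -> object
    print(type(node["tags"]))      # <class 'list'>  -> array
    print(type(node["title"]))     # <class 'str'>   -> string
    print(type(node["count"]))     # <class 'int'>   -> number
    print(type(node["finalised"])) # <class 'bool'>  -> boolean
    print(node["owner"])           # None            -> null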

    Some systems return JSON in a fixed and predictable format, whereas others may return objects and arrays of varying length and content. The documentation for any given API should indicate which fields are always going to be present and which may or may not be so.

    Parslets are the means by which USE locates and extracts fields of interest in any valid JSON data, regardless of the structure. For full details of the JSON data format, please refer to http://json.org

    Static parslets

    Static parslets act like variables in that the parslet itself is expanded such that the extracted data replaces it. Static parslets extract a single field from the data and require that the location of that field is known in advance.

    In the example JSON above, let us assume that the data is held in a named buffer called example and that the title and heading children are guaranteed to be present. Further, the heading object always has the children category and finalised. Note that for all of these guaranteed fields, the value associated with them is indeterminate.

    The values associated with these fields can be extracted using a static parslet which is specified using the following syntax:

    $JSON{buffer_name}.[node_path]

    Static parslets always specify a named buffer in curly braces immediately after the $JSON prefix

    The buffer_name is the name of the buffer containing the JSON data, which must have previously been populated using the buffer statement.

    The node_path describes the location and name of the node containing the value we wish to extract. Starting at the root node, the name of each node leading to the required value is specified in square brackets. Each set of square brackets is separated by a dot.

    The nodepaths for the fixed nodes described above are therefore as follows:

    Nodepath                  Referenced value
    .[title]                  Example JSON data
    .[heading].[category]     Documentation
    .[heading].[finalised]    true

    Putting all the above together, the parslet for locating the category in the heading is therefore:

    $JSON{example}.[heading].[category]

    When this parslet is used in a USE script, the value associated with the parslet is extracted and the parslet is replaced with this extracted value. For example:

    print $JSON{example}.[heading].[category]

    will result in the word Documentation being output by the statement, and:

    var category = $JSON{example}.[heading].[category]

    will create a variable called category with a value of Documentation.

    Currently, a parslet must be followed by whitespace in order to be correctly expanded. If you want to embed the value into a longer string, create a variable from a parslet and use that instead:

    When using JSON parslets that reference values that may contain whitespace it is sometimes necessary to enclose them in double quotes to prevent the extracted value being treated as multiple words by the script
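
    As an analogy only (not USE syntax), the nodepath .[heading].[category] behaves like a chain of key lookups on parsed JSON; the sample values below are the ones from the nodepath table above:

    import json

    example = json.loads('{"title": "Example JSON data", "heading": {"category": "Documentation", "finalised": true}}')

    category = example["heading"]["category"]   # equivalent of .[heading].[category]
    print(category)                             # Documentation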

    Anonymous JSON arrays

    It may be required to extract values from a JSON array which contains values that do not have names as shown below:

    Extraction of values that do not have names can be accomplished via the use of nested foreach loops in conjunction with an empty nodepath ([]) as follows:

    The result of executing the above against the sample data is:

    If the anonymous arrays have a known fixed length then it is also possible to simply stream the values out to the CSV without bothering to assign them to variables. Thus assuming that the elements in the metrics array always had two values, the following would also work:

    Which method is used will depend on the nature of the input data. Note that the special variable ${loopname.COUNT} (where loopname is the label of the enclosing foreach loop) is useful in many contexts for applying selective processing to each element in an array or object as it will be automatically incremented every time the loop iterates. See foreach for more information.
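
    For comparison, the same extraction can be expressed in a general-purpose language. The sketch below is an analogy only (not USE syntax); the sample structure and expected output are the ones used in the anonymous-array example, where each element of 'metrics' is an anonymous two-value array holding a related ID and a type:

    import csv, io, json

    sample = json.loads("""
    {"data": {"result": [
      {"account": {"name": "account_one"}, "metrics": [[34567, "partner"], [98765, "reseller"]]},
      {"account": {"name": "account_two"}, "metrics": [[24680, "internal"], [13579, "partner"]]}
    ]}}
    """)

    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_ALL)
    writer.writerow(["account", "related_id", "type"])

    for result in sample["data"]["result"]:
        account_name = result["account"]["name"]
        for related_id, metric_type in result["metrics"]:
            writer.writerow([account_name, related_id, metric_type])

    print(out.getvalue())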

    Dynamic parslets

    Dynamic parslets are used to extract data from locations that are not known in advance, such as when an array of unknown length is traversed in order to retrieve a value from each element.

    A dynamic parslet must be used in conjunction with a foreach loop and takes the following form:

    Note the following differences between a static parslet and a dynamic parslet:

    1. A dynamic parslet does not reference a named buffer directly, rather it references the name of a foreach loop

    2. Parentheses are used to surround the name of the foreach loop (as opposed to curly braces)

    3. The nodepath following a dynamic parslet is relative to the target of the foreach loop

    The following script fragment will render the elements in the items array (in the example JSON above) to disk as a CSV file.

    In the example above, the first foreach loop iterates over the elements in the 'items' array, and each of the dynamic parslets extracts values from the current element in that loop. The dynamic parslets use the current element, this_item, as the root for their node paths.

    If a parslet references a non-existent location in the XML or JSON data then it will resolve to the value EXIVITY_NOT_FOUND

    XML parslets

    XML parslets work in exactly the same way that JSON parslets do, apart from the following minor differences:

    1. XML parslets are prefixed $XML

    2. When extracting data from XML, the foreach statement only supports iterating over XML arrays (whereas JSON supports iterating over objects and arrays)

    3. An XML parslet may access an XML attribute

    To access an XML attribute, the node_path should end with [@attribute_name] where attribute_name is the name of the attribute to extract. For example, given the following data in a buffer called xmlbuf:

    The following script:

    will produce the following output:


    Tiered Services

    Learn about advanced charging models using Tiered Services and how to configure it

    Introduction

    Exivity offers Service Providers the possibility to apply tiered prices and volume-based rating to their services. Service Providers can set prices inversely proportional to volume: the more a customer buys, the less they pay per unit of that service. This way they can appeal to a wider base of customers and shape their own sales strategy by offering lower or higher entry-point prices.

    Consider a scenario whereby customers pay per gigabyte of disk storage. For a non-tiered service, this is quite straightforward; we create the service (monthly or daily), configure a unit rate and the resulting charge will be the number of gigabytes consumed multiplied by the unit rate.

    This is a somewhat restrictive model however and it may be preferable to automatically apply a series of discounts that kick in as the number of gigabytes consumed increases. In such cases, a tiered service provides just such capability.

    var category = $JSON{example}.[heading].[category]
    var filename = JSON_${category}_${dataDate}
    {
      "data": {
        "result": [
          {
            "account": {
              "name": "account_one"
            },
            "metrics": [
              [
                34567,
                "partner"
              ],
              [
                98765,
                "reseller"
              ]
            ]
          },
          {
            "account": {
              "name": "account_two"
            },
            "metrics": [
              [
                24680,
                "internal"
              ],
              [
                13579,
                "partner"
              ]
            ]
          }
        ]
      }
    }
    buffer json_data = FILE system/extracted/json.json
    
    csv OUTFILE = system/extracted/result.csv
    csv add_headers OUTFILE account related_id type
    csv fix_headers OUTFILE
    
    foreach $JSON{json_data}.[data].[result] as this_result {
    
        # Extract the account name from each element in the 'result' array
        var account_name = $JSON(this_result).[account].[name]
    
        print Processing namespace: ${account_name}
    
        # Iterate over the metrics array within the result element
        foreach $JSON(this_result).[metrics] as this_metric {
    
        # As the metrics array contains anonymous arrays we need to iterate
    # further over each element. Note the use of an empty nodepath.
    
            foreach $JSON(this_metric).[] as this_sub_metric {
                if (${this_sub_metric.COUNT} == 1) {
                    # Assign the value on the first loop iteration to 'related_id'
                    var related_id = $JSON(this_sub_metric).[]
                }
                if (${this_sub_metric.COUNT} == 2) {
                    # Assign the value on the second loop iteration to 'type'
                    var type = $JSON(this_sub_metric).[]
                }
            }
    
            csv write_fields OUTFILE ${account_name} ${related_id} ${type}
        }    
    }
    csv close OUTFILE
    "account","related_id","type"
    "account_one","34567","partner"
    "account_one","98765","reseller"
    "account_two","24680","internal"
    "account_two","13579","partner"
    buffer json_data = FILE system/extracted/json.json
    
    csv OUTFILE = system/extracted/result.csv
    csv add_headers OUTFILE account related_id type
    csv fix_headers OUTFILE
    
    foreach $JSON{json_data}.[data].[result] as this_result {
    
        # Extract the account name from each element in the 'result' array
        var account_name = $JSON(this_result).[account].[name]
    
        print Processing namespace: ${account_name}
    
        # Iterate over the metrics array within the result element
        foreach $JSON(this_result).[metrics] as this_metric {
    
        # As the metrics array contains anonymous arrays we need to iterate
    # further over each element. Note the use of an empty nodepath.
    
            csv write_field OUTFILE ${account_name}
    
            foreach $JSON(this_metric).[] as this_sub_metric {
                    csv write_field OUTFILE $JSON(this_sub_metric).[]
            }        
        }    
    }
    csv close OUTFILE
    $JSON(loopName).[node_path]
    # For illustrative purposes assume that the JSON
    # is contained in a named buffer called 'myJSON'
    
    # Create an export file
    csv "items" = "system/extracted/items.csv"
    csv add_headers id name category subcategory
    csv add_headers subvalue1 subvalue2 subvalue3 subvalue4
    csv fix_headers "items"
    
    foreach $JSON{myJSON}.[items] as this_item
    {
        # Define the fields to export to match the headers
        csv write_field items $JSON(this_item).[id]
        csv write_field items $JSON(this_item).[name]
        csv write_field items $JSON(this_item).[category]
        csv write_field items $JSON(this_item).[subcategory]
    
        # For every child of the 'subvalues' array in the current item
        foreach $JSON(this_item).[subvalues] as this_subvalue
        {
            csv write_field items $JSON(this_item).[0]
            csv write_field items $JSON(this_item).[10]
            csv write_field items $JSON(this_item).[100]
            csv write_field items $JSON(this_item).[1000]
        }
    }
    csv close "items"
    <note>
    <to>Tove</to>
    <from>
        <name comment="test_attribute">Jani</name>
    </from>
    <test_array>
        <test_child>
            <name attr="test">Child 1</name>
            <age>01</age>
        </test_child>
        <test_child>
            <name attr="two">Child 2</name>
            <age>02</age>
        </test_child>
        <test_child>
            <name attr="trois">Child 3</name>
            <age>03</age>
        </test_child>
        <test_child>
            <name attr="quad">Child 4</name>
            <age>04</age>
        </test_child>
    </test_array>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
    </note>
    foreach $XML{xmlbuf}.[test_array] as this_child {
        print Child name ${this_child.COUNT} is $XML(this_child).[name] and age is $XML(this_child).[age] - attribute $XML(this_child).[name].[@attr]
    }
    Child name 1 is Child 1 and age is 01 - attribute test
    Child name 2 is Child 2 and age is 02 - attribute two
    Child name 3 is Child 3 and age is 03 - attribute trois
    Child name 4 is Child 4 and age is 04 - attribute quad
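
    For comparison, here is the same traversal expressed with Python's standard XML library (an analogy only, not USE syntax), reproducing the output above:

    import xml.etree.ElementTree as ET

    xml_text = """
    <note>
      <test_array>
        <test_child><name attr="test">Child 1</name><age>01</age></test_child>
        <test_child><name attr="two">Child 2</name><age>02</age></test_child>
        <test_child><name attr="trois">Child 3</name><age>03</age></test_child>
        <test_child><name attr="quad">Child 4</name><age>04</age></test_child>
      </test_array>
    </note>
    """

    root = ET.fromstring(xml_text)
    for count, child in enumerate(root.find("test_array"), start=1):
        name = child.find("name")
        print(f"Child name {count} is {name.text} and age is {child.find('age').text} - attribute {name.get('attr')}")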

    There are a few key points about Tiering that are worth remembering:

    • There are two models of tiering available in Exivity: Standard Tiering and Inherited Tiering.

    • Tiering is always applied at the Service level.

    • Tiering is always applied based on monthly consumption figures (even for non-monthly services).

    • The tiered rating mechanism can be configured in the rate management screen.

    Instance vs. Service quantities

    In order to get the most from this documentation it is important to understand the difference between Service-level quantities and Instance-level quantities.

    Usage data consists solely of instance-level quantities. Every record in the usage data from Amazon AWS, Microsoft Azure, or even a less account-orientated source such as VMware or Veeam Backup, will almost certainly contain a reference to a unique instance ID of some kind, whether it be a hostname, host ID, resource ID, disk ID or similar.

    Consider a hypothetical cloud vendor who offers three sizes of VM; small, medium and large all of which are charged on a monthly basis. Each has differing amounts of disk, CPU and RAM capacity but all usage records for VMs reference one of those three sizes.

    A customer of that vendor could provision a number of VMs for different purposes. Perhaps a couple of small VMs for sandbox/test servers, half a dozen medium VMs for development servers and four large VMs; two for email servers and two for database servers.

    With 2 small VMs, 6 medium VMs and 4 large VMs in service, the customer would expect to pay for them on that basis - 2 x small, 6 x medium and 4 x large - on the premise that this is how many VMs they are operating.

    From the cloud vendor's perspective, however, there are still only three services: the small, medium and large VMs. It is the monthly charge for each VM size, multiplied by the number of instances of those VMs, that determines the final bill the customer has to pay.

    Exivity fully understands the difference between instances and services. In our above example, there would be three services but the billing records may look more like this:

    Service      Instance       Rate
    Small VM     sandbox1       10.00
    Small VM     sandbox2       10.00
    Medium VM    dev_server1    15.00
    Medium VM    dev_server2    15.00
    Medium VM    dev_server3    15.00
    Medium VM    dev_server4    15.00
    Medium VM    dev_server5    15.00
    Medium VM    dev_server6    15.00
    Large VM     email1         20.00
    Large VM     email2         20.00
    Large VM     database1      20.00
    Large VM     database2      20.00

    It can be seen that regardless of the number of services offered there can be any number of instances. Services are defined in a finite service catalogue but there is no hard limit on the number of instances that can be instantiated (well, short of trying to spin more of them up than the cloud can support but that's a little academic in practice).

    Reports provided by Exivity are powered by a charge engine that processes usage data and emits charge records. Each charge record can represent either a service-level or an instance-level summary of cost.

    In the above scenario, a greatly simplified view of the charge records produced by Exivity would be as follows:

    Type        Service      Instance       Quantity    Charge
    Service     Small VM     -              2           20.00
    Instance    Small VM     sandbox1       1           10.00
    Instance    Small VM     sandbox2       1           10.00
    Service     Medium VM    -              6           90.00
    Instance    Medium VM    dev_server1    1           15.00
    Instance    Medium VM    dev_server2    1           15.00
    Instance    Medium VM    dev_server3    1           15.00
    Instance    Medium VM    dev_server4    1           15.00
    Instance    Medium VM    dev_server5    1           15.00
    Instance    Medium VM    dev_server6    1           15.00
    Service     Large VM     -              4           80.00
    Instance    Large VM     email1         1           20.00
    Instance    Large VM     email2         1           20.00
    Instance    Large VM     database1      1           20.00
    Instance    Large VM     database2      1           20.00

    Note that the service-level records are aggregations of the instance level records, both in terms of quantity and charge. The aggregation renders the 'instance' column meaningless for service-level records but should drill-down be required then the instance-level records are available for closer examination.

    Drill-down is important as it provides the ability to look inside the aggregated service-level charges in order to see the individual instances that contributed to those charges.

    Instance-level records represent the most granular view of the data possible.

    This concludes our brief diversion into the difference between Service-level and Instance-level charge records but bear in mind that tiering in Exivity is always applied at the Service level.

    Exivity also provides drill-downs for tiered services and details of how the instance-level records accurately sum to the service-level records (not only in quantity but also in charge and on a per-bucket basis) are covered in subsequent sections of this documentation.
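
    The roll-up from instance-level to service-level records can be pictured with a small sketch (illustrative Python, using the example figures from the tables above):

    from collections import defaultdict

    # (service, instance, quantity, charge) records as produced per instance
    instance_records = [
        ("Small VM", "sandbox1", 1, 10.00), ("Small VM", "sandbox2", 1, 10.00),
        ("Medium VM", "dev_server1", 1, 15.00), ("Medium VM", "dev_server2", 1, 15.00),
        ("Medium VM", "dev_server3", 1, 15.00), ("Medium VM", "dev_server4", 1, 15.00),
        ("Medium VM", "dev_server5", 1, 15.00), ("Medium VM", "dev_server6", 1, 15.00),
        ("Large VM", "email1", 1, 20.00), ("Large VM", "email2", 1, 20.00),
        ("Large VM", "database1", 1, 20.00), ("Large VM", "database2", 1, 20.00),
    ]

    totals = defaultdict(lambda: [0, 0.0])
    for service, _instance, quantity, charge in instance_records:
        totals[service][0] += quantity
        totals[service][1] += charge

    for service, (quantity, charge) in totals.items():
        print(f"Service {service}: quantity={quantity}, charge={charge:.2f}")
    # Small VM: quantity=2, charge=20.00; Medium VM: 6, 90.00; Large VM: 4, 80.00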

    Standard vs. Inherited Tiering

    Developing further the scenario previously outlined in the Introduction section, let us consider the following desired business model for pricing storage per gigabyte:

    Bucket    Quantity range    Rate
    1         0+                1.00
    2         > 100             0.80
    3         > 1000            0.60

    Exivity uses the term Bucket to distinguish between each of the quantity ranges. Buckets are always numbered, starting from 1, and the first bucket will always have a quantity range of 0+

    The first 100 gigabytes consumed are to be charged at 1.00 unit of currency each. For any quantity over 100, up to 1000 (inclusive), they are to be charged at 0.80 each and for any quantity over 1000 they are to be charged at 0.60 each.

    Exivity supports two different methods of applying this business rule each of which will result in a differing final charge. These methods are Standard and Inherited tiering.

    Standard Tiering

    Standard Tiered pricing works such that the price per unit changes once the quantity threshold of each "tier" has been reached.

    If 2000 gigabytes of storage was consumed then the resulting allocation to each bucket, and the final charge, would be as follows:

    Bucket    Quantity    Rate    Bucket charge
    1         100         1.00    100.00
    2         900         0.80    720.00
    3         1000        0.60    600.00

    The final charge after applying Standard Tiering as shown in the table above would be the sum of the bucket charges, thus 100 + 720 + 600 = 1,420.00.

    The rate tier interval is always monthly, meaning the tiered rates will be applied after calculating the monthly quantity totals for a service.
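
    The Standard Tiering arithmetic can be sketched as follows (illustrative Python; the bucket boundaries and rates are the ones from the example above):

    def standard_tiering(quantity, buckets):
        # buckets: (lower bound, rate) pairs in ascending order of lower bound
        total = 0.0
        for i, (lower, rate) in enumerate(buckets):
            upper = buckets[i + 1][0] if i + 1 < len(buckets) else float("inf")
            total += max(0, min(quantity, upper) - lower) * rate
        return total

    buckets = [(0, 1.00), (100, 0.80), (1000, 0.60)]
    print(standard_tiering(2000, buckets))   # 100*1.00 + 900*0.80 + 1000*0.60 = 1420.0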

    Inherited Tiering

    The Inherited Tiering mechanism is designed to optimize service costs by automatically moving data to the most cost-effective tier.

    This model places the entire quantity consumed into a single bucket, which is the highest-numbered bucket that would have any quantity allocated to it if Standard Tiering was applied.

    To illustrate this, we can continue with the case above: if the 3rd (and highest) bucket had some quantity allocated to it and Inherited Tiering was applied, then the result would be as follows:

    Bucket    Quantity    Rate    Bucket charge
    1         0           1.00    0.00
    2         0           0.80    0.00
    3         2000        0.60    1,200.00

    In other words, as each bucket is filled, its contents are carried over to (inherited by) the next bucket. The final charge after applying Inherited Tiering is simply the charge associated with the bucket into which all the quantity was allocated, thus 1,200.00 in the example above.
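
    Inherited Tiering, by contrast, charges the entire monthly quantity at the rate of the highest bucket it reaches, as this sketch (illustrative Python, same example buckets) shows:

    def inherited_tiering(quantity, buckets):
        # buckets: (lower bound, rate) pairs in ascending order of lower bound
        applicable_rate = buckets[0][1]
        for lower, rate in buckets:
            if quantity > lower:
                applicable_rate = rate
        return quantity * applicable_rate

    buckets = [(0, 1.00), (100, 0.80), (1000, 0.60)]
    print(inherited_tiering(2000, buckets))   # 2000 * 0.60 = 1200.0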

    How to configure tiered pricing

    Tiering is currently a Beta feature. Enable Beta features in order to use it.

    First, you need to set a service to be Tiered.

    1. Navigate to the Services > Overview menu and select a service from the list on the right side of the screen.

    2. Click the Unlock button to enable edit mode.

    3. Scroll down to the Billing section and select the preferred Tiering: Standard or Inherited.

    Updating a service to a Tiered charge type

    4. Click Update to apply your changes.

    Next, you need to create your tiered model which will apply to the service you just updated.

    5. Navigate to the Services > Rates menu and select the service from the list on the right side of the screen.

    6. Click on the +New button at the top next to Revisions, or the one at the bottom of the Revision box.

    7. Choose the start date in the Effective date field.

    8. Choose the preferred Aggregation level. Consult the documentation for Aggregation Levels and Account Hierarchy to get a better understanding.

    9. Then you may add your preferred intervals/buckets for the selected service, by clicking on the green + button. For example:

    Example of buckets for a Tiered service

    10. Finally, click Create to save your changes.


    Releases

    Install / upgrade

    A copy of the Exivity installer can be obtained on our website. Installing or upgrading to the latest release is a straightforward process; refer to the on-premises installation article for more information.

    Migration to PostgreSQL within the Exivity architecture: as of version 3.0.0, Exivity stores all global configuration data and report results in PostgreSQL as opposed to SQLite. This is a mandatory and breaking change. Please refer to the documentation for more information.

    As of version 3.2.1, quantity adjustments are applied first, before any charge-related adjustments. This may potentially affect your billing records. If you were depending on mixing charge and quantity adjustments, please reach out to us for guidance.

    As of version 3.5.0, Exivity uses the RabbitMQ message broker software for inter-component communication. By default, RabbitMQ will be installed automatically on the Exivity host. Alternatively, you may leverage a remote/dedicated RabbitMQ instance. Make sure to verify the system requirements in the installation guide before upgrading.

    Release feed

    You may register with the following to obtain information about new releases.

    Changelog

    v3.6.9

    June 16, 2022

    Bug fixes

    • Fixed an issue with 'option noquote' When importing a file containing a non-escaped quote in a data field, the import statement would fail with an error when option noquote was in effect. This has now been fixed.

    • Fixed an issue when creating a new report Resolved issue where creating a new report definition would generate an invalid error, stating it requires a dset while a dset had already been assigned.

    v3.6.8

    June 10, 2022

    New features

    • Added the ability to sort a DSET in a Transformer script There is now a statement in Transformer scripts that can be used to sort a DSET in either ascending or descending order by one or multiple columns.

    • Added 'dequote' statement to Transformers The new statement can be used to remove quotes surrounding column names and values.

    Bug fixes

    • Fixed an issue that could show an incorrect account during PDF export Resolved issue where account filter wasn't properly applied to the PDF export of the summary report.

    • Fixed an issue with missing labels in the summary report export Resolved styling issue that made labels invisible for the summary export.

    • LDAP users can be created without a password Local users require a password. The creation of new users required a password, that matched the configured password policy. LDAP users do not require a password stored in our database. This is because our system always verifies the password with the LDAP system, and never stores the LDAP password. If an admin user wanted to create and configure a new LDAP user before they logged into our system, this was not possible as the empty password would fail the password validation. This change now allows LDAP users to be created without a password, while still validating local users. If the user source changes to a local user, a password will be required.

    v3.6.7

    May 11, 2022

    Bug fixes

    • Hidden navigation links Resolved an issue where navigation links were unintentionally hidden

    v3.6.6

    May 9, 2022

    Bug fixes

    • Fixed an issue when creating a new budget revision Not all budget items were copied over for new budget revision. This has been resolved now.

    • Fixed an issue when switching reports from the rates screen Switching reports while on the rates screen could cause an error. This has been resolved now.

    • Fixed an issue with Adjustment policies Previously the adjustment policies screen would become inaccessible.

    v3.6.5

    April 26, 2022

    New features

    • Improved API functionality for PATCH requests Previously, PATCH requests would return an error if the 'name' attribute was supplied in a request. Now, as long as the value hasn't changed, it is possible to supply this field in a valid request.

    • Removed unused API routing There were some unused API routes that could lead to confusion for customers. We have removed these to simplify the API.

    Bug fixes

    • Fixed an issue where multiple 'services' blocks are used in the same Transformer If a transformer script contained multiple 'services' statement blocks, and if any except for the last of them didn't see any new services to create then an internal error would be logged and the job would fail. This has now been fixed.

    • Fixed an issue with inherited rates for manual service subscriptions If an account was inheriting a custom rate set for a service at a parent (or higher) level of the account structure then subscriptions for that service would be charged at the global rate. This has now been fixed.

    • Fixed an informational message when creating an Adjustment for Service Categories Resolved an issue where the user wasn't shown a warning when selecting a category that included a service with a charge type that is not supported for adjustments.

    v3.6.4

    April 1, 2022 🐸

    Bug fixes

    • Fixed a time-out issue for long-running jobs with newer RabbitMQ releases When the system was using a RabbitMQ server version higher than 3.8.15, it was timing out for long-running processes before they were finished in the background. This is now fixed and all processes are ending without being timed out.

    • Fixed a rare issue with quantity adjustments With certain combinations of quantity and price adjustments, it could be that if the last quantity adjustment resulted in a negative quantity then no compensation to bring the final quantity up to zero would be applied. This has now been fixed.

    v3.6.3

    March 17, 2022

    Bug fixes

    • Solved an issue when creating global rate revisions through a Transformer Transcript was considering rate revisions to be the same unless they differed by approximately 0.001 or more, therefore new revisions weren't being created. This comparison should now detect differences greater than 0.00000001 and will create new rate revisions accordingly.

    v3.6.2

    March 9, 2022

    Bug fixes

    • Fixed an issue with non-escaped quotes when importing data into a Transformer When importing data into a Transformer, if option noquote is enabled then non-escaped quotes will no longer cause the import to fail.

    • Fixed a memory leak in the 'match' statement in Extractor scripts When using the 'match' statement many times in a loop, memory consumption would increase significantly. This issue has now been fixed.

    • Fixed an issue when displaying services in the rates screen When services from different datasets were linked to the same service category, all services in that category would show up in the rates screen, regardless of the dataset/report they belong to. This has now been resolved to honour the dataset a service belongs to.

    v3.6.1

    February 21, 2022

    Bug fixes

    • Fixes issue with incorrect status for workflow runs When a workflow run was interrupted during execution, the status of the run could not be read by the API.

    v3.6.0

    February 16, 2022

    New features

    • Rate tiers The charge engine and reports now support tiered rates (beta feature)

    • Improved API documentation The API documentation has been overhauled, with more information and new examples added. Hopefully this will make using the API easier for users.

    • Signed backend components Backend executables are now code signed to avoid false-positive hits from A/V scanners.

    Bug fixes

    • Fixed an issue which could cause workflows with a "Publish Report" or "Evaluate Budget" step to fail

    • Fixed incorrect payload sent to the IdP server when logging out using the SAML2 SLO endpoint The payload sent to the IdP server did not contain the correct NameID data needed to link the session to the user logging out. This was causing the logout process to fail in some situations.

    • Reset password redirect Resolved issue where a user could be redirected away from the reset password page.

    v3.5.7

    December 01, 2021

    Bug fixes

    • Fixed an issue when installing RabbitMQ locally When using the interactive installer and installing RabbitMQ locally, the installer would store invalid values for the RabbitMQ configuration. This has now been resolved.

    • Fixed an issue when system returns from hibernation If a Windows system that is running the "Exivity Backend Service" would return from hibernation mode, the service (merlin.exe) would not recover and remain in a broken state. This has been resolved.

    • Fixed an issue with the Transformer editor Resolved a corner case whereby the editor could break when using Transformer editor code snippets.

    v3.5.6

    November 22, 2021

    Bug fixes

    • Adjustment type Fixed an issue where the type of adjustment was always shown as premium.

    v3.5.5

    November 19, 2021

    New features

    • The name of the EXIVITY_AGGR_COUNT column created by the 'aggregate' statement is now configurable

      The 'aggregate' statement now supports an optional parameter (called 'counter_column') which allows you to specify the name of the column into which the aggregation counters will be written. Please refer to the 'aggregate' documentation for more details.

    • Transformer scripts will now log a warning if no RDF files were updated or created

      When running a Transformer script that does not create or update an RDF file using the 'finish' statement, a warning will be written to the log to that effect.

    Bug fixes

    • Fixed an issue when executing quoted scripts from a workflow

      When a script (e.g. PowerShell) was quoted in a workflow step, the workflow would not execute the script. This has now been resolved.

    • Fixed an installer issue

      When installing RabbitMQ into a program path that contains spaces, RabbitMQ could fail to install the service. This has now been resolved.

    v3.5.4

    September 15, 2021

    Release Candidate - available to selected users only

    New features

    • Added more information to audit logs when deleting services The audit log now contains the service key when a service is deleted. Previously it recorded the event, but did not include specific information about the service itself.

    • Added Filtering on workflow-step type The feature allows users to filter workflow steps by type.

    • The installer now has valid code signing applied

      As of this version, the Exivity installer will be signed using the Exivity code signing certificate in order to increase the security of the distributed installer package.

    Bug fixes

    • Increased the number of API handlers This release now runs 18 concurrent API processes on a node with the API service installed, to allow a higher number of parallel requests to the REST API.

    • Fixed an issue with overwriting a Lookup table Solved an issue where saving a lookup could lead to unexpected behavior

    • Fixed an issue with user notification subscriptions Solved an issue where it was not possible to save a notification as a user

    v3.5.3

    August 27, 2021

    Release Candidate - available to selected users only

    Bug fixes

    • Fixed an issue with the ordering of headings in the Transformer Preview There was an issue where the Transformer Preview would sort the headings in reversed order. This has now been resolved.

    • Fixed an issue when creating a Transformer from a template When creating a transformer from a template, after creation the GUI would show a "leave the page" popup. This has been resolved.

    • Fixed an issue when creating an Extractor from a template When creating an Extractor from a template, after creation the screen would not switch to the Variables tab. This has now been resolved.

    v3.5.1

    August 18, 2021

    Release Candidate - available to selected users only

    Bug fixes

    • Fixed an installer error for the workflow migration script PHP environment errors could show up in the installer log during the execution of the workflow migration script. This has been resolved

    • Added cleanup steps to uninstaller to remove PSQL and RabbitMQ traces

    • Hidden scrollbar preview Resolved issue where the scrollbar wasn't visible in a Transcript Preview.

    v3.5.0

    July 26, 2021

    Release Candidate - available to selected users only

    New features

    • Implemented account lockout strategy User accounts will now get locked out for 15 minutes after 5 consecutive failed login attempts.

    • Account names are now set by the latest date seen Previously, when preparing reports account names were set based on the last data to be processed, even if that data was older than that used to originally set the account name. Now, when accounts are being synchronised during the report preparation process, names will only be updated if the data being processed is newer than that which was used to last set or update the name.

    • Increased calculation precision in Transformer scripts The precision of calculations in Transformer scripts has been increased to 14 decimal places.

    Bug fixes

    • Fixed an issue with manually created accounts When creating accounts manually in the GUI, on occasion an account could be created which did not fit correctly into the account hierarchy. This has now been fixed.

    • Fixed an issue with updating account names When preparing a report, it could be that some account names were not updated when they should have been. This has now been fixed.

    • Fixed saving service resources including 'budgetitems' relationship When saving a service resource including the budgetitems relationship, an error was returned. This has been fixed and the relationship between service and budgetitems works both ways.

    v3.4.3

    March 22, 2021

    New features

    • Usage data generation based on consumption start/stop events Added functionality to produce usage data from consumption start/stop/update events, including consumption which spans several days. See the Transform documentation for details.

    v3.4.2

    February 24, 2021

    Bug fixes

    • Fixed an issue with SAML user account access provisioning Previously, user account access provisioning would grant access to a matching account, including all its child accounts. With this release, the SAML user account access provisioning only grants a user access to the first matching account (i.e. with the lowest depth) in the account hierarchy.

    • Fixed an issue where reports could become invalid for certain user account permission configurations

    • Resolved a race condition related to preparing a report On rare occasions the actions of multiple users administering accounts and reports simultaneously could result in duplicate charges in reports. This has now been fixed.

    v3.4.1

    December 08, 2020

    Bug fixes

    • Fixed an issue with PDF export of the summary report In certain environments, the PDF export feature was broken. This issue has now been resolved.

    • Default SAML user group re-applied when a user logs in An issue has been resolved which caused the default user group to be re-applied when existing users were logging in through a SAML Identity Provider.

    v3.4.0

    November 17, 2020

    New features

    • Executing a Report now requires a valid license

    Bug fixes

    • Fixed a minor issue with the calendar widget

    • Fixed a minor issue with the SAML SLS endpoint

    • Fixed an issue with custom rates and subscriptions

      Sometimes a subscription would not reflect changes in custom rates for a service. This has now been fixed.

    v3.3.0

    November 02, 2020

    Due to security considerations, starting with this release, each user is bound to the logon provider set in the source attribute. E.g. if a user has their source set to 'local', they can't log in with the SAML2 or LDAP SSO provider.

    New features

    • Improved user provisioning (beta) Added options to provision user attributes from a SAML Identity Provider/AD response. It's possible to provision the user's display name, username and email address.

    • Added claims based account access provisioning (beta) Added options to provision users' permission levels using attributes from a SAML Identity Provider/AD. It's possible to provision both the usergroup and user account access (by matching either the account key or metadata value to a SAML/AD response attribute).

    Bug fixes

    • Change to log filenames for extractors and transformers When scheduling extractors and transformers with workflows, the selected environment is now part of the log filename to distinguish their log files when they are running at the same time.

    • Fixed opening curly brace detection When an opening curly brace wasn't preceded by a space, it sometimes wasn't properly processed. This has been fixed.

    • Improved error message on lookups screen Fixed an issue where a user might get a confusing error message when saving a lookup file.

    v3.2.7

    October 09, 2020

    New features

    • Added support for the Safari web browser

      Exivity now supports the Safari web browser

    • Added ability to set custom escape character in Transformer Previewer Added a dropdown to the transformer previewer where the client can select which escape character to use during Transformer preview mode

    Bug fixes

    • Fixed an issue with the Dataset Manager Resolved regression issues regarding the display of RDF dates

    • Fixed an issue with report filters There was a bug where filtering on 'parent_account_id' in a report with a string instead of a number would cause an error. This has now been fixed.

    • Fixed an issue with displaying workflow start times in the workflow list The time column of the workflow list omitted the hour at which the workflow would run for daily and monthly workflows

    v3.2.5

    September 30, 2020

    Bug fixes

    • Improved error message when creating invalid revision When creating a duplicate rate revision for the same effective date, an appropriate error message is now shown

    • Improved error message for LDAP Added a more meaningful message when unable to connect to an LDAP server

    • Service Category names must be unique It was possible to have duplicate service category names. This could lead to confusion and has now been resolved. The Service Category name must now be unique

    v3.2.4

    September 9, 2020

    Bug fixes

    • Increased the boundaries of some USE arithmetical operations

      The operators +=, -=, *= and /= were limited to a 32-bit range and this could cause an overflow in certain real-world applications. The range has now been increased to match that of the other arithmetical operations, which is based on the range of a 64-bit floating point value: 1.7E +/- 308 (up to 15 digits).

    • Fixed an issue where the CSV export of the instance report could fail The CSV export of the instance report did not work when the usage column was included and some of the instances did not have any usage data

    • One-off subscriptions should ignore the charge_day field

    v3.2.1

    September 1, 2020

    • Quantity and Charge Adjustment are now handled in strict order As of this release, quantity adjustments are applied first and before any charge related adjustments. This in turn enables the ordering of individual adjustment policies.

    • Implemented translations settings (Beta) Implemented an automatic translation feature for Dutch, German, and French. This can be configured on a system or user level. Currently, this is released as a beta feature.

    • Delete old Nginx log files Old Nginx log files will now be deleted by the garbage collector. This will help reduce disk space.

    Bug fixes

    • Fixed an issue when partial preparing manual services When partial prepare kicks in, it previously skipped manual services. This has now been resolved.

    • Fixed SAML ACS invalid schema error Some users were experiencing an Invalid Schema bug when accessing the /v1/auth/saml/acs endpoint. This has now been fixed.

    • Fixes "Nginx Log File" not found error The location of some log files could not be found. This has been resolved.

    v3.1.5

    July 21, 2020

    Bug fixes

    • Premature validation warning

      Resolved an issue where a user could get a premature validation error for certain input fields.

    • Fixed an issue with the budget report

      With certain budget configurations, the budget report would display a message instead of the report even if the budget configuration was valid. This has now been resolved.

    • Improved user message when execution time is exceeded

    v3.1.2

    July 3, 2020

    New features

    • Ability to skip database backup when updating When updating the software, the user can now skip the PostgreSQL database backup to improve update speed

    Bug fixes

    • Fixed an issue with the webproxy.conf proxy_pass URL

      When updating the software, the proxy_pass URL for the webproxy.conf NGINX configuration was always reverted back to https://127.0.0.1:8002. This has now been resolved

    v3.1.1

    June 28, 2020

    Fixed charges no longer available As of version 3.1.0, we're removing fixed charges (or: interval-based charges) because of the limited number of use cases and low customer adoption of this feature. In case you are an Exivity customer and are still using this service parameter, please reach out to us so we can provide you with alternative solutions.

    New features

    • Improved auditing when managing accounts manually Added a number of new audit points when performing manual account management

    • Auditing token creation correctly logs client IP address when API is behind a proxy server

    • Updating user profile information now requires providing the current password

    Bug fixes

    • Fixed an issue where some users could select budgets even if it would not contain any details

    • Fixed an issue which sometimes caused the workflows screen to load for a long time A resource-intensive operation that ran whenever the workflow page was visited has been removed.

    • getCUPRs function ported to PSQL getCUPRs function was changed from using SQLite global DB to PSQL global DB

    v3.0.5

    May 26, 2020

    Bug fixes

    • Fixed issue at /lookups where in specific cases an error was undefined

    • Fixed issue where user was shown an incorrect validation warning When providing a number with more than one decimal, the user interface would show an error message. This has been resolved.

    • Added cache check in partial preparation There was an issue where the partial preparation code didn't check for cache table presence before analysis, which caused execution errors. This has been fixed.

    v3.0.4

    April 30, 2020

    New features

    • New feature: Subscriptions When Beta features are enabled, users can now create one-off and recurring daily, monthly or yearly subscriptions for services for which a metered data source is not available.

    • New feature: Workflows widget When Beta features are enabled, a new Workflow traffic-light widget will be shown on the main Dashboard page.

    • New feature: Global Variables When Beta features are enabled, users can now manage Global Variables under System Administration. This enables users to decouple Extractor and Transformer variable values from scripts, thus supporting different values on a per-instance basis without changing the underlying scripts.

    Bug fixes

    • Fixed an issue with manage_metadata_definitions Granting the "Manage Metadata" permission is now possible when creating a security group

    • Fixed an edge case for incorrect net value on accounts table Whenever cogs or charge had 0 as value the net value wouldn't be displayed correctly on the details table of the accounts report.

    • Fixed an issue when creating a manual service Manual per unit cogs type services will not require cogs column

    Older release notes can be found here.

  • Fixed an issue with RabbitMQ connection for backend components In certain cases, a connection issue could occur towards RabbitMQ from several of the backend components (chronos, griffon and merlin). This has now been resolved.

  • SQL errors during report generation should now cause workflows to fail Previously, when running a report if a SQL error occurred while writing the report data to the database then the workflow would still complete with a success status. This has now been changed and the workflow will fail.
  • SAML Token added to garbage collector For SAML users, a new SAML Token table has been added to the system. To make sure this doesn't grow too large over time, removing expired tokens has been added to the garbage collector.
  • Add key selector to resources at rate screen At the rate screen, added the possibility to select key as label and filter for services and accounts.

  • Added Extractor support for uppercase and lowercase of values The following is now possible in order to change the casing of values in an Extractor: uppercase var_name | {buffer_name} lowercase var_name | {buffer_name}

  • Adding "last day" as possible charge model to services It is now possible to have a service charged on the last day of a month. The "last day" charge_model has been added to services to allow this. Please note that this applies to monthly services only.

  • Improve service validation on charge_model attribute The charge_model validation has been improved. Now the interval will be validated with the charge model.

  • Long JSON parslets are no longer truncated at a shorter length than they should be When expanding a JSON parslet, the maximum length of the resulting data was truncated at a shorter length than the maximum allowable length of 8095 characters. This has now been fixed such that up to 8095 characters can be retrieved by a parslet.

  • Negative uplifts and adjustments for instances Fixed an issue where negative uplifts/adjustments were not visible under the instances report of summary.

  • Resolved an issue where an optional webhook notification input field was treated as required.

  • Recurring subscriptions end date bug When creating a recurring subscription, an error was shown if no end date was supplied. Recurring subscriptions should not require an end date. This has now been fixed.

  • Update resources relationship missing data exception When updating a resource's relationship with missing data attributes, an exception was thrown. This shouldn't be required. This will now only happen if the data is null.

  • Fixed an issue when using the word 'subroutine' in a 'print' statement Previously, a 'print' statement that had the word 'subroutine' as one of its arguments would cause the script to fail to execute. This has now been fixed.

  • Improve validation by enforcing maximum adjustment numbers For a given Account and a given Service, no more than 16 Adjustment policies are allowed to be active at the same time. Validation has now been added for this.

  • Fixed an installer issue when using remote RabbitMQ server Previously the interactive installer would write an invalid/incomplete config.json when providing a remote RabbitMQ server. This has now been resolved.

  • The 'terminate' statement in a Transformer will now accept 'with error' as optional parameters

    The statement 'terminate with error' is now supported by Transformer scripts and will cause the processing of the current day to abruptly end. A message will be written to the logfile to the effect that the error was explicitly requested and the task will terminate. If the 'continue' option is enabled, processing will resume on the next day in the range (if a range was specified) else the workflow will exit.

  • Increased the number of API handlers

    This release now runs 18 concurrent API processes on a node with the API service installed, to allow a higher number of parallel requests to the REST API.

  • Fixed an issue with report level labels

    Fixed an issue where report labels weren't updated.

  • Fixed an issue with the Adjustments

    Resolved an issue where details in some cases weren't reflected in the form of the Adjustments screen.

  • Fixed issue with archiving old log files

    In some cases, collecting old log files resulted in archiving some files that were already archived before.

  • Fixed issue with updating an extractor

    When an extractor was updated with NULL arguments, the request was incorrectly considered invalid.

  • Fixed an issue when creating budgets

    Resolved an issue where there was unexpected behavior when creating a multi-level budget in the GUI

  • DEBUG level logs no longer contain the value of parameters provided to subroutines

    The values of parameters passed to subroutines are no longer included in DEBUG level logfiles as such values could potentially contain sensitive data

  • Fixed an issue when viewing proximity log

    Solved an issue where certain log types caused the log screen to fail.

  • Fixes issue with some users not being able to change password

    Some users with view_report permissions only were not able to change their password. This issue has now been fixed.

  • Log message could contain metadata

    Log entries can contain 'context' to help debugging. This is usually one JSON string. Some log entries could contain multiple contexts, which was unexpected. If this occurs, these contexts are now merged into one JSON string.

  • Fixed an issue when clicking arrow list

    Solved issue where a list could disappear when clicking the arrow in a dropdown list in the GUI

  • Fixed an issue with the report level filter

    Resolved an issue where a default report level wasn't selected when selecting a new report in the GUI

  • Clarified the log entry written if an extractor script has an unmatched double quote on the last line

    In cases where the last line of an extractor script has an unmatched double-quote, and that last line is also not terminated with a carriage return, then an internal error would be generated. The result is correct (the script will not execute) but the error has now been changed to clearly indicate the exact nature of the problem.

  • Fixed an issue related to sorting of datasets

    Resolved an issue with the date sort of datasets in the GUI

  • Fixed an issue when exporting the Consolidated Summary report Solved an issue where the consolidated export was only exporting the first account

  • Fixed a security issue related to User Access Control Solved an issue where the cache was not properly cleared and as such, the user could view invalidated data.

  • Fixed an issue with sorting service names in the summary report Services in the summary were sorted on ServiceName instead of Service Description. This has now been resolved.

  • Fixed an issue where a certain type of invalid XML parslet would cause an extractor crash In rare cases, using an XML parslet to extract an attribute value could cause a crash. This has now been fixed.

  • Fixed missing budget filter "service" bug When a user selected the "service" filter in a budget, this data was not stored correctly. This has now been fixed.

  • Fixed an issue with auto-completion There was an issue where auto-completion of example blocks in the Extractor/Transformer editor would not support "tabbing-through" function parameters. This has now been resolved. Additionally, an issue was fixed where the "rename" function would autocomplete wrongly to "convert".

  • Fixes issue with links generated from singular form of the resource type Links for relationships inside responses were generated using the singular form of the resource.

  • Fixes issue with incorrect links generated inside responses When a response contained a "link" attribute, it was incorrectly generated.

  • Better handling of hash and whitespace characters in quoted strings within extractor scripts Previously, a hash symbol at the start of a word in a quoted string literal would be treated as the start of a comment, causing the string to be truncated and an "Unmatched quote" error generated. Additionally, quoted strings containing multiple consecutive whitespace characters would be modified during execution such that they were replaced with a single space. These issues have both been fixed.

  • Resolved issue where the user couldn't set alt_interval The user couldn't save a subscription that used alternative intervals because of invalid values. This is now solved.

  • Fixed an issue with environments and global variables in Transformer scripts Attempting to use different environments, and the global variables within them, in a transformer script may have caused an error and the script to fail. This has now been fixed.

  • Workflow status Resolved an issue where the GUI could crash because of an invalid status state

  • Added new log files to the log rotation policy Log files generated by the Notification component and by the new Scheduler component are included in the log rotation policy.

  • Added the option to run a workflow for a date range When running a workflow manually, you can now choose to run it for a date range, not only for a single date. When running against a date range the steps are executed individually for each day in that range.

  • Redesign of the Data Pipelines > Workflows screen Complete overhaul of the Workflows page, improving the configuration user experience for individual steps and making it easier to review associated step logs.

  • Added a cascading DELETE parameter to the API API now supports a new cascade: boolean query parameter. This parameter, if true, will allow the severing of non-nullable relationships and will remove the related record afterwards. This new parameter specifically applies to the workflowstep and workflowschedule endpoints.
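
    For illustration only (the resource path and id below are placeholders, not values taken from this note), a cascading delete of a workflow step could be issued as:

      DELETE /v1/workflowstep/42?cascade=true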

  • Data aware report filters Report filters only show entities available in the selected reporting date range

  • Highly available scheduler The scheduler has been rewritten to support the new message bus architecture and has the ability to run on multiple nodes at the same time.

  • Creating a new report now requires the "all accounts" permission Only users that have access to all accounts will be able to create new reports. This avoids situations where a user could previously create a new report that was not visible after creation.

  • API support for changing user login method It is now possible to change an existing user's authentication method to local, LDAP or SAML2. Currently this is only supported by invoking the REST API.

  • Changed behaviour of logout when Single Sign-On is configured Exivity now allows the logout endpoint (SLO endpoint) to remain unconfigured, in which case the user will be logged out only from the application, not from the entire Single Sign-On environment.

  • Budget notification

  • Summary report notification Users can now choose to receive a pre-defined report in either CSV or PDF format through a notification

  • File available notification

  • Admin managed notifications Admin users and users with the manage_users permissions now have the ability to add notification channels and subscriptions to other users. This allows admin users to grant users subscriptions to notification types they would otherwise not have.

  • Minimum commitment and Adjustments included in cost summary subtotals Details of minimum commitments and adjustments are now included in the cost summary subtotals for service categories.

  • Webhook notification channel Allow webhooks as notification channel, forwarding the original event as JSON payload to a custom webhook URL

  • Implement internal communications between components using RabbitMQ message broker This allows for improved multi-node setups

  • Fixed an issue with some combinations of HTTP methods in extractors In extractor scripts, making an HTTP DELETE request could cause subsequent HTTP calls to continue to use the DELETE method even if another method was specified. This has now been fixed.

  • Added the 'notificationsubscription' relationship for a user entity When requesting a user entity, the notificationsubscription relationship can now be included in the response.

  • Audit screen freeze Fixed an issue where the audit screen could become unresponsive

  • Fix Publish Report notification The clear option for the account field now works.

  • Fix state update adjustments Fixed an issue where a field wouldn't be updated when moving to the next resource.

  • Subscription inherited rates Fixed some incorrect behavior with rate inheritance on the subscription management screen

  • Removed the limit of CSV files that could be open at once in extractor scripts Previously, a maximum of 16 CSV files could be open for writing in extractor scripts. This limit has now been removed and any number of CSV files can be open simultaneously

  • Fix decimal inputs Fixed an issue where an input would not accept a decimal value with leading zero.

  • Lookup file Fixed an issue where switching between lookup files wouldn't cause the screen to be updated

  • Show manually created service Solved an issue where manually created services were not visible before a browser refresh

  • Disclaimer modal Fixed an issue where the disclaimer had to be dismissed twice

  • Fixed an issue with compound MSSQL statements in extractors where the first statement is not a SELECT Usually, SQL statements executed in an extractor script are based around some form of SELECT. However, in some cases more complex compound statements are required. Previously, a statement using a temporary table in MS SQL Server, such as

      CREATE TABLE #jos(Name char(20));
      INSERT INTO #jos (Name) VALUES ('2');
      SELECT * FROM #jos;

    would not return any results, as the INSERT statement returns an empty result set. Extractors now ignore empty result sets in such statements and return the first result set that is not empty (assuming there is any data returned by the query at all).

  • Fixed an issue with transformer scripts when importing data with non-escaped quotes Importing data with non-escaped quotes could sometimes cause Transformer scripts to fail. This has now been fixed.

  • Fixed types for returned attributes When accessing various endpoints, id attributes were returned as integers instead of strings. This was fixed to comply with JSON:API specifications.

  • Fixed incorrect display of date fields in slack notifications When a slack notification was sent, the date of the notification was wrongly set to sometime in the future

  • Fixed incorrect display of extra HTML tags in emails When emails are sent, some HTML tags were shown as text in the email body

  • Fixed an issue with encoding of double quotes when exporting account data as CSV

  • Fixed long delay when displaying a large number of budgets when including reports

  • Fixed an issue with displaying workflow start date in dashboard Previously the start date of a workflow was displayed incorrectly in the Workflow dashboard widget

  • Fixed an issue with creating adjustments There was a problem where the GUI did not allow the creation of adjustments that had a non-integer amount. This has now been fixed.

  • Fixed a divide-by-zero error on reports In corner cases where the final chargeable quantity for all line-items on a report is 0, a divide-by-zero error would occur. This has now been fixed.

  • Fixed the description of the 'Budget' permission This permission only provides access to the Budget report. In the previous version, the description of this group permission contained false information.

  • Fixed an issue when configuring a Budget for a leaf node If a leaf node in the budget structure has the distribution set to even or account, the Budget engine generated a false warning that an amount for the node is left undistributed. This has now been resolved.

  • Fixed a rare cosmetic issue when updating rates in ETL When executing a 'services' statement in a Transformer, if there was an existing service rate revision with a different rate for the same effective date as a new revision then an error containing the text "incorrect binary data format" would be logged. This issue was cosmetic, but has now been fixed.

  • Fixed a very rare issue with minimum commit Resolved an issue whereby it was possible on extremely rare occasions for the minimum commit calculations applied to one service to be incorrectly applied also to other services in a report.

  • Fixes issue initializing directories in home directory There was a bug where, when the home directories were initially created, two directories were incorrectly created in the /home directory.

  • Improved support for UTF-8 in usernames

  • Resolve global variables in "uri" statement In some statements (e.g. uri) global variables weren't properly resolved. This has now been fixed

  • Fixed an issue with COGS charges for services with an average monthly charge model COGS charges for monthly services that used the average charge model could be slightly lower than they should be, as the COGS rate for the first day seen was not factored into the rate averaging. This has now been fixed

  • Fixed an issue when managing rates An aggregation_level error could occur when updating rates. This has now been resolved.

  • Resolved a timezone offset issue for Workflows Resolved an issue where timezones were incorrectly applied when creating workflows

  • Resolved an issue with the Reports menu Fixed an issue where the report screen would crash after selecting multiple reports

  • Only recurring subscriptions require the "charge day" field, but it was required for all subscriptions. This has now been fixed
  • Fixed a decimal precision issue with the timeline graph The Y-axis of the timeline sometimes displayed long floats. To resolve this the precision has been fixed to two decimals

  • Fix for "space" as thousand separator It is possible to select different symbols as a thousand separator for large numbers shown to users. One of the options, space (" "), was not saving correctly. This has now been resolved

  • Fix error handling transformer Fixed an issue where a failed transformer's error was sometimes not processed properly, which prevented the transformer from giving the user feedback

  • Added index to speed-up report preparation During report preparation, Edify executes several queries to the adj_lookup table. An index has been added to this table to improve performance

  • Improve user error page Updated the error page to improve user experience. Customer logo will now be shown here, if it has been set.

  • Adjustment API endpoint will now accept order In the past, there was no way to reorder adjustments. Users had to delete and recreate them to do this. This has now been solved. The order field now takes an order number. If this is not used, the current functionality will still work, using the creation date instead.

  • Reports can now include adjustment name The adjustment name can now be added to a report. This is useful for report users that do not have access to view the whole adjustment.

  • Support in UI for changing the order in which Adjustments are applied Since it is possible to create multiple Adjustments for a single account, they may affect each other. It is therefore desirable to have the ability to control the order in which they are applied. As of this version, it is possible to change the order in which Adjustments are applied.

  • Improved invalid token handling Changed how this error is logged. It will now be treated as a notice instead of a warning.

  • Fixed an issue with running transformers

    In release 3.2.0 there was a problem running transformers from the GUI. This has now been fixed.

  • Improved the message when the execution time of an Extractor, Transformer or Report is being exceeded

  • Fixed an issue with budget leaf account distribution When configuring a budget for a budget leaf, the distribution setting will be forced to be of the type "shared"

  • Fixed an issue when deleting services Previously it wasn't possible to delete a service if it still had any associated rates. This behavior has now been corrected so that a service, including its relationships, is deleted

  • Fixed an issue with the Garbage collector exit code The Garbage collector would previously always return an exit code of 259 when invoked manually. This has been resolved

  • Fixed an issue where subscriptions could show up on more than one report In some cases, a subscription (created for an account associated with a specific report) could be shown when a different report was run. This has now been fixed

  • Fixed graph image export When exporting a graph chart (i.e. Pie, Bar, Line) in any of the reports (Accounts, Services, Instances) the file download would not start. This has now been resolved

  • Restored indicators for account access list Pencil-like indicators inform the user where nested accounts have been selected. This functionality was unintentionally removed from the user management screen but has now been restored

  • Horizon performance improvements Implemented two separate changes to improve Horizon budget execution performance: (1) a budget is validated only once after a budget configuration change and (2) a new database index was added to improve database query execution

  • Always show decimal values Resolved an issue where some values would be displayed using their scientific notation

  • Resolved issue when deleting services Fixed an issue where the user would get an error when making multiple delete requests for services

  • Implemented current password validation A user is now required to enter their current password before any changes to their profile are applied
  • Added logging audit entries for metadata, service subscriptions, and budgets

  • Mitigated a potential security issue (internal reference EXVT-3773)
  • Fixed an issue where sometimes log files would appear malformed in the Transformer run tab

  • Reverted the way a user deletes an RDF

  • Fixed form behavior in rates screen

  • Fixed "other" option behavior at services

  • Fix report filter When switching reports, some of the filters were not reset and appeared unused while they actually held a value not related to the currently selected report. This made the reports appear empty. This has been resolved.

  • When the API runs out of memory (possible for very large reports) it will respond with a descriptive message

  • Incorrect display of minimum_commit_delta_charge and minimum_commit_delta_quantity When a minimum commit quantity was set for a service rate revision, the resulting charges could show up incorrectly by having the quantity and minimum commit delta swapped on the reports

  • Fixed escape option behaviour in 'import' There was an error where the '\"' sequence was always treated as an escaped quote even when the 'escape' option was switched off (the default). This has been fixed.

  • Rate revision date never changes Transcript was trying to change the rate revision date when it detected an attempt to create a new revision with the same attributes but an earlier date, which conflicted with a database constraint. This behaviour has been removed in order to avoid execution errors.

  • Fixed issue where line breaks were not shown When providing an address for the summary report with multiple new lines, only the first would be shown. This has been resolved.

  • Fixed undefined tool tip at workflows There was a column which showed a tool tip with undefined as its value. This has been resolved.

  • Select a single date for transformer run After the introduction of a new calendar, the single date selection was missing. This has now been resolved.

  • Services and categories at subscriptions are now alphabetically sorted

  • Fixed an issue where dataset columns were missing After repreparing a report, the columns associated with a dataset were not visible. This has been resolved.

  • Fixed visibility of newly created datasets When running a Transformer, newly created datasets were only visible after a refresh. This has been resolved.

  • Show a red underline when an incorrect value is entered in a textarea input

  • Show correct validation values when saving environment

    When saving an environment, we will now show the correct error messages for missing values.

  • New feature: Metadata for Services Metadata can now be added to all services just like with accounts. Define a metadata definition first, then attach the definition to a dataset in Data pipelines > Datasets. All services in this dataset will now use this set of metadata fields. Metadata information itself can be added and modified in Services > Overview and is available in the services reports.

  • Increased rounding of set to match calculate statement The Transformer set and calculate statements have had their rounding precision increased to 12 decimal places.

  • Added more information to the summary report The summary report now contains the following additional columns: service_key, account keys (between 1 and 5 inclusive depending on the report), start_date, end_date. These columns are also included in CSV exports.

  • Improved quoting in exported CSVs To avoid potential complications with Excel, any cell values in CSVs exported from Exivity that are not numbers and begin with any of the characters =,+,- or @ are now preceded with a single quote.
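
    For illustration (the cell values below are hypothetical), an exported value now changes as follows so that Excel treats it as text rather than a formula:

      =SUM(A1:A2)   becomes   '=SUM(A1:A2)
      @username     becomes   '@username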

  • A new setting has been added to allow users to stay logged in. If disabled (default), users need to log in each time they open Exivity in their browser.

  • Administrators can now set the token lifetime. When the token lifetime expires, users need to log in again.

  • Added ability for users to log out of all devices.

  • Mitigated a potential security issue (internal reference EXVT-3457)

  • Mitigated a potential security issue (internal reference: EXVT-3455)

  • Mitigated a potential security issue (internal reference EXVT-3270)

  • Implemented a new charge model for monthly services The charge for a monthly service may now be based on the usage of a specific day in the month.

  • Improvements in report performance Reports are now pre-loaded in the database which speeds up report loading times in the GUI.

  • Added support for LDAP authentication LDAP authentication was available as beta feature already, and is now generally available. A guide will be added to our documentation soon. Configuration options are available in the Settings screen (Single sign-on tab).

  • Fixed invalid format issue in metadata lists Previously, empty lines were persisted as list options. Options are now trimmed and validated before persisting.

  • The API now requires additional attributes for certain service types

  • Fixed a small issue with the metadata selector in the report management screen

  • Removed toolbar from PDF export The Summary report PDF export included the toolbar on the top of every page of the PDF document. This has now been removed.

  • Fixed an Extractor XML parsing issue USE could previously fail when iterating over an empty XML node inside a foreach loop. This has now been resolved.

  • Fixed modified display issue in Accounts Overview When switching from Account Name to Account Key in the Accounts Overview screen, the modified state / pencil icon disappeared previously. This has now been resolved.

  • Fixed an issue when scrolling in the Accounts Overview When a report definition contains a large number of Accounts, the Accounts Overview screen could show cosmetic glitches when scrolling

  • Fixed a Transformer issue with skip_corrupted_records When import option skip_corrupted_records was set, import could fail if last column in the record is missing closing quote. This has been resolved.

  • Fixed a Transformer issue with aggregate When using the aggregate max function, Transcript could pick the wrong value.

  • Fixed a false warning about invalid COGS type in the logfile When preparing reports, in some cases a lot of warnings would appear in the logfile that state: Invalid cogs_type (0) in database for service ID nnn where "nnn" is a service ID. This was a false warning and could be ignored but looked concerning and could lead to larger logfiles. This issue has now been fixed.

  • Removed "remainder" option from lowest level accounts when setting budgets When a budget is set for an account at the deepest report level, the "remainder" option is no longer shown, as there are no sibling accounts to share the budget with.

  • Fixed an issue where (harmless) SQL errors could be logged when deleting services When deleting services, SQL errors could be present in the logfile for days where reports that reference those services had not been prepared. This has now been fixed.

    Archive

    Version 2

    v2.10.2

    March 25, 2020

    Bug fixes

    • Fixed an issue where deleting a metadata definition could remove a dataset metadata entry from the database. No actual data was affected by this bug.

    • Fixed an issue related to Transformer preview The preview menu could reference an incorrect line number when reporting a Transcript code error.

    • Fixed an issue when renaming a DSET If you referred to an incomplete DSET name in a rename statement, Transformer could crash.

    v2.10.1

    February 03, 2020

    New features

    • Added support for LDAP authentication LDAP authentication was available as beta feature already, and is now generally available. A guide will be added to our documentation soon. Configuration options are available in the Settings screen (Single sign-on tab).

    • When a user session is about to expire, an option is provided to prolong the session

    • Added option to specify custom dataset name in transform previewer

    Bug fixes

    • Fixed a small visual issue in the 'about' page

    • Fixed a rare issue with metadata In some cases, selecting a value from a 'list' type metadata field could lead to a crash. This has been fixed.

    • Fixed a small visual issue with updating workflow schedules

    v2.9.5

    January 02, 2020

    Bug fixes

    • Fixed an issue when aggregating an empty DSET Aggregating an empty DSET could cause a Transcript crash; this has now been fixed.

    • Fixed resource leak in Workflow Engine Aeon was leaking Windows handles for internal I/O events that caused problems creating new workflow processes after several days of heavy use. This problem is fixed now.

    v2.9.4

    December 11, 2019

    New features

    • Improved aggregation function in Transcript

      Aggregation performance improved, especially when processing large sorted datasets

    Bug fixes

    • Resolved a rare Edify crash Resolved a rare bug with 'end of file' checking in the Edify pre-processor

    • Fixed an issue where changes to workflows were not processed in the scheduler

    • Case-sensitive column names in correlation

      Correlate no longer fails with an SQL error when two columns in a DSET have similar names that differ only in case

    v2.9.3

    December 04, 2019

    New features

    • Restricted editing of users logging in from SSO providers Updating the username and password in the API is no longer possible for users logging in from SSO providers.

    • Add PUBLIC_ROOT to configuration Previously, if Glass and Proximity were on different machines, Proximity would guess the Glass base URL. It is now possible to set this explicitly via the PUBLIC_ROOT configuration value.

    Bug fixes

    • Fixed an issue with character encoding in the transformer and workflow API endpoints

    v2.9.2

    November 25, 2019

    Bug fixes

    • Implemented a fix to avoid global rate revision changes The global rate revision for services that are populated using the set_rate_using parameter in the services block could cause the rate to be updated. This behavior has now been changed and existing rate revisions will no longer be touched.

    v2.9.1

    November 15, 2019

    New features

    • Improved the performance of the workflows screen when there are a large number of historical workflow runs.

    Bug fixes

    • Fixed a cosmetic issue on the reports page.

    • Fixed an issue with the search not showing results on the dashboard.

    • Restored the functionality to cancel a login attempt.

    • In the transformer preview, datasets are now detected when the import statement has indentation.

    v2.9.0

    November 13, 2019

    New features

    • Added a warning when the current session is about to expire. The user is then given the option to renew the session without having to log in again.

    • Added transformer error annotations. When previewing a transformer script containing an error, the editor will show an annotation on the line where the error occurred.

    • Small changes to settings screen, single sign-on tab. In preparation of an upcoming release of an LDAP adapter for single sign-on, some small visual changes to the settings have been made.

    Bug fixes

    • Fixed an issue where some screens could display an Insufficient rights error. This was caused by API calls that the currently logged-in user didn't have access to. This has been fixed by not exposing this part of the functionality in the GUI.

    • Fixed an edge-case where the GUI displays a blank screen after an upgrade. In some circumstances, after an upgrade, the GUI would start with a blank screen. It was possible to access the GUI by refreshing the browser window. This is no longer needed.

    • Fixed an edge-case where removing a workflow could lead to an error. Sometimes, when deleting workflows containing workflow steps - which in turn contained references to reports which were removed since creating the workflow - the API could not remove the workflow. This has been fixed.

    v2.8.3

    October 29, 2019

    New features

    • Ability to rename a service category in the GUI It is now possible to manually rename a service category in the Glass interface

    • Notifications are now out of beta The configuration of Workflow notifications is now GA

    • Ability in the GUI to create and delete an account It is now possible to manually create and delete accounts from a report definition by using the Accounts menu in the Glass interface

    Bug fixes

    • Fixed a Glass GUI overflow issue A GUI overflow in the details part of the account overview screen could occur. This has been resolved.

    • Proximity will enforce max account level Changed the code to now load the parent account and check the level before adding the relationship when creating new accounts in the GUI / API.

    • Dropdown expanding incorrectly to top In budget management screen, a drop down menu could previously expand to the upper region of the screen, while enough space was available downwards.

    v2.7.2

    September 18, 2019

    Bug fixes

    • Fixed an issue with nested conditions in Transformers If a Transcript script opens a block of statements using 'if' or 'where' but does not have a closing brace, then if the transformer was run for multiple days it was possible to get an error stating that the maximum depth of nested statements had been reached. A check has now been implemented at the end of script execution which will verify that there are no unclosed statement blocks in effect. If there are then a meaningful error message will be logged and the task will fail.

    • Fixed a memory issue in USE A memory related corner case with certain Extractors could trigger an endless loop. This has now been resolved.

    • Removed error for future dates When reporting on a future Budget period start date, Horizon produced an invalid error message. This has now been resolved.

    v2.7.1

    September 11, 2019

    New features

    • Added the ability to Manage and View Budgets It is now possible to create and report on multi-level budgets. More information on this feature can be found at

    • Added header validation in the lookup editor

    • Added the ability to change mail server encryption It is now possible to select TLS, SSL or No mail encryption when configuring an e-mail server

    Bug fixes

    • Transcript fails if service key exceeds allowed size

      When adding services, if the service key is longer than 127 characters, Transcript fails with a descriptive error in the log file

    • Improved diagnostics in Budget engine If there is no usage data for the reporting period, Horizon returns more detailed diagnostic information

    • Fixed crash in case of invalid budget configuration Horizon was crashing when a budget revision contained no budget items. Issue has been resolved, and a more detailed error message has been added

    v2.6.2

    August 28, 2019

    New features

    • Added endpoints to the API to create, update and delete service categories CRUD API endpoints are now available for both services and service categories.

    Bug fixes

    • When encrypting variables, the encrypted result is now deterministic for any given system When encrypting variables, it was possible that for the same input value, different encrypted values would be generated in the script. This was harmless, but has now been fixed.

    • Buffer reset before HTTP retries There was an error where a buffer was not reset before retrying an HTTP request. Therefore, in some situations a buffer could contain the result of several tries, causing an invalid XML or JSON payload. This error was fixed by resetting the buffer before every retry.

    • Step output limited to 1MB Log output from the standard output of a Workflow step is now limited to 1 MB to minimize database pollution

    v2.5.4

    August 08, 2019

    Bug fixes

    • Fixed variable resolution after 'if' block

      Transcript sometimes failed to resolve variables after skipped 'if' block.

    • Upgraded PHP to version 7.3.6

    v2.5.3

    August 06, 2019

    New features

    • Removed Github links in white labeled configurations Removed the github links for extractors and transformers when using a white labeled install

    • Removed default logo and icon for white labeled configurations Removed the standard Exivity logos and icons when using a white labeled install

    • Updated disclaimer for white labeled configurations Disclaimer does not reference Exivity anymore when using a white labeled configuration

    Bug fixes

    • Fixed an issue with notification drivers

      Pigeon could not find the notification drivers for Slack and SMS. This has now been fixed.

    • Fixed an invalid API error when resetting password Reset password could return an internal error (500), if a password didn't exist for the user. This was an invalid error, and has been changed to a valid 204 response.

    v2.5.1

    July 31, 2019

    Bug fixes

    • Fixed an issue where a log message could be incorrectly tagged as an error When creating services, an error message may be generated in the logfile which begins "services: set proration for service". This should be a debug level message and is not an error. This has now been fixed.

    • Fixed an issue where the error message generated by the append statement could be incorrect Fixed an issue whereby when appending one DSET to another, if the first DSET did not exist then the resulting error message in the log would state that it was the second DSET that doesn't exist.

    • Fixed an issue with embeds When the embed option is enabled, if the fields in the CSV to import were quoted then embed would not work as expected. This has now been fixed.

    v2.5.0

    July 10, 2019

    If your Exivity installation connects to the Internet through an Internet Proxy, you will need to ensure that a number of system variables are in place according to before upgrading to version 2.4.7 or higher

    New features

    • Transcript script content is stored in RDF A copy of the ETL processing script used for generating data for any given day is now stored in each RDF alongside the processed data itself, thus enhancing support and diagnostic processes.

    • New import option to include the name of the imported file(s) It is now possible in a Transformer to automatically add a column to each dataset, which will contain the name of the imported file(s). This can be achieved by enabling the filename_column = true option.

    • Support for auto retrying failed HTTP requests in USE If an HTTP request fails during the execution of an Extractor, the script can now be changed to set the option http_retry_count. This determines the number of times an HTTP request is retried. The default value is 2 retries.
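
      A minimal sketch, assuming the option is applied with USE's set statement (only the option name itself comes from this note):

        # Retry failed HTTP requests up to 5 times instead of the default 2
        set http_retry_count 5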

    Bug fixes

    • Continue when http get_header yields no results When retrieving the text of a header from an HTTP response in an Extractor using the feature, the Extractor script will now continue to execute even if no header content was found.

    • Avoid overwriting of services because of charge_model introduction Fixed a bug where if the charge_model was changed, the update would not be reflected after executing the Transformer script. Now, when using in a script, services will be recreated if the charge model changes.

    • When creating services sometimes a rate revision would be created when there was no need to do so Under certain circumstances, when updating the service definitions via the

    v2.4.7

    June 21, 2019

    New features

    • Service category totals are added to summary reports

    • Removed the 'Use local storage' configuration setting Now, the data in the interface is always synced with the server in order to better ensure data integrity.

    • Fixed an issue when aggregating a DSET that is not the default DSET The statement may have failed to correctly aggregate a DSET if the DSET ID specified in the statement was not the default DSET. This has now been fixed.

    Bug fixes

    • Workflow notifications status trigger Fixed an issue where a notification was always sent regardless of whether the failed / successful condition was met

    • Workflow status log historical dates Fixed an issue where the status logs for a workflow could display incorrect timestamps

    • Fixed an issue which could lead to a crash in the GUI for certain edge-case adjustment configurations

    v2.3.1

    January 23, 2019

    New features

    • The 'export' statement will no longer generate an error if asked to process an empty DSET while is set

    • Implemented user-definable timeout setting when retrieving data from HTTP sources

      When retrieving data from HTTP sources, the number of seconds to wait for a server response before timing out can be defined using .

    v2.3.0

    January 17, 2019

    New features

    • Added editable labels for report levels

    • Added the ability to mass delete services

    • Invoice report is now called Summary report

    • The 'finish' statement in Transcript can be made to cause the task to fail if the DSET it's given is empty

    Bug fixes

    • Fixed a bug where users couldn't update their own details (including their password)

    • Fixed a bug where only 10 datasets were displayed when creating a new report

    • Fixed a bug where the mail sender name was not persisted in the configuration

    v2.2.1

    December 13, 2018

    Bug fixes

    • A bug was fixed which could lead to an error in the invoice report when using a rate with a minimum commit set

    • Fixed an issue with minimum commit It was possible that when applying minimum commit to a service, that other services would be affected by that minimum commit. This has now been fixed.

    • Fixed an issue when retrieving NULL fields from an ODBC query When using ODBC to collect data, the presence of NULL values in the results could cause USE to crash. This has been fixed.

    v2.2.0

    November 30, 2018

    New features

    • Added the ability to view instance level details on the invoice reports.

    • Added the ability to customize the report exports (CSV format only) field delimiter and decimal separator. These settings are system-wide and available to administrators by navigating to Administration > Configuration > Formatting.

    • Added the ability for users to reset their own passwords. This requires the email address of users to be set and a working server configuration for sending emails. This can be configured in Administration > System > Environment.

    Bug fixes

    • The datasets selector visible when creating a new report definition is now alphabetically sorted.

    • Fixed a bug which caused the contents of the extractor editor not to update after updating variables. The contents of the extractor script itself were always saved after updating variables; only those changes were not visible in the editor.

    • Fixed a bug which caused the account depth selector to reset after performing an upgrade.

    v2.1.5

    November 22, 2018

    Bug fixes

    • Updated the documentation links in the header to point to our new documentation site.

    • Fixed grouping behaviour in the details table of the accounts report. In some cases, accounts could appear grouped under the wrong parent account in the 'Detailed' table in the accounts report.

    v2.1.4

    October 31, 2018

    Bug fixes

    • Fixed an issue with incorrect quantities sometimes showing on reports. Occasionally, when running a report for a range of dates, the quantities on one or more services differed from the quantity for that service shown when a report was run for a different date range (or just the day in question). This issue has now been fixed.

    v2.1.3

    October 26, 2018

    New features

    • The USE 'basename' statement can now write its results to a new variable. Previously, the 'basename' statement would always modify the value of the variable whose name was supplied as the argument. It can now also accept a literal string and create a new, or update an existing, variable to hold the result.

    • Archives in GZIP format can now be decompressed using USE. USE now supports the 'gunzip' statement which can be used to inflate GZIP'd data. Details of how to use this statement may be found at

    • Fixed an issue whereby when running a Transform script the Audit database would be locked for the duration of the task. Transcript now only opens the Audit database when it needs to, reducing the likelihood of encountering errors pertaining to a locked audit database in the logfile.

    Bug fixes

    • Fixed an issue whereby when creating a service, the audit indicated that the service creation failed. When a service definition is successfully created, Transcript will now correctly audit that event as opposed to indicating that the attempt failed.

    • Fixed an issue whereby over-writing services could result in database errors in the logfile. Sometimes when overwriting services, a constraint error would be logged in the logfile and the service would not have any rate associated with it. This has been fixed.

    v2.1.2

    October 18, 2018

    Bug fixes

    • Fixed an issue that could cause database corruption.

      Fixed an issue that could cause database corruption due to the Aeon database being held open for long periods of time.

    v2.1.1

    October 10, 2018

    New features

    • Added a live preview feature when working with transforms. A new feature has been added which can display a live preview of the transformer output. Note: this feature is currently in beta and will be further updated in the next release.

    • The code editor has been updated. The code editor for Extractor and Transformer scripts has been updated (it now uses the open source Monaco editor - ) resulting in a significant improvement over our previous editor. This greatly enhances the user experience when editing scripts in the GUI. Note: This change also lays the foundation for more advanced features going forwards.

    • Charges for monthly services now take quantity into consideration as well as price.

    Bug fixes

    • Removed COGS option for users without rights to view COGS information. In the services and instances report, users with no access to view COGS will no longer be able to select the COGS type in the details table. Note: This bug never allowed users without appropriate access rights to view the actual COGS data.

    • Fixed a bug where the list of datasets on the report definition page was only showing the first 10 results. This could result in an inability to create new reports using datasets that were not included in those results

    • A link to the instances report has been added to the search feature in the header.

    v2.0.6

    September 05, 2018

    New features

    • Upgraded the underlying API framework For more information, please refer to the .

    Bug fixes

    • Fixed an issue whereby 'append' could crash if one or other DSET was empty When executing the 'append' statement in a transformation script, if one or other of the DSETs involved in the operation was empty (having no data rows) then a crash could occur. This has now been fixed.

    • Fixed an issue where an expression that evaluated as FALSE could show the wrong line number in a log message The DEBUG level logfile entry indicating an expression is true or false would contain a reference to the wrong line number if the expression evaluated to false. This has now been fixed.

    • Fixed an issue whereby some comparisons would evaluate incorrectly in expressions Fixed an issue whereby in some cases where a value was quoted in an expression, the quotes would be considered part of the value itself.

    v2.0.5

    August 28, 2018

    New features

    • Ability to filter data in report using search query: The search bar in Accounts, Services and Instances reports now supports the use of operators (for example > and <) to filter your results based on column values or strings.

    • Add avg_unit_based_rate to report/run API endpoint Added the average per unit rate field to the report/run API endpoint and a placeholder for the average per interval rate which will be implemented later.

    Bug fixes

    • Fixed an issue where deleting services could lead to adjustments not displaying correctly

    • Rate column in report details tables now use the configured rate precision setting

    • Fixed an issue whereby scheduled tasks that output more than 4kb of data to the console could suspend execution and do nothing until they timed out

    v2.0.4

    August 22, 2018

    New features

    • Transcript can now normalise scientific decimal numbers to standard format: When processing data that contains numbers in scientific format (such as 2.1E-5) the normalise statement can now be used to convert these to standard decimal notation (0.000021 in the above case) using the form normalise column colName as standard, where colName is the column containing the values to convert. Any values already in decimal will be unchanged, except that any trailing zeros will be removed from them. Non-numeric values will be converted to 0.
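
      A minimal sketch using the form described above (the column name Quantity is illustrative):

        # Convert values such as 2.1E-5 in the Quantity column to standard notation (0.000021)
        normalise column Quantity as standard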

    • Support group and group_col

    Bug fixes

    • The replace statement in Transcript will no longer behave unexpectedly when given an empty string as the target to replace: When using replace to update substrings within the values in a column, if the target string (the text to replace) is empty then Transcript will generate a meaningful log entry explaining that it cannot be used to replace empty strings, and will no longer overwrite non-blank column values with multiple copies of the replacement text.

    • The export statement in Transcript now supports backslashes as path delimiters: When specifying a relative path for the export statement, Transcript will automatically create any directories that do not exist in that path. Previously there was a bug whereby the auto-creation of those directories would only work if UNIX-style forward slashes were used as delimiters in the path. This has now been fixed and Windows or UNIX style delimiters may be used when specifying an export path.

    v2.0.3

    August 17, 2018

    New features

    • USE scripts can now be forced to terminate with an error result Previously, the 'terminate' statement could be used to cancel script execution, but its use would always indicate that the script ran successfully. This may not be appropriate in all cases (for example if an error is detected by the script itself but ultimately cannot be resolved satisfactorily). The 'terminate' statement will still cause a script to exit with a success result by default, but may now be invoked as 'terminate with error' such that an error status is returned instead.
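
      A minimal sketch of the new form (the surrounding error-detection logic is omitted):

        # Stop the script and report an error status instead of success
        terminate with error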

    • Added more service attributes as optional columns in the reports details table. The following extra service attributes can now be enabled as columns in the report details table: interval, charge type, cogs type and proration.

    • Support 'group' and 'groupcol' as service parameters in Transcript In the 'service' and 'services' statements in Transcript, the parameters to define the service category are 'category' and 'category_col'. These parameters now have aliases of 'group' and 'group_col' respectively, for those who prefer to use that terminology.

    Bug fixes

    • Fixed an issue where an ODBC connection could cause a crash in USE When executing an ODBC-based collection in USE, under certain circumstances an incorrect direct connection string could cause a crash. This has been fixed. Additionally, when an ODBC error occurs the error written to the logfile contains more detail than in previous releases.

    • The order of workflow steps in the status tab now corresponds to the order of workflow steps in the configuration tab.

    • An issue has been fixed where old user preferences could conflict with updates in the GUI, leading to errors when loading the service and instance reports.

    v2.0.2

    August 03, 2018

    Bug fixes

    • The 'export' statement in Transcript now supports backslashes as path delimiters

      When specifying a relative path for the 'export' statement, Transcript will automatically create any directories that do not exist in that path. Previously there was a bug whereby the auto-creation of those directories would only work if UNIX-style forward slashes were used as delimiters in the path. This has now been fixed and Windows or UNIX style delimiters may be used when specifying an export path.
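
      A sketch under the assumption that the statement takes the documented export <dset> as <path> form (the DSET name and path are illustrative):

        # Missing directories in the relative path are created automatically,
        # whether backslashes or forward slashes are used as delimiters
        export azure.usage as processed\2018\usage.csv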

    • The 'replace' statement in Transcript will no longer behave unexpectedly when given an empty string as the target to replace

      When using 'replace' to update substrings within the values in a column, if the target string (the characters to replace) is empty then Transcript will generate a meaningful log entry explaining that it cannot be used to replace empty strings, and will no longer overwrite non-blank column values with multiple copies of the replacement text.

    v2.0.1

    July 25, 2018

    New features

    • Increased default timeout when retrieving data from HTTP servers

      Currently a USE script will fail if more than 3 minutes elapse without response when downloading data from an HTTP server. This has been increased to 5 minutes to cater for slow APIs.

    v2.0.0

    July 19, 2018

    New features

    • Transcript can now normalise scientific decimal numbers to standard format When processing data that contains numbers in scientific format (such as 2.1E-5) the 'normalise' statement can now be used to convert these to standard decimal notation (0.000021 in the above case) using the form 'normalise column colName as standard' where 'colName' is the column containing the values to convert. Any values already in decimal will be unchanged, except that any trailing zeros will be removed from them. Non-numeric values will be converted to 0.

    • When accessing the GUI via http visitors will be redirected to https automatically

    • Progress indicator in the report/run endpoint can be disabled To disable, set the progress

    Bug fixes

    • Fixed a Transcript crash when deleting a DSET Transcript will no longer crash in certain circumstances when deleting a DSET using the 'delete dset' statement.

    • The Transcript 'export' statement now creates a path automatically When exporting data from Transcript, if a relative path is specified as part of the export filename, Transcript will automatically create the path if it does not exist. The path will be created relative to /exported

    • Changed the behaviour of some columns in the report tables The optional per unit charges and per interval charges columns on the report pages represent a fraction of the total charge and as such should be considered a subtotal rather than a rate.

    Version 1

    v1.8.1

    June 06, 2018

    New features

    • Minimum commit is now supported in the charge engine When generating a report, the results for any services that have a minimum commit value (and for which the usage does not meet that minimum commit quantity) will be adjusted to reflect that minimum commit value.

    • Updates to internal service and rate schema The rate attributes min_commit and threshold are now implemented and the API will return a slightly different schema for the /v1/services endpoint (and related /v1/dump models) - the

    Bug fixes

    • Fixed an issue where the depth filter wouldn't reload after preparing a report

    • Improved the print/PDF layout of consolidated invoices

    • Fixed a bug where the summary in Instance reports would sometimes remain empty

    • The charge engine now correctly deletes un-needed RDFs The charge engine now includes a mechanism to 'unload' historical data. This is an internal mechanism which will be used by the GUI in a future release.

    v1.7.0

    May 03, 2018

    New features

    • Implemented search field in report details table Ability to filter and pin a selection using a search query in the Accounts, Services and Instances report details table

    • Quantity adjustments can now be applied to a customer Adjustments can now also be set to affect quantities instead of charges. Both relative and absolute quantity adjustments are supported.

    • Ability to show consumed quantity in a report

    Bug fixes

    • When encrypting a variable it could get corrupted

    • Transcript could previously crash when running for a large date range

    • Workflows status tab did not consistently show historical log files

    • Fix for Invoice report error "Depth can't be empty, 0 or greater than 5"

    v1.6.2

    April 13, 2018

    Bug fixes

    • Extractor arguments were not used correctly when running a USE script interactively from the GUI

    • Report timeline graph could previously show zero when there was consumption

    v1.6.1

    April 13, 2018

    New features

    • Add profile page where logged in users can change their own e-mail address and password.

    Bug fixes

    • Fixed issue where scheduling multiple steps could corrupt a Workflow. WARNING: as of this release it is required to re-create your Workflows from scratch, to avoid potential issues

    • Fix loading overlays to improve multitasking in GUI

    • Fixed an upgrade bug which caused creating report definitions to be broken

    April 8, 2018

    Notable new features

    • [] - Implement day and month name variables in USE

    • [] - Add option to enable/disable client certificate support in USE

    • [] - Scheduler is now called workflows

    v1.5.0

    March 26, 2018

    Notable new features

    • [] - Add support for SAML Single Sign-On

    • [] - Add ability in transcript aggregate to average the values from a column

    • [] - Add ability to base new extractor on templates from GitHub repository

    v1.4.1

    March 19, 2018

    Notable new features

    • [] - Fixed duplicate headings not always eliminated in filtered import in Transcript

    • [] - Fixed a Transcript crash on 'move rows' or 'delete' after a 'replace'.

    v1.4.0

    March 16, 2018

    Notable new features

    • [] - Added 'include' statement to Transcript

    • [] - Added UTC versions of time-related variables in USE

    • [] - The API can now render an invoice report as a native PDF document

    v1.3.1

    February 23, 2018

    Notable new features

    • [] - Fixed a corner case where USE can stop working when executed from Glass

    v1.3.0

    February 23, 2018

    Notable new features

    • [] - Add option to choose custom currency symbol

    • [] - Move report selector to sidebar

    • [] - Improved syntax highlighting for USE and Transcript in the Glass script editor

    A full changelog is available upon request.

    v1.2.0

    February 09, 2018

    Notable new features

    • [] - Ability to extract data from databases using ODBC connection

    • [] - Scheduler endpoints in API

    • [] - Ability to schedule the preparation of report definitions through the GUI

    A full changelog is available upon request.

    v1.1.1

    February 03, 2018

    Notable new features

    • [] - Fix for Cannot read property 'relationships' of undefined error when logging in as a user with limited account permissions.

    A full changelog is available upon request.

    v1.1.0

    February 02, 2018

    Notable new features

    • [] - Syntax highlighting for USE

    • [] - Add support for XML data extraction in USE

    • [] - Enable parallel processing in Eternity

    • [

    A full changelog is available upon request.

    v1.0.0

    January 12, 2018

    Initial release.

    🥂

  • Fixed an escaping error when importing a file in a Transformer The Transformer escape option did not escape a final quote (\") at the end of a field. This has now been fixed.

  • Fixed a Transformer issue with skip_corrupted_records When import option skip_corrupted_records was set, import could fail if last column in the record is missing closing quote. This has been resolved.

  • Added 'unsaved' warning in extractor/transformer editor A warning is displayed in the toolbar when editing an extractor or transformer and changes are not saved yet.
  • Added option to run workflow steps in parallel When adding new steps to a workflow, it is possible to uncheck the 'wait' toggle for any given step. This will then run the step simultaneously with the previous step. When the wait toggle remains checked, all previous steps will finish executing before the step is started.

  • Added ability to search parent account names In the Accounts report, it is now possible to search by parent account names in the 'detailed' table.

  • Added columns in reports data table Added account and service key as extra columns to the reports 'detailed' table. It is also possible to search for, and export, these values.

  • Updated Nginx and PHP Updated the Nginx webserver to version 1.17.8 and PHP to version 7.3.14

  • Custom escape character in extractors The escape statement now accepts an optional escape character to be used instead of the default backslash when escaping quotes in the value of a variable.

  • Implemented support for JWT web token authentication in extractors In order to support sources that require it, such as Google Cloud using OAuth 2.0, USE now supports the generation of a JSON Web Token. For more information please refer to https://docs.exivity.com/data-pipelines/extract/language/generate_jwt

  • Search by account / service key In the Accounts and Services overview screens, it is now possible to search by account/service key instead of the name.

  • View metadata in report data tables Metadata fields can now be selected as optional columns in the Accounts and Services reports' 'detailed' tables. They also appear in search and CSV export.

  • Added support for Firefox browser

  • Improved detection of web-app URL This is especially useful in features such as SAML authentication and the sending of notifications (e.g. e-mail). The app URL is auto-detected and can be modified in the Settings screen (System tab).

  • New permissions added Two new permissions have been added to usergroup settings: Manage metadata definitions and Manage datasets. If a usergroup is set to have all permissions, these new ones will automatically be granted to members of that group.

  • Added metadata for services Metadata can now be added to all services just like with accounts. Define a metadata definition first, then attach the definition to a dataset in Data pipelines > Datasets. All services in this dataset will now use this set of metadata fields. Metadata information itself can be added and modified in the Services > Overview and is available in the Services reports.

  • Breakdown information available for monthly services For monthly services, breakdown information is now available in the Instances report 'detailed' table. To view this, make the 'Usage' column visible by clicking the overflow menu > Columns > Usage. The daily usage breakdown for monthly services is available in a pop-up screen.

  • Added account breadcrumbs in Account report legend

  • Datasets can now be managed from a dedicated screen Navigate to Data pipelines > Datasets to delete individual days from the set, and to assign a metadata definition to all services in a dataset. We've also made it easier to delete multiple days at once.

  • Fixed a crash on the adjustments screen When no reports are available in the system, the adjustments screen could crash. This is now fixed.
  • Check for key column presence before correlation The correlation function in Transformers now checks for the existence of the key column in the default DSET. Previously, if the column was missing this resulted in an SQL error being logged. In such cases a clearer error message is now generated.

  • Improved error reporting when importing non-matched CSVs in a single operation When importing multiple CSV files into a Transformer using a regex to match the filenames, if one had different columns from the others then an internal error was reported. This is now correctly reported as a normal error as opposed to an internal one.

  • Sub-directories in the lookup folder are no longer displayed in the Lookup editor

  • Fixed an issue with services using an average charge model For monthly services with an average charge model, if there was usage on the last day of the month then the quantity on that last day might not be factored into the average calculation. This issue has now been fixed.

    • Please note: It is recommended that the reports are reprepared where possible to ensure accurate historical reporting.

  • Fixed an issue with links on the Accounts overview page Fixed a minor issue where a link to Data pipelines > Report could sometimes be shown even if a user didn't have the appropriate permissions.

  • Fixed an issue where no loading indicator was shown on the login screen.

  • Fixed an issue with overwriting services

    Fixed an issue which would cause errors to appear in the log (and the service not to be updated) when overwriting a service with an updated version of itself.

  • SQLite version updated SQLite updated to version 3.30.1 to match Edify

  • Added HTTP redirect limit option in USE Added a new option http_redirect_count that allows limiting the number of HTTP redirects to follow, or disabling redirects entirely (the default), when building an Extractor
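
    For illustration only, a minimal Extractor sketch; the option name comes from this entry, but the exact statement form is an assumption based on existing HTTP options such as http_timeout:

      # assumed syntax, to be verified against the USE documentation
      set http_redirect_count 5    # follow at most 5 redirects
      set http_redirect_count 0    # do not follow redirects (the default behaviour)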

  • Added function in Transcript to capture parts of a string value

    When building a Transformer, it is now possible to obtain values from a cell by using the @EXTRACT_BEFORE and @EXTRACT_AFTER functions. For more information visit https://docs.exivity.com/data-pipelines/transform/language/if#extract_before
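
    As a purely illustrative sketch: the function names come from this entry, but the enclosing var assignments, argument order and variable names are assumptions, so check the linked page for the authoritative syntax:

      # hypothetical: take the parts of a value before and after the first "/"
      var id_prefix = (@EXTRACT_BEFORE("${resource_id}", "/"))
      var id_suffix = (@EXTRACT_AFTER("${resource_id}", "/"))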

  • Fixed an edge-case where the GUI could crash when fetching reports.

  • Fixed a bug where the GUI could crash in older browsers in the transformer screen.

  • Fixed a bug where a license warning was shown even when there were no problems with the license.

  • Added regex validation for metadata string fields. When adding an invalid regex, the user will now get an appropriate message when saving metadata definitions.

  • Fixed a bug which could prevent workflows from being updated. In some cases, when a user visited the Status tab in the workflows screen, it would prevent workflows from being updated (including adding/removing schedules and steps). This has been fixed.

  • Corrected an off-by-one problem in the Transformer preview Previously, the execution of the transformer script would stop at the currently selected line in the editor, while the interface suggested that execution would stop after the currently selected line. This has now been corrected so that execution will include the currently selected line.

  • Fixed a bug where the list of columns was not updating when creating a new report.

  • Fixed an issue where Extractor arguments were truncated when 0 was one of the arguments.

  • Fixed an issue where the Run Now button on the workflow screen could be greyed out in some cases.

  • The service worker script in the front-end no longer relies on a third-party content delivery network (CDN).

  • Fixed an issue where API requests were not forwarded correctly on hosts with a web proxy setup. This was particularly a problem with requests to the API not invoked from within the GUI, e.g. when loading SAML2 endpoints on the API.

  • Solved a security issue (internal reference: EXVT-2812)

  • Fixed an issue whereby DST changes could affect certain operations

    Exivity makes frequent use of date ranges for operations such as preparing and running reports. We have identified, and fixed, an issue whereby sometimes when a DST change resulted in the clocks going back an hour, the day in question would be treated as two separate days.

  • Ability to change the Unit label of a Service in the GUI It is now possible to manually change the Unit label of a Service in the Glass user interface
  • Improved Proximity error responses If any of the backend components returns a valid JSON error response, this will now be shown to the user.

  • API handling of requests when including relationships When Proximity handled a request to include relationships, it did not return a proper result for a relation that doesn't exist. This has been fixed.

  • Removed empty arrays from the budget results Proximity now filters out empty arrays from the Horizon budget output

  • Added the ability to create and view budgets It is now possible to create and report on multi-level budgets. More information on this feature can be found at https://docs.exivity.com/accounts/budgets

  • Added the ability to specify a specific dataset while previewing the output from a transformer

  • Updating relationships through the API has been improved and is now more efficient in some edge-case scenarios

  • Support for option services = update When populating services in Transcript, it is now possible to update the Unit label and Service description
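
    A minimal sketch of where this option might sit in a Transformer task; the option value comes from this entry, while its placement relative to the services block is an assumption:

      # assumed placement: set the option before populating services
      option services = update
      # ...followed by the usual service/services statements (details omitted)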

  • Resolved an issue with the Summary Report title When changing the Summary report title in the administration menu, changes were not reflected on the report. This has now been resolved.

  • Transformer margins and alignment Corrected visual margins and alignment in Transformer menu

  • Resolved 3 cosmetic issues for the Transformer Previewer Preview could go off-screen when moving the divider bar. The vertical scrollbar is now always visible. When not in full-screen, prevent having a vertical scrollbar in the editor as well as in the browser.

  • Removed the forward slash prefix from copied lookup file path

  • Fixed an issue where the summary report could show null when no header/footer were configured

  • Fixed an issue when importing CSV or Excel files in lookups

  • System wide date format configuration is now used for all summary report dates

  • Fixed an issue when removing a single service using bulk edit

  • Fixed a usability issue when moving back or forth multiple steps at once in the reports date range selector

  • Fixed an issue where Minimum commit line item was displayed instead of the user-configurable text

  • Added support for SAML2 when the API is behind a proxy server

  • Improved error reporting in the API for input validation errors

  • Proration is now applied correctly Solved an issue where monthly prorated service rates were applied incorrectly

  • Rate revision update issue Transcript will now create a new rate revision if needed, when using the services block in read-only mode

  • Ability to disable SAML2 user creation Enabled an option to not automatically create new users in SAML2 configurations
  • Extractor/Transformer editor will wrap long lines The editor will now wrap very long lines in order to make them more readable.

  • Added Budget Viewer API Endpoint Added the ability in the Proximity API to call the budget viewer.

  • Ability to search for Service and Category Added Service Category as a searchable field in the Services report. Also added Service as a searchable field in the Instances report.

  • Added API support for Global Variables The Proximity API now has CRUD support for Global Variables. Support in the Glass GUI, Extractors and Transformers will be added in a future release.

  • Solved an issue with component encoding The USE component-encode statement previously did not encode numeric values. This has now been resolved.

  • Added the ability to manually create services A user is now able to manually create one or multiple services in the Service Catalogue

  • If the logfile is not writable, terminate the Transcript task Transcript now fails if the log file cannot be written, rather than falling back to using stderr

  • Adjustments are now included in service group subtotals Adjustments were previously excluded from service group subtotals. This behavior has now changed, so that adjustments are now included in both subtotal and total costs in the cost summary report

  • Backend support for previewing a custom DSET In Transcript preview mode, a user can select a non-default DSET to preview. GUI support will be added in a future release

  • Added a warning when using Etc timezones When a timezone in a Workflow is set to Etc, the user will receive a warning to make them aware of the Etc timezone behavior. To learn more, please consult the following article: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List

  • Added new table to database for Global Variables and Environments

  • Improved error diagnostics in the Budget engine in case of a missing filter Horizon was producing an incorrect error message when a budget item referenced a non-existent filter. The error message has been fixed to report the specific problem

  • Improved error handling when creating an RDF where 2 or more columns have the same name with different cases Transcript will now analyse the column headings before creating an RDF and generates a meaningful log message when 2 or more column names would conflict before writing the daily RDF

  • Transcript fails if service key exceeds allowed size When adding services, if a service key is longer than 127 characters, Transcript fails with a descriptive error in the log file

  • Added an automatic process to clean up excess workflow logs

  • The level parameter on the accounts API endpoint is now an integer instead of a string

  • Improved API error messages for validation errors

  • Added valid default value for service.charge_model

    Newly added field charge_model contains NULL values for existing services. These are now migrated to peak (1) as that is the default service charging behavior.

  • Upgraded nginx to version 1.17.3

    Upgraded the nginx web server from version 1.17.1 to version 1.17.3

  • Extractor scripts can now use the HTTP PATCH method The 'http' statement now supports the HTTP PATCH method

  • Don't resolve variables in a skipped 'else' branch Transcript no longer resolves variables in an 'else' branch when the 'else' is not executed. This removes 'unknown variable' errors in situations where variables are declared and used only inside the 'else' branch.

  • Extractor scripts now have separate automatic variables for diagnosing HTTP related issues When connecting to a server over HTTP, the HTTP_STATUS_CODE variable will now always contain a numeric value. Any textual supplementary information pertaining to that value can be found in a new variable called HTTP_STATUS_TEXT. In the event that a timeout occurs and no HTTP response is received, the HTTP_STATUS_CODE variable will contain the value -1.
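
    For illustration, a hedged Extractor fragment using these variables; the buffer/http/if/print statements around them are assumed:

      # hypothetical request; replace the URL with a real endpoint
      buffer response = http GET "https://example.com/api/usage"
      if (${HTTP_STATUS_CODE} == -1) {
          print No HTTP response received (timeout)
          terminate
      }
      if (${HTTP_STATUS_CODE} != 200) {
          print Request failed: ${HTTP_STATUS_CODE} ${HTTP_STATUS_TEXT}
          terminate
      }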

  • Fixed an issue whereby a statement in Transcript could create a new revision which was a duplicate of the existing revision. This has now been fixed.

  • When logging out, users are redirected to the login screen automatically Instead of showing an intermediate screen requiring a user to first click on a link to log in again.

  • Streamlined the usergroup permissions to align with the new navigation structure

  • Created new automatic variable UNIX_UTC in USE A new automatic variable is now available in extractor scripts. This is called UNIX_UTC and will return the current UTC time as a UNIX timestamp value.
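
    For example (a trivial sketch; the print statement and variable expansion syntax are assumed):

      # hypothetical: record when this extraction run started, as a UNIX timestamp
      print Extraction started at ${UNIX_UTC}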

  • Implemented a new 'average' monthly charging model Monthly services may now be created which are charged based on the average quantity used throughout the month.

  • Deprecated the current budget feature in favour of a new implementation The new budget implementation will be released in Q2-2019.

  • Added an option to Transcript to skip invalid records during import It is now possible to skip invalid records during the import phase of a Transformer.

  • Improved ordering of options in the side menu The menu on the left hand side of the interface has been overhauled in order to group the options in a more logical manner.

  • Now only privileged users will see an alert message when debugging is enabled

  • Manual account administration The Exivity back-end now supports manual creation, editing and deletion of accounts. GUI support for this feature will be included in a future release

  • Added lookup editor for ad-hoc data sources Lookup data sources can be used for various types of data sources not obtainable through automated extractors. Edit lookups by navigating to Data pipelines > Lookups. Read more at docs.exivity.com.

  • Added the option to modify or translate certain labels displayed by the Exivity interface. Find the new options by navigating to Administration > Settings > Translations.

  • Fixed a small issue which prevented empty metadata values from being stored
  • Do not send notifications for internal workflows Changed the behavior of the Workflow engine to only send out notifications for user-created workflows, and not for internal/garbage collector jobs

  • Updating Workflows during Workflow execution Fixed an issue where none of the workflows could be changed while a Workflow was running

  • Fixed a bug which prevented certain pages from showing correctly when no reports are defined

  • Fixed an issue whereby retrieving audit information from the API could fail

  • Fixed an issue with workflows containing 12 steps or more Fixed an issue where workflows with 12 or more steps could cause an API error.

  • Fixed an issue with newlines in scheduler log entries On occasion, it was possible that log entries generated by the scheduler would contain newline characters which could cause problems with the log viewer. This has now been fixed (log entries should no longer contain newlines).

  • Fixed an issue whereby credentials in a connection string could appear in the logfile When using a direct connection string with credentials to collect from an external database, under some circumstances an error message could contain a copy of the connection string. Log entries containing connection strings should no longer contain credentials.

  • Workflow management menu option is hidden from users without permission to manage workflows When logged in as a user who does not have permission to manage workflows, the GUI will hide workflow management options. Previously, users without the rights to do so were unable to perform any actual workflow management even though the option was displayed.

  • Fixed an issue that could cause workflows to stop working Fixed an issue whereby on rare occasions, the scheduler could leave its configuration database in a locked state, leading to problems running workflows.

  • Implemented support for decompressing ZIP files that expand to more than 2Gb

  • Fixed nested foreach loops Fixed an issue where a nested foreach loop could cause problems with some XML parslets

  • Fixed an issue whereby the data status associated with a report would sometimes not be shown When accessing a report definition, sometimes the data status (the list of dates and the status of the data associated with each date) would not appear until the report definition was accessed a second time. This has now been fixed.

  • Fixed a bug which sometimes made it impossible to go into full-screen mode in the GUI

  • Fixed a bug which caused a warning to be incorrectly displayed when users tried to change their password

  • Fixed an issue whereby on rare occasions monthly services could have adjustments applied more than once Fixed an issue whereby if a monthly service had new instances appear on a date in the month after the date that the first instance of that service was seen, and if adjustments were applied to that service + instance combination then it was sometimes possible for the adjustments to be applied more than once.

  • Fixed an issue which could cause a GUI crash after deleting a service After deleting a service, it was possible that the GUI would display an error when subsequently viewing service rates. This has now been fixed.

  • Fixed a rare issue where importing into a transformation script could cause a crash When importing files using a wildcard, Transcript would crash if one of the files to import was empty and 'option embed' was enabled. This has now been fixed.

  • Previously, when creating an RDF, the finish statement would perform no action if the DSET to create the RDF from was empty. This is still the case if 'option mode = permissive' is in force, but if 'option mode = strict' is set then the finish statement will now generate an error. The error will cause the task to fail for the current day; if 'option continue' is not enabled the task will be terminated, otherwise the task will move on to the next date in the range of days being processed.
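
    A minimal sketch of how these options interact; the option names come from this entry, while their relative placement is an assumption:

      # assumed: with strict mode, an empty DSET now makes finish raise an error
      option mode = strict
      # assumed: with continue enabled, the task then moves on to the next date
      # in the range instead of terminating
      option continue
      finish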

  • When importing multiple files using the pattern option to import, any invalid files will be skipped When multiple files matching a pattern are imported, any of those files that are malformed or otherwise non-importable will be skipped.

  • Added the ability to exit a subscript invoked via #include in a transform script The 'return' statement, when used in a Transform script, will now cause script execution to resume from the statement following the #include statement in a parent script that referenced the script containing the 'return' statement.
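
    A hedged illustration using two hypothetical scripts; the file names and the variables are invented, only #include and return come from this entry:

      # parent script (hypothetical)
      #include common_checks
      var resumed_here = yes      # execution continues here after 'return'

      # common_checks (hypothetical included script)
      if (${skip_day} == 1) {
          return                  # hand control back to the parent script
      }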

  • Increased the performance of the correlate statement Correlation should now be significantly faster than it was previously

  • Increased the performance of the aggregate statement

    Aggregation should now be significantly faster than it was previously

  • Fixed an issue when importing CSV files containing quotes

    When importing a CSV file with two successive quote characters at the end of a field, Transcript would reject the file as invalid. This has now been fixed.

  • Fixed an issue where deleting data led to GUI crashes on occasion When deleting data (RDFs) associated with a report, it could be that if one or more days had previously been overwritten, a stale database entry would cause issues after the RDFs were deleted. This has now been fixed.

  • Fixed a bug whereby using terminate within the body of an if statement in a Transform script could cause an error Invoking terminate in an if block when running a transform script against a range of dates could cause the error The maximum number of nested blocks (32) is already in use. This has now been fixed.

  • Logfiles generated from workflow tasks now include a timestamp. This prevents logfiles from consecutive runs of the same task from being overwritten.

  • Workflow status now automatically refreshes after a manual run.

  • Added a new Environment tab in Administration > System. In this tab, information about the system the Exivity instance is running on can be filled out. In the future this will be expanded to include more configuration options.

  • Invoice reports now include minimum commit uplifts as separate entries.

  • Carriage-returns and line-feeds in data extracted using ODBC are now replaced with spaces. When extracting data with USE, the presence of newlines in the data could cause corrupt CSV output. Carriage Return and Line Feed characters in data extracted from ODBC are therefore now replaced with spaces.

  • Enhanced expression support in the Extractor component. Conditional expressions have been enhanced in the Extractor component such that more complex conditions can be evaluated and additional operations can be performed. Additionally, it is now possible to set a variable value using an expression.

  • Services can now be manually deleted using the GUI.

  • Fixed a bug which could cause the interface to become unresponsive after preparing a report.
  • Fixed an issue when running Transform scripts for days with 25 hours in them. When running a Transform script with a data-date representing a day where the clocks were adjusted such that the day had 25 hours in it, the script would be executed a second time automatically once the first had completed. This could lead to unexpected errors and log entries on occasion, and has now been fixed.

  • When writing to CSV files in USE, embedded CR/LF characters are converted to spaces. USE will now automatically strip out embedded carriage-return and line-feed characters when writing data to CSV files. Each unique occurrence of one or more sequential CR/LF characters will be replaced with a single space.

  • Fixed a bug whereby the body of an 'if' statement in the Transformer could terminate prematurely. In some cases, using an 'import' statement with an 'options' block within the body of an 'if' statement could cause statements following the 'import' to be skipped. This has now been fixed.

  • A new system variable is now available containing the number of days in the current month. The existing dataMonth variable, which contains the yyyyMM of the current month is now supplemented with a new variable called dataMonthDays which contains the number of days in that month.
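
    For example (a sketch; the var statement and the example values are assumptions):

      # hypothetical: for a data date in March 2019, dataMonth expands to 201903
      # and dataMonthDays expands to 31
      var days_in_month = ${dataMonthDays}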

  • Changed default service type to 'automatic' in the 'services' statement in Transcript. When creating services, if no 'type' parameter is provided then the default service type will now be set to 'automatic'.

  • If two or more days in a month have the same highest price then the one with the highest quantity will be reported. Previously, the first seen was reported which could lead to discrepancies between the reported quantity and price on the report.
  • When running reports blank instance values are now displayed as a hyphen. When running reports against data with blank instance values in the usage data, the instance value will now be represented as a hyphen, which improves the aesthetics of the report.

  • Added hardware information to Transcript log-files. Log-files created by Transcript now contain information about the CPU and RAM at the top of the log.

  • Increased auditing information in Transcript. Events relating to service, rate and RDF changes are now audited

  • The service interval column in the instances report now contains data. Previously this column was always blank

  • Fixed a bug where searching for units within a services report led to a GUI crash.

  • Fixed an issue whereby very rarely a charge would not be included in reports. On very rare occasions, information in a record in the prepared report caches was not included in the output when a report was run. This has now been fixed.

  • Fixed an issue that could cause Aeon database corruption. Fixed an issue that could cause database corruption (and workflows to fail) due to the Aeon database being held open for long periods of time.

  • Fixed an issue whereby re-using an existing named buffer in USE for ODBC purposes could lead to unexpected results. Fixed an issue in USE whereby if an existing named buffer was re-used to store data retrieved from ODBC then a new buffer could have been created with the same name as the existing buffer, and attempts to reference it would return the old data.

  • Fixed an issue when executing ODBC queries that return no data. Using the ODBC capability to execute a query that returns no data will no longer cause an extractor to return an error.

  • Fixed a condition where reports were not showing for non-admins

  • Quantity metric is available again for the timeline chart on the services and instances reports

  • Reports in the navigation menu dropdown are now alphabetically ordered

  • Added group and group_col as service parameters in Transcript: In the service and services statements in Transcript, the parameters to define the service category are category and category_col. These parameters now have aliases of group and group_col respectively, for those who prefer to use that terminology.
  • Fixed a bug in the scheduler that could cause schedules to fail: In some cases schedules could fail for no obvious reason. This has now been fixed.

  • Reduced the chance of a 'database is locked' warning when preparing reports When preparing reports, on occasion it is possible for a warning to appear in the logfile pertaining to the global database being locked. When this warning happened, it could cause some days in the reporting period to remain unprepared. A known specific cause of this issue has been fixed, significantly reducing the likelihood of it happening.

  • An issue has been fixed where certain characters in a workflow status could lead to errors in the API. Sometimes, when running a scheduled task, the output written to the database contains non-printable characters. The API now re-encodes those characters, which means the GUI will now correctly show the status for those workflows.

  • When selecting a reporting period that spans multiple months, the charts will now only show a single label for each month.

  • Fixed a USE crash bug with certain combinations of conditional expressions Fixed an issue whereby if an expression with more than 2 parameters was followed later in the script by an expression with fewer parameters than the first, a crash would occur.

  • Fixed issue where an extractor could crash when using a parslet after formatting some JSON A bug has been fixed whereby if the 'json format' statement was used to prettify some JSON in a named buffer, use of a parslet to extract data from the JSON could cause a crash.

  • Fixed an issue where sometimes an XML parslet would cause an 'out of memory' error in USE When using an XML parslet, it was possible that an 'out of memory' error would be returned in the logfile and the script would fail, even on small input files. This has now been fixed.

  • Filter selectors show which items are present in the current report The service category selector in the services and instances report, and service selector in the instances report will show items not available in the current report grayed out.

  • On-demand workflow execution Workflows can now be executed on demand. Also the schedule for a Workflow can be disabled.

  • Single workflows can now have multiple schedules

  • Workflows can now be scheduled in a specific timezone

  • Added the average rate column to the reports details table

  • Added the ability to show various totals in reports summary widget Summary widget now has the option to show all totals (previous behaviour) or only the totals for the current search results, or for the current pinned items.

  • Added report shortcuts to the dashboard

  • COGS, fixed COGS and fixed prices are now evaluated per instance when preparing reports Previously, if a service was created that used any of fixed_price_col, cogs_col or fixed_cogs_col to indicate that the rate in question should be obtained from the usage data for any given day, then the charge engine would use a single value from the specified column(s) and apply that to all instances of the services for the day. Now, each row of usage is individually consulted when preparing reports such that the specific value on that row is used (as is already the case when using 'rate_col' for pass-through rates)

  • When extracting XML or JSON values, parse errors no longer cause the USE script to terminate Previously, when using a static parslet to extract XML or JSON values from the contents of a named buffer, if the buffer contained invalid JSON or XML then the USE script failed with an error in the log saying that the contents of the buffer could not be parsed. Now, if a named buffer contains data that is not valid JSON or XML, any attempt to extract a value from it using a static parslet will yield the value EXIVITY_INVALID_XML or EXIVITY_INVALID_JSON.
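
    A hedged sketch of guarding against this; the parslet notation and surrounding statements are assumptions, only the EXIVITY_INVALID_JSON value comes from this entry:

      # hypothetical: extract a value from a named buffer and check it parsed
      var status = $JSON{response}.[status]
      if (${status} == EXIVITY_INVALID_JSON) {
          print Buffer "response" did not contain valid JSON
          terminate
      }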

  • Improved performance and lowered memory requirements when running a report Previously, in some circumstances running a report could take longer than expected and consume large amounts of memory in the process. The performance and memory use of the report engine have both been improved.

  • Reduced memory and increased performance when preparing reports Previously it was possible for some installations to use large amounts of memory and exhibit unreasonably slow performance when preparing reports. Preparing reports is not intended to be a realtime feature and will always incur some time overhead, but this time should now be significantly reduced in many cases, and the memory required to complete the process will be much less.

  • New output format for the /report/run endpoint in the API Due to changes to the charge engine, the output format of the /report/run endpoint in the API has changed. An up-to-date overview of the attributes returned by this endpoint can be found at our API documentation.

  • Free formatted ODBC connect strings are now supported in USE This exposes all ODBC driver options to the user, and avoids the requirement of manually creating a DSN at the operating system level.

  • The 'split' statement now supports discarding unwanted result columns When using the 'split' statement it is now possible to discard all but a selected range of the resulting new columns.

  • Allow users to see anonymous roll-up accounts even if they have no access When a user only has access to some children of a parent account, reports will now show the combined usage of those accounts grouped as an unknown account in the reports.

  • Fixed a rare bug where incorrect character encoding in the data source could lead to reports not loading

  • Currency symbol is no longer shown for quantity graphs

  • An issue has been fixed which could lead to empty reports when there actually was report data In some cases, selecting certain combination of filters could lead to reports showing No data while there actually was report data for the current set of filters. This behaviour was observed mainly on the instances report page.

  • Usernames are now allowed to contain special characters As a side effect of changing usernames to be case-insensitive, using special characters was no longer permitted since v1.8.1. This restriction is now removed.

  • Changed the behaviour of clearing the charge engine caches Clearing the charge engine (Edify) caches unprepares all reports. The button on the About page now reflects this.

  • It is now possible to use decimal values for adjustment amounts Previously this was only possible through the API. The GUI has been updated to also support this.

  • Changing the date in the invoice report no longer resets the account selection Previously, when changing the date range on the invoice report screen, the current account selection (dropdown inside the invoice page) would automatically select the first account in the list. This has now been fixed to remember the selection when changing the date.

  • Exivity now works correctly when installed in a directory containing spaces

  • Fixed an issue where Transcript variables were not properly expanded when used in an import filter

  • Export of consolidated invoice now contains data for all accounts Previously, selecting the CSV or Excel export of a consolidated invoice would only export data for the first account on the invoice.

  • Fixed crash bug in the 'services' statement When creating services, Transcript will no longer crash if a blank interval or model value is encountered while building the service definitions.

  • The rate_type attribute is now called charge_type. More information can be found in the API documentation.
  • The charge engine now includes information about proration adjustments When applying proration to a monthly service, the charge engine will now include information in the raw report data which shows the amount that the unprorated charge was reduced by. This information will be used by the GUI in a future release.

  • Proration is now applied to monthly services where applicable Report results for monthly services that are flagged as being prorated will now reflect a percentage of the monthly charge, based on the number of days in the month that the service is used.

  • GUI preferences are now saved for each user For example, selected reports, date ranges and filters are now persisted for each user, so they can be restored after logging out and in again.

  • The charge engine can now execute a script passed to it via standard input The charge engine can now execute a reportfile passed to it via standard input. This internal change results in fewer temporary files on disk during normal use.

  • Error reporting can now be disabled in configuration

  • Transcript can now import usage data from existing RDFs The 'import' statement in Transcript can now retrieve the raw usage data from an existing RDF file.

  • Usernames are no longer case-sensitive when logging in

  • Transformers now always run with loglevel = warn when triggered in workflows

  • Service and service category filters now only show items actually in the visible report

  • USE will now trap more HTTP errors When enacting some HTTP operations, if an error such as a timeout or invalid host is encountered, USE will now return an error in the HTTP_STATUS_CODE variable instead of automatically terminating the script.

  • Added daily usage information for monthly services in the charge engine When generating a report, the charge engine will now include information about the usage quantity for each day in the charge interval. This information will be used by the GUI in a future release.

  • Drilldown functionality is now available from the legend in reports

  • Reference account information in ETL Account information can now be imported directly during the data transformation step, such that existing account data can be used to enrich the data being processed.

  • Increased HTTP client timeout USE will now wait for three minutes by default before deciding that the connection has timed out if no data is received after the initial connection to a server has been made.

  • Improved the syntax for options to the 'import' statement in Transcript The options supported by the 'import' statement must now be formatted such that there is a single option per line of script. This removes the previous requirement to quote the list of column names when using 'select' and 'ignore', as well as the requirement to quote the expression used by the 'filter' option.
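
    A hedged sketch of the one-option-per-line layout; the file name, column names and the options block form are invented for illustration:

      import system/extracted/demo/usage_${dataDate}.csv source demo alias usage options {
          select ResourceId Quantity UnitPrice
          filter ([Quantity] != 0)
      }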

  • Added a system variable to return the last day of any given month A new system variable has been implemented which will return the last day in any given calendar month.

  • The 'correlate' transform now supports a default DSET for column names The 'correlate' statement always uses the default DSET as the destination for correlated columns but now supports an 'assume' parameter which determines the default DSET within which to locate non-fully-qualified source columns.

  • Added a button to the detailed widget in reports to toggle search field

  • Added an option to configuration to add a custom Google Analytics property

  • The charge engine can now be used to identify unused service definitions

    The charge engine now supports the ability to retrieve a list of services which are not used by any existing reports.

  • Services for users with limited access to accounts are now filtered

  • When creating services, an instance_col parameter is now required Previously it was possible to create services with no instance_col specified, which would result in missing data in reports. Transcript now requires that an instance_col parameter is provided to the 'service' and 'services' statements.

  • Consolidated invoices can now be exported to PDF

  • Fixed an issue where in some circumstances the reports wouldn't load

  • It is now possible to view budget audit trails

  • The 'import' statement in Transcript now correctly imports usage data in all forms of the statement Fixed a bug whereby when using automatic source and alias tagging, the 'import' statement would not permit the importing of usage data from an existing RDF

  • Improved readability of text when a light background colour is chosen

  • USE will no longer reject some valid expressions In some cases, a valid expression in a script was rejected as having an unbalanced number of brackets. This has now been fixed.

  • The charge engine can now delete services associated with DSETs that are unused by any reports Fixed a bug where the charge engine would not correctly delete services if there were no RDF files for the DSET that the service is associated with.

  • The reset pins button has been moved to the top of the detailed widget in reports

  • Ability in Transcript to convert number bases The following is now possible in a Transcript task: convert colName from dec|hex to dec|hex
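
    For example, using the syntax quoted above (the column name is invented):

      # convert a column of hexadecimal values to their decimal representation
      convert mac_segment from hex to dec
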
  • It is now possible in the Invoice cost report to consolidate all child accounts on a single page

  • Added option to create workflow step which purges Proximity cache.

  • Beta version of budget manager & viewer is now available.

  • Ability to specify to and from dates for transformers in workflows.

    v1.6.0

  • When using COLNAME_NOT_EXISTS in a filter, it always evaluates to 'TRUE'

  • [EXVT-1024] - Internal Error when applying filters in a 'where' statement if import options are used

  • [EXVT-396] - Garbage collector

  • [EXVT-744] - Glass config file for default port/host

  • [EXVT-749] - Stacked bar chart option in accounts/services report, including optimized legend

  • [EXVT-805] - Extend `Run` tab in Transformer with `from` and `to` date

  • [EXVT-966] - Added Instance reports

  • [EXVT-985] - Make toggle so reports can go fullscreen

  • [EXVT-986] - Connect graphs + legend

  • [EXVT-988] - All lists in the front-end are now sorted alphabetically

  • [EXVT-993] - Ability to pin report items

  • [EXVT-958] - Enhance the 'hash' statement in USE to support base-64 encoding of the result
  • [EXVT-780] - Fixed manually editing the value of an encrypted variable in USE can cause a crash

  • [EXVT-924] - Fixed Eternity hourly schedule does not consider start date

  • [EXVT-946] - Fixed OSI_TIME_UTC variable is missing a trailing Z

  • [EXVT-947] - Fixed some accounts show slight discrepancies when comparing to Excel calculation

  • [EXVT-949] - Fixed radio buttons don't update when changing adjustments in Glass

  • Ability to change the service description via the GUI

  • [EXVT-779] - Made updating of Extractor variables more robust, and added support for encrypted variables in the GUI

  • [EXVT-810] - Added the ability to use wildcard in import statement in USE

  • [EXVT-823] - Added a daterange wrapper for Transcript

  • [EXVT-868] - Add additional checks to global conditions in Transcript

  • [EXVT-827] - Implement import filters in Transcript

  • [EXVT-881] - Add escaping option to import statement in Transcript

  • [EXVT-476] - Added scheduler interface

  • [EXVT-798] - Add report depth breadcrumbs to reports

  • [EXVT-832] - Extractor log is now shown when running on-demand through GUI

  • [EXVT-842] - Fixed an issue which caused small discrepancies when using different reporting definitions

  • Enhanced conditional execution in Transcript with support for regex matching

  • Create USE script for reading AWS S3 bucket

  • [EXVT-717] - Extractor and Transformer execution must show last 25 lines of corresponding log file

  • [EXVT-739] - Perform cross-browser test and add warning in unsupported browsers.

  • [EXVT-770] - Improve orbit performance when syncing large amounts of records

  • [EXVT-777] - Select single days in datepicker

  • [EXVT-783] - Support in Eternity for hourly and monthly schedules
