timecolumns


Overview

The timecolumns statement is used to set the start time and end time columns in a DSET.

Syntax

timecolumns start_time_col end_time_col

timecolumns clear

Details

The usage data stored in a DSET may or may not be time sensitive. By default it is not, and every record is treated as representing usage for the entire day. In many cases, however, the usage data contains start and end times for each record, which define the exact period within the day for which the record is valid.

If the usage data contains start and end times that are required for functions such as aggregation or reporting, the columns containing those times need to be marked so that they can be identified later in the processing pipeline. This marking is done using the timecolumns statement.

The timecolumns statement does not perform any validation of the values in either of the columns it flags. This is by design, as the values in those columns may be updated by subsequent statements.

The values in the columns will be validated by the finish statement.
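
A minimal sketch of that ordering, using the same names as the example below (it assumes timestamp can re-derive an already-existing column, and correctedStartTime is a hypothetical source column holding corrected values):

# Create the timestamp columns and flag them; timecolumns itself
# performs no validation of their contents
timestamp START_TIME using usageStartTime template YYYY.MM.DD.hh.mm.ss
timestamp END_TIME using usageEndTime template YYYY.MM.DD.hh.mm.ss
timecolumns START_TIME END_TIME

# The flagged columns may still be updated by subsequent statements,
# for example by re-deriving START_TIME from a corrected source column
timestamp START_TIME using correctedStartTime template YYYY.MM.DD.hh.mm.ss

# The values in the flagged columns are only validated when finish runs
finish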

If the timecolumns statement is executed more than once, then only the columns named by the latest execution of the statement will be flagged. It is not possible to have more than one start time and one end time column.

Both the start_time_col and end_time_col parameters may be fully qualified column names, but they must both belong to the same DSET.
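
For example, assuming the DSET usage.data used in the example below, both parameters can be given as fully qualified column names:

timecolumns usage.data.START_TIME usage.data.END_TIME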

It is possible to use the same column as both the start and end times. In such cases, the usage record is treated as spanning 1 second. To do this, simply reference it twice in the statement:

timecolumns timestamp_col timestamp_col

Clearing the flagged timestamp columns

To clear both the start and end time columns, thus restoring the default behaviour of treating each record as spanning the entire day, the statement timecolumns clear may be used. Currently, timecolumns clear will only clear the timestamp columns in the default DSET.

This can be useful in the following use case (sketched as a script after this list):

  • The DSET is loaded and timestamp columns are created

  • finish is used to create a time-sensitive RDF

  • The timestamp columns are cleared

  • The DSET is renamed using the rename dset statement

  • Further processing is done on the DSET as required

  • finish is used to create a second RDF which is not time-sensitive
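
The sequence above might look as follows in a single transform script. This is a sketch: the column and DSET names are illustrative, and the exact rename dset syntax shown is an assumption (consult the rename statement reference for the authoritative form):

# Load the data, create the timestamp columns and flag them
import system/extracted/usage_data.csv source usage alias data
timestamp START_TIME using usageStartTime template YYYY.MM.DD.hh.mm.ss
timestamp END_TIME using usageEndTime template YYYY.MM.DD.hh.mm.ss
timecolumns START_TIME END_TIME

# Create a time-sensitive RDF
finish

# Clear the flags so each record spans the entire day again,
# then rename the DSET (syntax assumed)
timecolumns clear
rename dset usage.data to usage.daily

# Further processing on the DSET as required ...

# Create a second RDF which is not time-sensitive
finish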

Example

# Read data from file into a DSET called usage.data
import system/extracted/usage_data.csv source usage alias data

# Create two UNIX-format timestamp columns from the columns
# usageStartTime and usageEndTime, each of which records a time
# in the format '2017-05-17T17:00:00-07:00'
var template = YYYY.MM.DD.hh.mm.ss
timestamp START_TIME using usageStartTime template ${template}
timestamp END_TIME using usageEndTime template ${template}

# Flag the two columns we just created as being the start and
# end time columns
timecolumns START_TIME END_TIME

# Create the time sensitive DSET
finish
