Loading tools¶
The loading tools allow you to bring models, data, and their schemas onto the platform via the UI.
Note
If you want to use the optional Business segment tag to classify objects, it is advisable to define the segments before you load anything (Initial set-up).
Load schema¶
Schemas are an important part of how TRAC understands Data and Model objects. They can be embedded into Data or Model objects, or loaded as independent Schema objects and re-used.
Schema objects are loaded as .csv files from a repository.
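For orientation, the sketch below writes a small schema-style file of the kind that could be loaded here. It is an illustration only: the field names are invented, the column headers are an assumption rather than something taken from this guide, and the exact layout expected for schema files may differ in your TRAC version. The field types shown follow TRAC's basic types (e.g. STRING, FLOAT, DATE).

```python
# Illustrative only: writes a small schema-style .csv with assumed column
# headers (field_name, field_type, label). Check the schema file format
# documented for your TRAC version before relying on this layout.
import pandas as pd

schema = pd.DataFrame([
    {"field_name": "loan_id",  "field_type": "STRING", "label": "Loan identifier"},
    {"field_name": "balance",  "field_type": "FLOAT",  "label": "Outstanding balance"},
    {"field_name": "org_date", "field_type": "DATE",   "label": "Origination date"},
])

schema.to_csv("loan_schema.csv", index=False)
```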
First log in via the UI, then select the branch and commit in which the schema file is located and click ‘Select file’. A pop-up window will appear showing the files that can be selected.
By default, only .csv files are shown, but you can toggle to view the entire file structure.

Once a schema file is selected, you can preview it before uploading.

To complete the upload you must provide a Key, Name and Description, and decide whether the object will appear in search results. You can optionally assign a Business segment.

These attributes are required whenever an object (Schema, Data, Model, Flow or Job) is created. They can be changed later via the (Update Tags) page, but it is better to set them correctly upfront.
Note
See Schema validation for more details on how TRAC uses object schemas when building flows and jobs.
Load data¶
Select ‘Upload a data set’ from either the homepage or the main drop-down menu.

Note
Data files up to 50GB can be loaded via the UI, in .xlsx or .csv format. In TRAC PROFESSIONAL, parquet files can also be loaded via the UI and data can be imported using an ImportData job.
Locate and select the file, then click ‘Get schema’. TRAC will scan the file and infer the schema. Once you see ‘Import 100% complete’, click ‘Upload’.

Note
If the file does not contain data laid out in a single table structure, you may see an error message.
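If you want to check this locally before uploading, a minimal sketch (using pandas, outside of TRAC; the file name is hypothetical) is:

```python
# Quick local sanity check, outside TRAC: the file should parse as one
# rectangular table. "loan_book.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("loan_book.csv")   # or pd.read_excel("loan_book.xlsx")
print(df.shape)     # (rows, columns) of the single table
print(df.dtypes)    # roughly the kind of information TRAC will infer
```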
The file has been selected but its contents are not yet imported; you need to review and label it first.
The top of the page displays system-generated tags (‘File details’) and a preview of the first 100 rows (‘Imported data’).

Lower down, you confirm the schema. The first tab gives you the option to edit the inferred schema directly.

The second tab allows you to select and apply a pre-loaded Schema object. The drop-down contains the subset of schemas (see Load schema) that appear to match the data.

Note
Using a pre-loaded schema is advised because the inferred schema only considers the first 100 rows, and some aspects of the inference (e.g. distinguishing float from integer and identifying categorical fields) are not foolproof.
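As a simple illustration of why sample-based inference can mislead (shown here with pandas, not TRAC’s own inference): a column that is integer-valued overall can be inferred as float if the sampled rows happen to include blanks.

```python
import io
import pandas as pd

# Two sampled rows where "balance" has a blank: pandas infers float64,
# even though the full column may be integer-valued.
sample = io.StringIO("loan_id,balance\nA1,\nA2,250\n")
print(pd.read_csv(sample).dtypes)
```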
You must assign the general attributes (Name, Key, Description, Search Y/N) to complete the upload. Once it completes, a message will appear showing the new object ID, which can be used to Search for an object.
Import model¶
Models are imported from a repository.
First select the repository from the drop-down, then click ‘Authorize’. You will be asked to provide credentials according to the repository’s required authorisation method (see View & manage tenant resources).

Note
If you do not see a repository that you are expecting to see, it is likely an issue with the resource configuration. See View & manage tenant resources for details on how to manage your repositories.
Once authorized, select the branch and commit, then click ‘Select file’ to open a pop-up showing the file structure and the available models.

Models are uploaded one at a time. Pick a file and click ‘Save’ once the inner pop-up shows it as ‘selected’.

The pop-up should now close, with your target model selected and some system-generated tags displayed.

To complete the upload, confirm the general attributes and click ‘Upload model’. A pop-up should tell you that the ImportModel job has started.

This page will not tell you the outcome of the model import job, so navigate to the (Find a job) page, where it should already be shown as ‘succeeded’.
If the import is successful, you can find the Model via (Object search) and use it immediately.
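For reference, the file you import is typically a source file defining a model class against the TRAC runtime API. The sketch below is an assumption for illustration: it uses the open-source Python runtime API (tracdap.rt.api) with invented field and parameter names, and is not taken from this guide; consult the modelling documentation for the exact API in your TRAC version.

```python
# A minimal sketch of a model file, assuming the tracdap Python runtime API.
# Field, parameter and dataset names are invented for illustration.
import tracdap.rt.api as trac


class ProjectionModel(trac.TracModel):

    def define_parameters(self):
        return trac.define_parameters(
            trac.P("multiplier", trac.BasicType.FLOAT, label="Multiplier applied to balances"))

    def define_inputs(self):
        loans = trac.define_input_table(
            trac.F("loan_id", trac.BasicType.STRING, label="Loan identifier"),
            trac.F("balance", trac.BasicType.FLOAT, label="Outstanding balance"))
        return {"loans": loans}

    def define_outputs(self):
        projected = trac.define_output_table(
            trac.F("loan_id", trac.BasicType.STRING, label="Loan identifier"),
            trac.F("projected_balance", trac.BasicType.FLOAT, label="Projected balance"))
        return {"projected": projected}

    def run_model(self, ctx: trac.TracContext):
        # Read the input dataset, apply the parameter, write the output dataset
        loans = ctx.get_pandas_table("loans")
        loans["projected_balance"] = loans["balance"] * ctx.get_parameter("multiplier")
        ctx.put_pandas_table("projected", loans[["loan_id", "projected_balance"]])
```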