Detailed Solution Architecture & Implementation Steps
This technical implementation guide provides detailed explanations of HOW to build the GlobalTech Electronics data warehouse solution. It bridges the gap between business requirements (covered in the Scenario document) and actual code implementation (covered in the Script Reference document).
Each phase includes:
The complete ETL script is organized into 13 logical sections (tabs):
Before loading any data, you need to establish the foundation:
Without proper foundation:
Decision 1: Use Variables for Paths
Instead of hardcoding C:\QlikTraining\DataFiles\ everywhere, define a vSourcePath variable that can be changed once.
Decision 2: Use Variables for Business Rules
Instead of hardcoding thresholds, define variables such as vDiscountThreshold = 0.10.
Decision 3: Separate Source Creation
What to do:
Create two folders: C:\QlikTraining\DataFiles and C:\QlikTraining\DataFiles\Transformed
Why two folders:
DataFiles = Source system data (unchanged)
Transformed = Our processed data (our work product)
What to do:
Navigation tip: Data Load Editor is where ALL ETL code lives
What to do:
Create a folder data connection named DataFiles pointing to C:\QlikTraining\DataFiles
What this does: Tells Qlik where to find files using a logical name (lib://DataFiles/) instead of a physical path
Concept: Source Data Generation
In real projects, source data comes from databases, APIs, or file systems. For training, we'll create inline data and save it as QVD files to simulate real sources.
What is a QVD file?
QVD Benefits:
Create 5 Source Files:
Each source file needs its own script tab:
Create_Source_Products
Create_Source_Customers
Create_Source_Sales
Create_Source_Stores
Create_Source_Incremental
In each tab:
LOAD * INLINE to create data in memory
STORE TableName INTO [path] (qvd) to save as QVD
DROP TABLE TableName to remove from memory
Run the load: This creates all 5 QVD files in your DataFiles folder
Technique 1: INLINE Data Loading
LOAD * INLINE [
Field1, Field2, Field3
Value1, Value2, Value3
];
Technique 2: STORE Statement
STORE TableName INTO [lib://DataFiles/filename.qvd] (qvd);
Technique 3: DROP TABLE
DROP TABLE TableName;
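Putting the three techniques together, one source-creation tab looks like this. A sketch using the guide's Products source as the example; the inline rows shown here are placeholders, not the real training data:

```
DIM_Products:
LOAD * INLINE [
ProductID, ProductName, ListPrice
P001, Laptop Pro, 2400
P002, Wireless Mouse, 45
];

STORE DIM_Products INTO [lib://DataFiles/DIM_Products.qvd] (qvd);  // Technique 2: save as QVD on disk

DROP TABLE DIM_Products;  // Technique 3: remove from memory; the QVD file remains
```

The same three-statement pattern repeats in each of the five Create_Source tabs, changing only the table name and inline data.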
❌ Pitfall 1: Forgetting to create folders before running script
❌ Pitfall 2: Wrong folder path in connection
❌ Pitfall 3: Not dropping tables after STORE
✅ Folders exist: C:\QlikTraining\DataFiles and Transformed subfolder
✅ Data connection "DataFiles" appears in connections list
✅ Script runs without errors
✅ Five QVD files created:
Extraction means reading data from source systems into Qlik's memory so we can work with it.
Key principle: Extract first, transform later
Reason 1: Debugging
Reason 2: Auditability
Reason 3: Performance
Decision 1: Naming Convention - "_Raw" Suffix
Raw tables are named DIM_Products_Raw, DIM_Customers_Raw, etc.; they are later replaced by DIM_Products_Transformed versions
Decision 2: Load All Dimensions Before Facts
Decision 3: Date Formatting During Extraction
Create tab: Data_Load_Script
What to define:
Path Variables:
vSourcePath = Location of source QVDs
vTargetPath = Location for transformed QVDs
Business Rule Variables:
vDiscountThreshold = What discount level is "heavy" (10%)
vHighValueOrder = Dollar amount that qualifies as "high value" ($1000)
vCurrentYear = Current year for analysis
vCurrentDate = Today's date for timestamps
Date Variables:
vLastLoadDate = When did we last load data (for incremental)
Status Variables:
vActiveStatus = What value means "Active" product
vCompletedStatus = What value means order is completed
SET vs LET:
SET variable = value → Value stored as-is (text)
LET variable = expression → Expression evaluated first, result stored
Create tab section: Under Extract_Data_E
What to load:
Table name: DIM_Products_Raw, loaded from DIM_Products.qvd, key field ProductID
Fields to load:
Date formatting:
Date(LastModifiedDate, 'YYYY-MM-DD') as Product_LastModifiedDate
Date() function converts to Qlik date format
'YYYY-MM-DD' specifies the format pattern
The field is renamed to Product_LastModifiedDate to avoid conflicts later
What to load:
Table name: DIM_Customers_Raw, loaded from DIM_Customers.qvd, key field CustomerID
Fields to load:
Why geography fields matter:
What to load:
Table name: DIM_Stores_Raw, loaded from DIM_Stores.qvd, key field StoreID
Fields to load:
Country, Region, and City are renamed to Store_Country, Store_Region, Store_City
Why rename country/region/city?
Store_Country is clearly the store's location, not the customer's
What to load:
Table name: FACT_Sales_Raw, loaded from FACT_Sales.qvd, key fields CustomerID and ProductID
Fields to load:
WHERE Clause for Filtering:
WHERE ProcessedFlag = 0
Technique 1: LOAD FROM QVD
LOAD
Field1,
Field2
FROM [lib://DataFiles/filename.qvd] (qvd);
(qvd) tells Qlik this is QVD format
Technique 2: Optimized Load
Technique 3: Date Formatting
Date(DateField, 'YYYY-MM-DD') as FormattedDate
'YYYY-MM-DD' is the output format pattern
Technique 4: Field Renaming
OriginalField as NewField
Technique 5: WHERE Clause Filtering
WHERE ProcessedFlag = 0
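Techniques 1, 3, 4, and 5 combine naturally into one statement; a sketch of the sales extraction using the fields this guide loads in the incremental phase:

```
FACT_Sales_Raw:
LOAD
    OrderID,
    Date(OrderDate, 'YYYY-MM-DD') as OrderDate,   // Technique 3: date formatting
    CustomerID,                                   // keys kept unrenamed for joining
    ProductID,
    Quantity,
    Discount,
    ShippingCost,
    OrderStatus,
    ProcessedFlag
FROM [lib://DataFiles/FACT_Sales.qvd] (qvd)       // Technique 1: (qvd) specifier
WHERE ProcessedFlag = 0;                          // Technique 5: filtering
```

Note that the explicit field list, the Date() call, and the WHERE clause each make this a non-optimized load; Technique 2's fast path requires a plain field list with no transformations or filters.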
❌ Pitfall 1: Forgetting (qvd) format specifier
Always include (qvd) after the file path
❌ Pitfall 2: Loading all fields with * when you need to transform
LOAD * FROM file.qvd (qvd); ← Can't rename fields
❌ Pitfall 3: Date fields loading as text
Use the Date() function during load
❌ Pitfall 4: Not renaming conflicting fields
Rename to Store_Country, Customer_Country, etc.
✅ Script runs without errors
✅ Four tables visible in Data Model Viewer:
Transformation is where we add business intelligence to raw data:
Raw data answers "what happened"; transformed data answers "what does it mean."
Reason 1: Dimensions are stable
Reason 2: These attributes will be used in fact calculations
Reason 3: Keep transformations separate
Decision 1: Create New "_Transformed" Tables
Keep the _Raw tables intact; create new _Transformed tables with calculations
Decision 2: Use RESIDENT Loads
Read from tables already in memory (e.g., RESIDENT DIM_Products_Raw)
Decision 3: Document Business Rules in Comments
Create tab section: Transformed_Data_T1 (or similar)
Objective: Calculate product profitability and classify by price tier
Table name: DIM_Products_Transformed
Source: RESIDENT DIM_Products_Raw
Calculations to add:
1. ProfitPerUnit
ListPrice - UnitCost as ProfitPerUnit
2. ProfitMarginPercent
Round(((ListPrice - UnitCost) / ListPrice) * 100, 2) as ProfitMarginPercent
3. PriceTier Classification
If(ListPrice >= 2000, 'Premium',
If(ListPrice >= 1000, 'Mid-Range',
If(ListPrice >= 500, 'Standard', 'Budget'))) as PriceTier
4. IsActive Flag
If(Status = '$(vActiveStatus)', 1, 0) as IsActive
Why keep original fields?
After transformation:
DROP TABLE DIM_Products_Raw;
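The four calculations above can be combined into one RESIDENT load; a sketch following this guide's table and field names:

```
DIM_Products_Transformed:
LOAD
    *,                                                              // keep all original fields
    ListPrice - UnitCost as ProfitPerUnit,
    Round(((ListPrice - UnitCost) / ListPrice) * 100, 2) as ProfitMarginPercent,
    If(ListPrice >= 2000, 'Premium',
      If(ListPrice >= 1000, 'Mid-Range',
        If(ListPrice >= 500, 'Standard', 'Budget'))) as PriceTier,
    If(Status = '$(vActiveStatus)', 1, 0) as IsActive
RESIDENT DIM_Products_Raw;

DROP TABLE DIM_Products_Raw;    // raw version no longer needed
```

The leading `*,` is what preserves the original fields alongside the new calculations.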
Table name: DIM_Customers_Transformed
Source: RESIDENT DIM_Customers_Raw
Calculations to add:
1. CustomerTenureYears
Year(Today()) - Year(RegistrationDate) as CustomerTenureYears
Year() extracts year from date; Today() gets current date
2. LoyaltyTier
If(Year(Today()) - Year(RegistrationDate) >= 2, 'Gold',
If(Year(Today()) - Year(RegistrationDate) >= 1, 'Silver', 'Bronze')) as LoyaltyTier
3. MarketType
If(Region = 'North America' or Region = 'Europe', 'Developed', 'Emerging') as MarketType
Why these specific calculations?
After transformation:
DROP TABLE DIM_Customers_Raw;
Technique 1: RESIDENT Load
LOAD
*,
NewCalculation
RESIDENT SourceTable;
Technique 2: Arithmetic Calculations
Field1 - Field2 as Difference
Field1 * Field2 as Product
Field1 / Field2 as Ratio
Technique 3: Nested IF Statements
If(condition1, result1,
If(condition2, result2,
If(condition3, result3,
else_result))) as NewField
Technique 4: Date Functions
Today() → Current date
Year(date) → Extract year from date
Month(date) → Extract month
Date(value, format) → Format as date
Technique 5: Variable Usage in Expressions
If(Status = '$(vActiveStatus)', 1, 0)
$(variableName) expands to the variable's value
Technique 6: Round Function
Round(expression, decimals) as RoundedValue
Round(34.56789, 2) → 34.57
❌ Pitfall 1: Forgetting to keep original fields
LOAD
NewCalculation as Field1 ← Only calculated field
RESIDENT SourceTable;
Fix: include *, before the new fields
❌ Pitfall 2: Division by zero
Field1 / Field2 as Ratio ← What if Field2 = 0?
Fix: If(Field2 <> 0, Field1/Field2, 0)
❌ Pitfall 3: Wrong date arithmetic
Today() - RegistrationDate ← Returns days, not years
❌ Pitfall 4: Nested IF too deep
❌ Pitfall 5: Not using variables for business rules
If(Discount > 0.10, 'Heavy', 'Light') ← Hardcoded
Better: If(Discount > $(vDiscountThreshold), 'Heavy', 'Light')
✅ Script runs without errors
✅ Two new tables in memory:
The Star Schema Principle: Facts should contain keys and measures, dimensions should contain descriptions
Currently:
FACT_Sales_Raw has ProductID and CustomerID (just numbers)
Solution: JOIN dimension tables to fact table
A JOIN combines two tables based on matching values in a common field (the "key").
Types of Joins:
We use LEFT JOIN because:
Real-world example:
Decision 1: Join to Facts, Not Create Separate Tables
FACT_Sales_RawFACT_Sales_Raw becomes enriched with dimension attributesDecision 2: Rename Fields During Join (Critical!)
ProductName as Prod_ProductName, CustomerName as Cust_CustomerName, etc.
Decision 3: Join Products First, Then Customers
Create tab section: Extract_Data_E (continues from extraction)
Objective: Add product details to every sales transaction
What to do:
LEFT JOIN (FACT_Sales_Raw)
LOAD
ProductID,
ProductName as Prod_ProductName,
Category as Prod_Category,
... (all product fields, renamed with Prod_ prefix)
RESIDENT DIM_Products_Transformed;
How LEFT JOIN works:
Start with all rows of FACT_Sales_Raw; match on ProductID against DIM_Products_Transformed; FACT_Sales_Raw now has product columns
Fields to add (all renamed with Prod_ prefix):
Why keep "ProductID" without prefix?
What happens to FACT_Sales_Raw?
Create tab section: Transformed_Data_T2
Objective: Add customer details to every sales transaction
What to do:
LEFT JOIN (FACT_Sales_Raw)
LOAD
CustomerID,
CustomerName as Cust_CustomerName,
Country as Cust_Country,
... (all customer fields, renamed with Cust_ prefix)
RESIDENT DIM_Customers_Transformed;
Fields to add (all renamed with Cust_ prefix):
After this join:
FACT_Sales_Raw now has:
Result: Fully enriched fact table ready for final calculations
What is a Synthetic Key? When two or more tables share multiple field names (not just one key), Qlik automatically creates a synthetic key table.
Example of Problem:
Table 1: Sales (CustomerID, CustomerName, Country)
Table 2: Customers (CustomerID, CustomerName, Country)
Why Synthetic Keys are Bad:
How We Avoid Them:
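By renaming every non-key field during the join, only the intended key remains shared. A sketch using the Sales/Customers example above:

```
// Without renaming, Sales and Customers would share CustomerName and Country
// in addition to CustomerID, forcing Qlik to build a synthetic key.
LEFT JOIN (Sales)
LOAD
    CustomerID,                        // the one intentional link field
    CustomerName as Cust_CustomerName, // renamed: no longer a second shared field
    Country      as Cust_Country
RESIDENT Customers;
```

After the join there is exactly one common field (CustomerID), so no synthetic key is created.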
Technique 1: LEFT JOIN Syntax
LEFT JOIN (TargetTable)
LOAD
KeyField,
OtherFields
RESIDENT SourceTable;
LEFT JOIN (TableName) specifies which table to join to
Technique 2: Field Prefixing
FieldName as Prefix_FieldName
Technique 3: Join Keys
Technique 4: Reading Table in Memory
RESIDENT TableName
β Pitfall 1: Forgetting to rename fields
LEFT JOIN (FACT_Sales_Raw)
LOAD
ProductID,
ProductName, ← Not renamed!
Category ← Not renamed!
RESIDENT DIM_Products_Transformed;
β Pitfall 2: Renaming the key field
LEFT JOIN (FACT_Sales_Raw)
LOAD
ProductID as Prod_ProductID, ← WRONG!
...
β Pitfall 3: Using wrong join type
INNER JOIN (FACT_Sales_Raw) ← Drops non-matching sales
β Pitfall 4: Inconsistent prefixes
Product_Name as Prod_ProductName
Product_Category as Prd_Category ← Different prefix!
β Pitfall 5: Circular joins
LEFT JOIN (Table1) ... from Table2
LEFT JOIN (Table2) ... from Table1
✅ Script runs without errors
✅ FACT_Sales_Raw still has 15 rows (LEFT JOIN doesn't reduce)
✅ FACT_Sales_Raw now has many more columns:
Now we have:
This phase creates the final fact table that will be used for all analysis.
Option 1: Continue modifying FACT_Sales_Raw
Option 2: Create new FACT_Sales_Final table (WE CHOOSE THIS)
Decision 1: Calculate All Metrics in One Place
Decision 2: Use Meaningful Field Names in Output
Decision 3: Add Metadata Fields
Decision 4: Clean Up After Transformation
Create tab section: Transformed_Data_T1
Objective: Create FACT_Sales_Final with all business logic applied
Source: RESIDENT FACT_Sales_Raw
Table name: FACT_Sales_Final
Why: Business Ask #6 requires time-based analysis
Fields to add:
OrderYear
Year(OrderDate) as OrderYear
OrderMonth
Month(OrderDate) as OrderMonth
OrderQuarter
Ceil(Month(OrderDate)/3) as OrderQuarter
Ceil() rounds up: for Jan/Feb/Mar, Month/3 is between 0.33 and 1, so Ceil() gives 1 = Q1 (e.g., May: Ceil(5/3) = Ceil(1.67) = 2 = Q2)
Why Ceil() for Quarter?
OrderDayOfWeek
WeekDay(OrderDate) as OrderDayOfWeek
Customer Fields (remove Cust_ prefix):
CustomerID, ← Keep as-is (key field)
Cust_CustomerName as CustomerName,
Cust_Segment as Segment,
Cust_LoyaltyTier as LoyaltyTier,
Cust_MarketType as MarketType,
Cust_Country as Country,
Cust_Region as Region,
Cust_City as City,
Cust_TenureYears as CustomerTenureYears
Product Fields (remove Prod_ prefix):
ProductID, ← Keep as-is (key field)
Prod_ProductName as ProductName,
Prod_Category as Category,
Prod_SubCategory as SubCategory,
Prod_Brand as Brand,
Prod_PriceTier as PriceTier
Why remove prefixes now?
Original transaction fields:
Quantity,
Prod_ListPrice as ListPrice,
Prod_UnitCost as UnitCost,
Discount,
ShippingCost,
OrderStatus
Business Ask #1: Need accurate revenue calculations
GrossRevenue
Quantity * Prod_ListPrice as GrossRevenue
NetRevenue
Quantity * Prod_ListPrice * (1 - Discount) as NetRevenue
Why (1 - Discount)?
TotalCost
Quantity * Prod_UnitCost as TotalCost
Business Ask #3: Need profitability analysis
GrossProfit
(Quantity * Prod_ListPrice * (1 - Discount)) - (Quantity * Prod_UnitCost) as GrossProfit
DiscountAmount
Quantity * Prod_ListPrice * Discount as DiscountAmount
Business Ask #4: Categorize orders by value and discount level
OrderValueCategory
If((Quantity * Prod_ListPrice * (1 - Discount)) >= 1000,
'High Value',
'Standard') as OrderValueCategory
Note: this hardcodes 1000; the vHighValueOrder variable could be used instead
DiscountCategory
If(Discount > 0.10,
'Heavy Discount',
If(Discount > 0, 'Light Discount', 'No Discount')) as DiscountCategory
Note: this hardcodes 0.10; the vDiscountThreshold variable could be used instead
IsCompleted
If(OrderStatus = 'Completed', 1, 0) as IsCompleted
TotalTransactionValue
(Quantity * Prod_ListPrice * (1 - Discount)) + ShippingCost as TotalTransactionValue
IsCurrentYear
If(Year(OrderDate) = 2026, 1, 0) as IsCurrentYear
ProcessedFlag
1 as ProcessedFlag
ETL_LoadDate
'2026-01-29' as ETL_LoadDate
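Assembled, the final fact table is one RESIDENT load over the enriched raw table. An abridged sketch; the remaining renames and calculated fields from this phase follow the same pattern:

```
FACT_Sales_Final:
LOAD
    OrderID,
    OrderDate,
    Year(OrderDate)  as OrderYear,
    Month(OrderDate) as OrderMonth,
    Ceil(Month(OrderDate)/3) as OrderQuarter,
    CustomerID,
    Cust_CustomerName as CustomerName,      // prefixes removed for readability
    ProductID,
    Prod_ProductName  as ProductName,
    Quantity,
    Discount,
    Quantity * Prod_ListPrice as GrossRevenue,
    Quantity * Prod_ListPrice * (1 - Discount) as NetRevenue,
    Quantity * Prod_UnitCost  as TotalCost,
    (Quantity * Prod_ListPrice * (1 - Discount)) - (Quantity * Prod_UnitCost) as GrossProfit,
    1 as ProcessedFlag                      // metadata: marks row as loaded
RESIDENT FACT_Sales_Raw;
```

Every business calculation lives in this one statement, so the logic is documented in a single place.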
Important step BEFORE dropping:
STORE DIM_Products_Transformed INTO [$(vTargetPath)DIM_Products_Transformed.qvd] (qvd);
STORE DIM_Customers_Transformed INTO [$(vTargetPath)DIM_Customers_Transformed.qvd] (qvd);
This ensures:
After saving, now drop:
DROP TABLE FACT_Sales_Raw;
DROP TABLE DIM_Products_Transformed;
DROP TABLE DIM_Customers_Transformed;
Why drop these tables?
Technique 1: Complex Calculated Fields
(Field1 * Field2 * (1 - Field3)) - (Field1 * Field4) as Result
Technique 2: Ceil() Function
Ceil(value) as RoundedUp
Technique 3: Multiple Conditions in IF
If(condition1, 'Result1',
If(condition2, 'Result2', 'DefaultResult')) as Category
Technique 4: Literal Values
1 as ConstantField
'Fixed Text' as TextField
Technique 5: Year/Month/Quarter Functions
Year(date) → Extract year
Month(date) → Extract month (1-12)
Quarter(date) → Extract quarter (1-4); not used here due to potential parsing issues
WeekDay(date) → Day of week (0-6)
❌ Pitfall 1: Wrong order of operations
Quantity * ListPrice * 1 - Discount ← WRONG
Should be: Quantity * ListPrice * (1 - Discount)
❌ Pitfall 2: Dropping tables before saving
DROP TABLE DIM_Products_Transformed;
STORE DIM_Products_Transformed ... ← ERROR: Table doesn't exist
❌ Pitfall 3: Not handling NULL values
Field1 / Field2 ← What if Field2 is NULL?
❌ Pitfall 4: Inconsistent field naming
CustomerName ← Here
Customer_Name ← There
Cust_Name ← Somewhere else
❌ Pitfall 5: Forgetting to rename prefixed fields
Prod_ProductName, ← Still has prefix
Prod_Category ← Not user-friendly
✅ Script runs without errors
✅ FACT_Sales_Final table created with 15 rows
✅ All time fields present (OrderYear, OrderMonth, OrderQuarter, OrderDayOfWeek)
✅ All dimension fields renamed (CustomerName, ProductName, Segment, Category, etc.)
✅ All revenue calculations present (GrossRevenue, NetRevenue)
✅ All cost calculations present (TotalCost)
✅ All profit calculations present (GrossProfit, DiscountAmount)
✅ All classifications present (OrderValueCategory, DiscountCategory, IsCompleted)
✅ Metadata fields present (ProcessedFlag=1, ETL_LoadDate, IsCurrentYear)
✅ Original tables dropped (FACT_Sales_Raw, DIM_Products_Transformed, DIM_Customers_Transformed)
✅ Dimension QVDs saved before dropping
✅ Pick a row and manually verify calculations are correct
Manual verification example:
The Problem with Full Loads:
As data grows, full reloads become:
The Incremental Solution:
Benefits:
Step-by-step process:
Mark what's been loaded
Track what's already in warehouse
Filter for new data only
Apply same transformations
Append to history
Mark as processed
Decision 1: Use ProcessedFlag Not Dates
Decision 2: Reload Dimensions Each Time
Decision 3: Use NoConcatenate Keyword
Decision 4: Drop and Reload Pattern
Create tab section: Incremental_Load
Why reload dimensions?
What to do:
DIM_Products_Transformed:
LOAD * FROM [$(vTargetPath)DIM_Products_Transformed.qvd] (qvd);
DIM_Customers_Transformed:
LOAD * FROM [$(vTargetPath)DIM_Customers_Transformed.qvd] (qvd);
Tables now in memory:
Source: FACT_Sales_Incremental.qvd
What to load:
FACT_Sales_Incremental_Raw:
LOAD
OrderID,
Date(OrderDate, 'YYYY-MM-DD') as OrderDate,
CustomerID,
ProductID,
Quantity,
Discount,
ShippingCost,
OrderStatus,
ProcessedFlag
FROM [$(vSourcePath)FACT_Sales_Incremental.qvd] (qvd)
WHERE ProcessedFlag = 0;
Key points:
The table is named FACT_Sales_Incremental_Raw
Expected result: 5 new orders loaded (ORD016-ORD020)
Join with Products:
LEFT JOIN (FACT_Sales_Incremental_Raw)
LOAD
ProductID,
ProductName as Prod_ProductName,
Category as Prod_Category,
SubCategory as Prod_SubCategory,
Brand as Prod_Brand,
UnitCost as Prod_UnitCost,
ListPrice as Prod_ListPrice,
ProfitPerUnit as Prod_ProfitPerUnit,
PriceTier as Prod_PriceTier,
IsActive as Prod_IsActive
RESIDENT DIM_Products_Transformed;
Join with Customers:
LEFT JOIN (FACT_Sales_Incremental_Raw)
LOAD
CustomerID,
CustomerName as Cust_CustomerName,
Country as Cust_Country,
Region as Cust_Region,
City as Cust_City,
Segment as Cust_Segment,
CustomerTenureYears as Cust_TenureYears,
LoyaltyTier as Cust_LoyaltyTier,
MarketType as Cust_MarketType
RESIDENT DIM_Customers_Transformed;
Exactly the same as Phase 4 joins!
Critical: Use NoConcatenate:
NoConcatenate
FACT_Sales_Incremental_Transformed:
LOAD
... (all same transformations as Phase 5)
RESIDENT FACT_Sales_Incremental_Raw;
Why NoConcatenate?
FACT_Sales_Final already exists in memory; without NoConcatenate, the new load with identical field names would auto-concatenate to it
Transformations to apply (same as Phase 5):
Result:
FACT_Sales_Incremental_Transformed table created, separate from FACT_Sales_Final
Now explicitly concatenate:
CONCATENATE (FACT_Sales_Final)
LOAD * RESIDENT FACT_Sales_Incremental_Transformed;
What this does:
Appends all rows of FACT_Sales_Incremental_Transformed to FACT_Sales_Final
FACT_Sales_Final now has 15 + 5 = 20 rows
Why CONCATENATE is safe now:
Drop temporary tables:
DROP TABLE FACT_Sales_Incremental_Raw;
DROP TABLE FACT_Sales_Incremental_Transformed;
DROP TABLE DIM_Products_Transformed;
DROP TABLE DIM_Customers_Transformed;
Why drop dimensions again?
Final state:
Only FACT_Sales_Final remains in memory
Qlik's Auto-Concatenation Rule: When you create a table with identical field names to an existing table, Qlik automatically appends rows.
Example:
Table1:
LOAD
OrderID,
OrderDate,
Quantity
...
Table1: ← Same name!
LOAD
OrderID,
OrderDate,
Quantity
...
Result: One table "Table1" with all rows combined
Problem: If you're in middle of transformations, auto-concatenation happens too early.
Solution - Option 1: Different Table Names
Table1:
LOAD ... initial data
Table2: ← Different name
LOAD ... incremental data
CONCATENATE (Table1)
LOAD * RESIDENT Table2;
Solution - Option 2: NoConcatenate Keyword
Table1:
LOAD ... initial data
NoConcatenate ← Prevents auto-concat
Table2:
LOAD ... incremental data (would auto-concat without NoConcatenate)
CONCATENATE (Table1)
LOAD * RESIDENT Table2;
We use Option 2: Makes table relationships clear
Technique 1: WHERE with ProcessedFlag
WHERE ProcessedFlag = 0
Technique 2: NoConcatenate Keyword
NoConcatenate
TableName:
LOAD ...
Technique 3: Explicit CONCATENATE
CONCATENATE (TargetTable)
LOAD * RESIDENT SourceTable;
Technique 4: Reloading from QVD
TableName:
LOAD * FROM [path/file.qvd] (qvd);
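The four techniques line up into one incremental pass, end to end. An abridged sketch (the joins and Phase 5 transformations are elided; note this sketch writes the label before NoConcatenate, which is the documented statement order):

```
// 1. Restore dimensions saved by the initial load (Technique 4)
DIM_Products_Transformed:
LOAD * FROM [$(vTargetPath)DIM_Products_Transformed.qvd] (qvd);

// 2. Extract only unprocessed rows (Technique 1)
FACT_Sales_Incremental_Raw:
LOAD * FROM [$(vSourcePath)FACT_Sales_Incremental.qvd] (qvd)
WHERE ProcessedFlag = 0;

// ... joins and the same Phase 5 transformations go here ...

// 3. Build the transformed increment as its own table (Technique 2)
FACT_Sales_Incremental_Transformed:
NoConcatenate
LOAD * RESIDENT FACT_Sales_Incremental_Raw;   // real script applies the Phase 5 calculations here

// 4. Explicitly append to history (Technique 3)
CONCATENATE (FACT_Sales_Final)
LOAD * RESIDENT FACT_Sales_Incremental_Transformed;

// 5. Clean up working tables
DROP TABLE FACT_Sales_Incremental_Raw;
DROP TABLE FACT_Sales_Incremental_Transformed;
```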
❌ Pitfall 1: Auto-concatenation happens unexpectedly
FACT_Sales_Final:
LOAD ... (creates table)
FACT_Sales_Incremental_Transformed:
LOAD ... (auto-concatenates to FACT_Sales_Final!)
❌ Pitfall 2: Forgetting to reload dimensions
LEFT JOIN (FACT_Sales_Incremental_Raw)
LOAD ... RESIDENT DIM_Products_Transformed; ← ERROR: Table not found
❌ Pitfall 3: Inconsistent transformations
Initial: NetRevenue = Quantity * ListPrice * (1 - Discount)
Incremental: NetRevenue = Quantity * ListPrice ← Missing discount!
❌ Pitfall 4: Not updating ProcessedFlag
1 as ProcessedFlag ← Forgot this line
❌ Pitfall 5: Loading all data instead of new only
WHERE ProcessedFlag = 0 ← Missing this line
✅ Script runs without errors
✅ Dimensions reloaded successfully from QVD
✅ Incremental raw data loaded (5 rows)
✅ Joins completed (product and customer details added)
✅ Transformations applied (same logic as initial load)
✅ NoConcatenate created separate table
✅ CONCATENATE merged tables correctly
✅ FACT_Sales_Final now has 20 rows (15 initial + 5 incremental)
✅ All 20 rows have ProcessedFlag = 1
✅ Temporary tables dropped
✅ No duplicate records (check OrderID - should have ORD001-ORD020, no repeats)
✅ Calculations correct on incremental records (spot check a few)
Test incremental load works: Next time script runs:
The Problem: Customer C001 is currently in the "Corporate" segment. Next month, they change to "Consumer" segment. Traditional approach:
Before: C001 | Corporate
After: C001 | Consumer ← Lost history!
The Solution: SCD Type 2:
Version 1: C001 | Corporate | 2024-01-01 | 2024-05-31 | Not Current
Version 2: C001 | Consumer | 2024-06-01 | 9999-12-31 | Current
Type 0: No Changes Allowed
Type 1: Overwrite (No History)
Before: Address = "123 Old St"
After: Address = "456 New St" ← Old address lost
Type 2: Track Full History (WE USE THIS)
Row 1: Address = "123 Old St" | 2020-01-01 | 2024-12-31 | Not Current
Row 2: Address = "456 New St" | 2025-01-01 | 9999-12-31 | Current
Type 3: Previous Value Column
Current_Segment = "Consumer"
Previous_Segment = "Corporate"
Type 4: History Table
We choose Type 2 because:
Key Fields:
Example:
CustomerID | Segment | LoyaltyTier | EffectiveStartDate | EffectiveEndDate | IsCurrent | VersionNumber
-----------|------------|-------------|-------------------|-----------------|-----------|---------------
C001 | Consumer | Bronze | 2023-06-15 | 2024-05-31 | 0 | 1
C001 | Corporate | Silver | 2024-06-01 | 9999-12-31 | 1 | 2
Interpretation:
Decision 1: Track Segment and LoyaltyTier Changes
Decision 2: Use 9999-12-31 for "Current"
Decision 3: Handle Initial Load vs Updates
Decision 4: Close Old Versions When Creating New
Create tab section: SCD_Type2
Why reload?
What to do:
DIM_Customers_For_SCD:
LOAD
CustomerID,
CustomerName,
Segment,
Country,
Region,
City,
LoyaltyTier
FROM [$(vTargetPath)DIM_Customers_Transformed.qvd] (qvd);
Note: Only load fields relevant for SCD
The logic:
How to check:
IF FileSize('$(vTargetPath)DIM_Customers_SCD.qvd') > 0 THEN
... (handle updates)
ELSE
... (handle initial load)
ENDIF
FileSize() function:
A return value > 0 means the file exists
When: SCD file doesn't exist (ELSE branch)
What to do: Create version 1 for all customers
DIM_Customers_SCD_New:
LOAD
CustomerID,
CustomerName,
Segment,
Country,
Region,
City,
LoyaltyTier,
Date(Today()) as EffectiveStartDate,
Date('9999-12-31') as EffectiveEndDate,
1 as IsCurrent,
1 as VersionNumber
RESIDENT DIM_Customers_For_SCD;
Fields added:
Result: 10 rows, one per customer, all version 1
When: SCD file exists (THEN branch)
Steps:
Step 7.4.1: Load Existing SCD Data
DIM_Customers_SCD_Existing:
LOAD
CustomerID,
CustomerName,
Segment,
Country,
Region,
City,
LoyaltyTier,
EffectiveStartDate,
EffectiveEndDate,
IsCurrent,
VersionNumber
FROM [$(vTargetPath)DIM_Customers_SCD.qvd] (qvd);
Step 7.4.2: Get Current Active Records
DIM_Customers_Current:
LOAD
CustomerID,
Segment as Old_Segment,
LoyaltyTier as Old_LoyaltyTier,
VersionNumber as Old_Version
RESIDENT DIM_Customers_SCD_Existing
WHERE IsCurrent = 1;
What this does:
Step 7.4.3: Join to Find Changes
LEFT JOIN (DIM_Customers_For_SCD)
LOAD
CustomerID,
Old_Segment,
Old_LoyaltyTier,
Old_Version
RESIDENT DIM_Customers_Current;
What this does:
Step 7.4.4: Identify Changed Records
NoConcatenate
DIM_Customers_Changed:
LOAD *
RESIDENT DIM_Customers_For_SCD
WHERE Segment <> Old_Segment
OR LoyaltyTier <> Old_LoyaltyTier;
What this does:
Step 7.4.5: Check If Any Changes Detected
LET vChangesDetected = NoOfRows('DIM_Customers_Changed');
IF vChangesDetected > 0 THEN
... (create new versions)
ELSE
... (no changes, keep existing)
ENDIF
NoOfRows() function:
Step 7.4.6: Create New Versions (If Changes Detected)
NoConcatenate
DIM_Customers_SCD_New:
LOAD
CustomerID,
CustomerName,
Segment,
Country,
Region,
City,
LoyaltyTier,
Date(Today()) as EffectiveStartDate,
Date('9999-12-31') as EffectiveEndDate,
1 as IsCurrent,
Old_Version + 1 as VersionNumber
RESIDENT DIM_Customers_Changed;
What this does:
Step 7.4.7: Close Old Versions
CONCATENATE (DIM_Customers_SCD_New)
LOAD
CustomerID,
CustomerName,
Segment,
Country,
Region,
City,
LoyaltyTier,
EffectiveStartDate,
Date(Today()-1) as EffectiveEndDate,
0 as IsCurrent,
VersionNumber
RESIDENT DIM_Customers_SCD_Existing
WHERE IsCurrent = 1;
What this does:
Why Today()-1?
Step 7.4.8: Keep Unchanged Records
CONCATENATE (DIM_Customers_SCD_New)
LOAD *
RESIDENT DIM_Customers_SCD_Existing
WHERE IsCurrent = 0;
What this does:
Step 7.4.9: Handle No Changes
ELSE ← If vChangesDetected = 0
NoConcatenate
DIM_Customers_SCD_New:
LOAD * RESIDENT DIM_Customers_SCD_Existing;
What this does:
DROP TABLE DIM_Customers_Changed;
DROP TABLE DIM_Customers_SCD_Existing;
DROP TABLE DIM_Customers_For_SCD;
DROP TABLE DIM_Customers_Current;
Result: Only DIM_Customers_SCD_New remains
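The control flow of steps 7.4.1-7.4.9 fits one IF/ELSE skeleton. An abridged sketch with the field lists elided; the TRACE line is just a placeholder for the versioning steps:

```
IF FileSize('$(vTargetPath)DIM_Customers_SCD.qvd') > 0 THEN

    // Update run: load history, compare, version
    DIM_Customers_SCD_Existing:
    LOAD * FROM [$(vTargetPath)DIM_Customers_SCD.qvd] (qvd);

    // ... steps 7.4.2-7.4.4: isolate current rows, join old values, find changes ...

    LET vChangesDetected = NoOfRows('DIM_Customers_Changed');
    IF vChangesDetected > 0 THEN
        TRACE Changes found - creating new versions;   // steps 7.4.6-7.4.8 go here
    ELSE
        // Step 7.4.9: no changes - keep existing history as-is
        DIM_Customers_SCD_New:
        NoConcatenate
        LOAD * RESIDENT DIM_Customers_SCD_Existing;
    ENDIF

ELSE

    // First run: every customer starts at version 1, open-ended
    DIM_Customers_SCD_New:
    LOAD *,
        Date(Today()) as EffectiveStartDate,
        Date('9999-12-31') as EffectiveEndDate,
        1 as IsCurrent,
        1 as VersionNumber
    RESIDENT DIM_Customers_For_SCD;

ENDIF
```

Seeing the skeleton first makes it easier to slot each numbered step into the right branch.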
Technique 1: FileSize() Function
IF FileSize('path/file.qvd') > 0 THEN
... file exists
ELSE
... file doesn't exist
ENDIF
Technique 2: NoOfRows() Function
LET vCount = NoOfRows('TableName');
Technique 3: Date Arithmetic
Date(Today()) as StartDate
Date(Today()-1) as EndDate
Date('9999-12-31') as FarFuture
Technique 4: Comparison in WHERE
WHERE Field1 <> Field2
<> means "not equal"
Technique 5: OR Condition
WHERE Condition1 OR Condition2
Technique 6: Version Incrementing
Old_Version + 1 as VersionNumber
❌ Pitfall 1: Not handling first run separately
IF FileSize() > 0 THEN
... only has update logic, no ELSE
ENDIF
❌ Pitfall 2: Forgetting to close old versions
Missing the "update EffectiveEndDate" step
❌ Pitfall 3: Off-by-one date errors
New start: 2024-06-01
Old end: 2024-06-01 ← Overlap! Should be 2024-05-31
❌ Pitfall 4: Not using NoConcatenate
DIM_Customers_SCD_New: ← Might auto-concatenate!
LOAD ...
❌ Pitfall 5: Dropping tables in wrong order
DROP TABLE DIM_Customers_SCD_Existing;
CONCATENATE (DIM_Customers_SCD_New)
LOAD * RESIDENT DIM_Customers_SCD_Existing; ← ERROR!
❌ Pitfall 6: Not tracking all changed attributes
WHERE Segment <> Old_Segment ← Only checking Segment
Missing the LoyaltyTier check
✅ Script runs without errors on first run (no SCD file)
✅ DIM_Customers_SCD_New created with 10 rows
✅ All rows have VersionNumber = 1
✅ All rows have IsCurrent = 1
✅ All rows have EffectiveStartDate = Today
✅ All rows have EffectiveEndDate = 9999-12-31
✅ SCD QVD file saved
✅ Script runs without errors on second run (SCD file exists)
✅ No changes detected → Same 10 rows
✅ Test with actual change:
To test SCD properly:
Why We Need a Calendar Table:
Sales transactions have dates, but dates alone don't answer questions like:
Calendar table provides:
Option 1: Calculate in Charts (DON'T DO THIS)
Chart expression: Quarter(OrderDate)
Problem: Calculated for every chart, every time
Option 2: Master Calendar (BEST PRACTICE)
Calendar table: Pre-calculated Quarter field
Charts: Just use Quarter field
Decision 1: Generate from Data Not Fixed Range
Decision 2: Use AUTOGENERATE for Date Sequences
Decision 3: Include Business Logic
Decision 4: Link to Facts via OrderDate
Create tab section: Master_Calendar
What to do:
TempDates:
LOAD
Min(OrderDate) as MinDate,
Max(OrderDate) as MaxDate
RESIDENT FACT_Sales_Final;
What this does:
Result: One row with MinDate and MaxDate
What to do:
LET vMinDate = Peek('MinDate', 0, 'TempDates');
LET vMaxDate = Peek('MaxDate', 0, 'TempDates');
DROP TABLE TempDates;
Peek() function:
Syntax: Peek('FieldName', RowNumber, 'TableName')
Why use variables?
DROP TABLE TempDates:
Concept: Create one row for each date in range
Traditional SQL approach:
SELECT date FROM date_table
WHERE date BETWEEN '2024-01-15' AND '2024-02-14'
Qlik approach - AUTOGENERATE:
TempCalendar:
LOAD
Date($(vMinDate) + IterNo() - 1) as TempDate
AUTOGENERATE 1
WHILE $(vMinDate) + IterNo() - 1 <= $(vMaxDate);
How this works:
AUTOGENERATE 1:
IterNo():
WHILE condition:
Date arithmetic:
Iteration 1: $(vMinDate) + 1 - 1 = $(vMinDate) + 0 = Min date
Iteration 2: $(vMinDate) + 2 - 1 = $(vMinDate) + 1 = Min date + 1 day
Iteration 3: $(vMinDate) + 3 - 1 = $(vMinDate) + 2 = Min date + 2 days
...
Iteration N: $(vMinDate) + N - 1 = Max date (stops here)
Example: If MinDate = 2024-01-15 and MaxDate = 2024-02-14:
Result: TempCalendar with 31 rows, one per date
Create full calendar from dates:
MasterCalendar:
LOAD
TempDate as CalendarDate,
Week(TempDate) as Week,
Year(TempDate) as Year,
Month(TempDate) as Month,
Day(TempDate) as Day,
Ceil(Month(TempDate)/3) as Quarter,
WeekDay(TempDate) as WeekDay,
WeekName(TempDate) as WeekName,
MonthName(TempDate) as MonthName,
'Q' & Ceil(Month(TempDate)/3) as QuarterName,
Year(TempDate) & '-Q' & Ceil(Month(TempDate)/3) as YearQuarter,
If(Month(TempDate) >= 4,
Year(TempDate),
Year(TempDate) - 1) as FiscalYear,
If(WeekDay(TempDate) >= 0 and WeekDay(TempDate) <= 4, 1, 0) as IsBusinessDay,
If(TempDate = Today(), 1, 0) as IsToday,
If(MonthName(TempDate) = MonthName(Today()), 1, 0) as IsCurrentMonth,
If(Year(TempDate) = Year(Today()), 1, 0) as IsCurrentYear
RESIDENT TempCalendar;
DROP TABLE TempCalendar;
Attribute Explanations:
CalendarDate:
Week(TempDate):
Year(TempDate):
Month(TempDate):
Day(TempDate):
Ceil(Month(TempDate)/3) as Quarter:
WeekDay(TempDate):
WeekName(TempDate):
MonthName(TempDate):
'Q' & Ceil(Month(TempDate)/3) as QuarterName:
Year(TempDate) & '-Q' & Ceil(Month(TempDate)/3) as YearQuarter:
FiscalYear Calculation:
If(Month(TempDate) >= 4,
Year(TempDate),
Year(TempDate) - 1) as FiscalYear
IsBusinessDay:
If(WeekDay(TempDate) >= 0 and WeekDay(TempDate) <= 4, 1, 0)
IsToday:
If(TempDate = Today(), 1, 0)
IsCurrentMonth:
If(MonthName(TempDate) = MonthName(Today()), 1, 0)
IsCurrentYear:
If(Year(TempDate) = Year(Today()), 1, 0)
Link Calendar to Sales
Add sales indicator:
LEFT JOIN (MasterCalendar)
LOAD DISTINCT
OrderDate as CalendarDate,
1 as HasSales
RESIDENT FACT_Sales_Final;
What this does:
DISTINCT keyword:
Why add HasSales?:
Result:
Technique 1: Min/Max Aggregate Functions
Min(Field) as MinValue
Max(Field) as MaxValue
Technique 2: Peek() Function
LET vVariable = Peek('FieldName', RowNumber, 'TableName');
Technique 3: AUTOGENERATE with WHILE
LOAD
Expression
AUTOGENERATE 1
WHILE condition;
Technique 4: IterNo() Function
Technique 5: Date Arithmetic
Date + Number = Date plus that many days
Technique 6: String Concatenation
'Text' & Field & 'MoreText' as CombinedField
The & operator joins strings
Technique 7: LOAD DISTINCT
LOAD DISTINCT Field1, Field2
Technique 8: Date Functions
Week(date) → Week number
Year(date) → Year
Month(date) → Month number (1-12)
Day(date) → Day of month
WeekDay(date) → Day of week (0-6)
WeekName(date) → Week identifier
MonthName(date) → Month identifier
❌ Pitfall 1: Off-by-one in date generation
Date($(vMinDate) + IterNo()) as TempDate ← Wrong!
Should be: $(vMinDate) + IterNo() - 1
❌ Pitfall 2: Wrong fiscal year logic
If(Month(TempDate) > 4, ...) ← Should be >=
❌ Pitfall 3: Confusing WeekDay numbering
If(WeekDay(TempDate) >= 1 and WeekDay(TempDate) <= 5, ...) ← Wrong!
❌ Pitfall 4: Not using Date() function
$(vMinDate) + IterNo() - 1 as TempDate ← Might be number
❌ Pitfall 5: Forgetting DROP TABLE TempCalendar
Missing: DROP TABLE TempCalendar
❌ Pitfall 6: Calendar doesn't cover all sales dates
Cause: used a fixed date range instead of Min/Max
✅ Script runs without errors
✅ Variables vMinDate and vMaxDate set correctly
✅ TempCalendar created with correct number of rows
✅ MasterCalendar has attributes:
Manual verification:
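The steps above can be combined into one script tab. The following is a minimal sketch, not the finished training script: it assumes the fact table is FACT_Sales_Final with an OrderDate field, and it derives only a few of the attributes listed earlier.

```
// 1. Find the date range actually present in the data
TempRange:
LOAD
    Min(OrderDate) as MinDate,
    Max(OrderDate) as MaxDate
RESIDENT FACT_Sales_Final;

// Num() makes the variables plain numbers, safe for $() arithmetic below
LET vMinDate = Num(Peek('MinDate', 0, 'TempRange'));
LET vMaxDate = Num(Peek('MaxDate', 0, 'TempRange'));
DROP TABLE TempRange;

// 2. Generate one row per day (note the "- 1" that avoids Pitfall 1)
TempCalendar:
LOAD
    Date($(vMinDate) + IterNo() - 1) as TempDate
AUTOGENERATE 1
WHILE $(vMinDate) + IterNo() - 1 <= $(vMaxDate);

// 3. Derive calendar attributes, then drop the helper table (Pitfall 5)
MasterCalendar:
LOAD
    TempDate as CalendarDate,
    Year(TempDate) as Year,
    Month(TempDate) as Month,
    If(Month(TempDate) >= 4, Year(TempDate), Year(TempDate) - 1) as FiscalYear,
    If(WeekDay(TempDate) >= 0 and WeekDay(TempDate) <= 4, 1, 0) as IsBusinessDay
RESIDENT TempCalendar;

DROP TABLE TempCalendar;
```

Because the range comes from Min/Max rather than hard-coded dates, the calendar automatically grows as new sales dates arrive (avoiding Pitfall 6).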
Layer 1: Source (Raw Data)
Layer 2: Transformed (Business Logic Applied)
Layer 3: Visualization (Optimized for Dashboards)
Reason 1: Speed
Reason 2: Incremental Loading Support
Reason 3: Reusability
Reason 4: Backup/Audit
Reason 5: Development Efficiency
Decision 1: Save Multiple Versions of Fact Table
FACT_Sales_Transformed.qvd: Used for next incremental load
FACT_Sales_Complete.qvd: Used for visualization
Decision 2: Drop Tables After Saving
Decision 3: Save SCD and Calendar
DIM_Customers_SCD.qvd: Needed for historical trackingMasterCalendar.qvd: Reusable across reloadsDecision 4: Organized Folder Structure
DataFiles/ → Source (never modify)
DataFiles/Transformed/ → Our transformed data
Create tab section: Save_Transformed_Data
Save for incremental loading:
STORE FACT_Sales_Final INTO [$(vTargetPath)FACT_Sales_Transformed.qvd] (qvd);
Save for visualization:
STORE FACT_Sales_Final INTO [$(vTargetPath)FACT_Sales_Complete.qvd] (qvd);
Why save twice?
Products:
STORE DIM_Products_Transformed INTO [$(vTargetPath)DIM_Products_Transformed.qvd] (qvd);
Customers:
STORE DIM_Customers_Transformed INTO [$(vTargetPath)DIM_Customers_Transformed.qvd] (qvd);
Stores:
STORE DIM_Stores_Raw INTO [$(vTargetPath)DIM_Stores_Transformed.qvd] (qvd);
Note: DIM_Stores_Raw (not transformed)
STORE DIM_Customers_SCD_New INTO [$(vTargetPath)DIM_Customers_SCD.qvd] (qvd);
Critical: This is the historical tracking table
STORE MasterCalendar INTO [$(vTargetPath)MasterCalendar.qvd] (qvd);
Why save?
DROP TABLE DIM_Stores_Raw;
DROP TABLE FACT_Sales_Final;
DROP TABLE DIM_Customers_SCD_New;
DROP TABLE MasterCalendar;
Why drop after saving?
Current state in memory: EMPTY
Performance Comparison:
Loading from:
CSV file: 1000 rows/second
Database: 5000 rows/second
QVD (optimized): 100,000+ rows/second
Optimized vs Non-Optimized Load:
Optimized Load (fastest):
LOAD * FROM file.qvd (qvd);
Non-Optimized Load (still fast, but slower):
LOAD
Field1,
Field2,
Field3 * 2 as DoubleField3 ← Transformation
FROM file.qvd (qvd);
Technique 1: STORE Statement
STORE TableName INTO [lib://Connection/path/filename.qvd] (qvd);
(qvd) specifies format
Technique 2: Multiple STORE of Same Table
STORE Table1 INTO [path/fileA.qvd] (qvd);
STORE Table1 INTO [path/fileB.qvd] (qvd);
Technique 3: STORE then DROP Pattern
STORE TableName INTO [path/file.qvd] (qvd);
DROP TABLE TableName;
Technique 4: File Path with Variables
[$(vTargetPath)filename.qvd]
❌ Pitfall 1: DROP before STORE
DROP TABLE MyTable;
STORE MyTable INTO ... ← ERROR: Table doesn't exist!
❌ Pitfall 2: Forgetting to save before dropping
DROP TABLE DIM_Products_Transformed; ← Lost the data!
❌ Pitfall 3: Wrong file extension
STORE Table INTO [path/file.txt] (qvd);
❌ Pitfall 4: Forgetting (qvd) format specifier
STORE Table INTO [path/file.qvd]; ← No format specified
❌ Pitfall 5: Using same filename for different content
STORE DIM_Products INTO [path/Dimensions.qvd] (qvd);
STORE DIM_Customers INTO [path/Dimensions.qvd] (qvd); ← Overwrites!
❌ Pitfall 6: Saving to wrong folder
STORE Table INTO [$(vSourcePath)file.qvd] (qvd); ← Wrong folder!
Should be: [$(vTargetPath)file.qvd]
✅ Script runs without errors
✅ All tables saved successfully
✅ Check Windows Explorer - files exist in Transformed folder:
File size expectations:
Test QVD files: Create a new script tab:
TEST:
LOAD * FROM [$(vTargetPath)FACT_Sales_Complete.qvd] (qvd);
Run → Should load successfully → Confirms QVD is valid
ETL is complete, but...
Why a Separate Loading Phase?
Reason 1: Different Requirements
Reason 2: Field Naming
Reason 3: Selective Loading
Reason 4: Multiple Apps
What is Star Schema?
Components:
Benefits:
Decision 1: Qualify Dimension Fields
Decision 2: Keep SCD Table Separate
Customer_History is standalone (not linked)
Decision 3: Fully Qualify SCD Fields
Decision 4: Load Dimensions with Specific Fields
Don't use LOAD * for dimensions
Create tab section: Load_Data_for_Viz
What to load:
Sales:
LOAD * FROM [$(vTargetPath)FACT_Sales_Complete.qvd] (qvd);
Table name: Sales (not FACT_Sales_Final)
Fields included:
This is the center of star schema
With qualified fields:
Products:
LOAD
ProductID,
ProductName as Dim_ProductName,
Category as Dim_Category,
SubCategory as Dim_SubCategory,
Brand as Dim_Brand,
UnitCost as Dim_UnitCost,
ListPrice as Dim_ListPrice,
Status,
Product_LastModifiedDate,
ProfitPerUnit as Dim_ProfitPerUnit,
ProfitMarginPercent,
PriceTier as Dim_PriceTier,
IsActive
FROM [$(vTargetPath)DIM_Products_Transformed.qvd] (qvd);
Key points:
Why rename?
Sales table has ProductName, Category, Brand
Products table would also have ProductName, Category, Brand
With qualified fields:
Customers:
LOAD
CustomerID,
CustomerName as Dim_CustomerName,
Country as Dim_Country,
Region as Dim_Region,
City as Dim_City,
Segment as Dim_Segment,
RegistrationDate,
Customer_LastModifiedDate,
CustomerTenureYears as Dim_CustomerTenureYears,
LoyaltyTier as Dim_LoyaltyTier,
MarketType as Dim_MarketType
FROM [$(vTargetPath)DIM_Customers_Transformed.qvd] (qvd);
Same pattern:
Relationship:
Minimal changes needed:
Stores:
LOAD
StoreID,
StoreName,
Store_Country,
Store_Region,
Store_City,
StoreType,
OpeningDate,
Store_LastModifiedDate
FROM [$(vTargetPath)DIM_Stores_Transformed.qvd] (qvd);
Why minimal changes?
Usage:
Load all fields:
Calendar:
LOAD * FROM [$(vTargetPath)MasterCalendar.qvd] (qvd);
Why LOAD * here?
Relationship:
Usage in dashboards:
Fully qualified to prevent linking:
Customer_History:
LOAD
CustomerID as History_CustomerID,
CustomerName as History_CustomerName,
Segment as History_Segment,
Country as History_Country,
Region as History_Region,
City as History_City,
LoyaltyTier as History_LoyaltyTier,
EffectiveStartDate as History_StartDate,
EffectiveEndDate as History_EndDate,
IsCurrent as History_IsCurrent,
VersionNumber as History_Version
FROM [$(vTargetPath)DIM_Customers_SCD.qvd] (qvd);
Critical: ALL fields prefixed
Why separate?
Usage:
Tables in memory:
Relationships:
Products.ProductID = Sales.ProductID
Customers.CustomerID = Sales.CustomerID
Calendar.CalendarDate = Sales.OrderDate
Standalone:
Field naming in model:
Technique 1: Selective Field Loading
LOAD
Field1,
Field2,
Field3 as RenamedField3
FROM file.qvd (qvd);
Technique 2: Consistent Prefixing
Dim_FieldName → Dimension fields
History_FieldName → SCD fields
Store_FieldName → Store-specific fields
Technique 3: Strategic Key Field Handling
ProductID, ← Never rename key fields
ProductName as Dim_ProductName ← Rename descriptive fields
Technique 4: Intentional Relationship Breaking
CustomerID as History_CustomerID ← Breaks link
❌ Pitfall 1: Not renaming duplicate fields
LOAD
ProductID,
ProductName, ← Also in Sales!
Category ← Synthetic key created
FROM file.qvd (qvd);
❌ Pitfall 2: Renaming key fields unintentionally
LOAD
ProductID as Dim_ProductID, ← WRONG!
...
❌ Pitfall 3: Inconsistent prefixes
Dim_ProductName
Prd_Category ← Different prefix!
Product_Brand ← Yet another prefix!
❌ Pitfall 4: Loading unnecessary fields
LOAD * FROM file.qvd (qvd); ← Loads everything
❌ Pitfall 5: Forgetting to load a table
Missing: the Calendar table is never loaded
❌ Pitfall 6: Wrong table names in visualization layer
FACT_Sales_Final: ← ETL naming used in the visualization layer
LOAD ...
✅ Script runs without errors
✅ All 6 tables loaded successfully
✅ Data Model Viewer shows:
Visual validation in Data Model Viewer:
Test with filter:
End-to-End Test:
Test 1: Revenue Calculation
Pick order ORD001:
- Quantity: 2
- ListPrice: 1299
- Discount: 0.05
- Expected GrossRevenue: 2 × 1299 = 2598
- Expected NetRevenue: 2598 × 0.95 = 2468.10
- Verify these values in Sales table
Test 2: Join Integrity
Pick order ORD001:
- ProductID: P001
- Should have: ProductName = "iPhone 14 Pro"
- Should have: Category = "Electronics"
- CustomerID: C001
- Should have: CustomerName = "John Smith"
- Should have: Segment = "Corporate"
Test 3: Time Intelligence
Pick order on 2024-01-15:
- OrderYear should be: 2024
- OrderMonth should be: 1
- OrderQuarter should be: 1
- Match to Calendar table:
- Year: 2024
- Month: 1
- Quarter: 1
Test 4: Business Classifications
Find order with >$1000 net revenue:
- OrderValueCategory should be: "High Value"
Find order with >10% discount:
- DiscountCategory should be: "Heavy Discount"
Test 5: Incremental Load
Initial load: 15 orders (ORD001-ORD015)
After incremental: 20 orders (ORD001-ORD020)
Check: No duplicates, all have ProcessedFlag = 1
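A quick way to check for duplicates after the incremental load is to count rows per order key. This is a sketch, assuming the fact table is loaded as Sales with an OrderID field; the table names DupCheck and Duplicates are illustrative:

```
// Count how many rows each order contributes
DupCheck:
LOAD
    OrderID,
    Count(OrderID) as RowsPerOrder
RESIDENT Sales
GROUP BY OrderID;

// Keep only orders appearing more than once; NoConcatenate
// prevents auto-append back into DupCheck
Duplicates:
NoConcatenate
LOAD OrderID, RowsPerOrder
RESIDENT DupCheck
WHERE RowsPerOrder > 1;

DROP TABLE DupCheck;

// Expect zero rows here after a correct incremental load
LET vDupCount = NoOfRows('Duplicates');
```

If vDupCount is greater than 0, the WHERE NOT Exists() filter in the incremental concatenation step is the first place to look.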
Test 6: SCD Versioning
Initial load:
- 10 customers
- All VersionNumber = 1
- All IsCurrent = 1
After change (simulate):
- Edit source: Change C001 Segment to "Consumer"
- Reload
- Check C001 has 2 versions:
- Ver 1: Corporate, IsCurrent=0, EndDate=Yesterday
- Ver 2: Consumer, IsCurrent=1, EndDate=9999-12-31
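One way to inspect C001's versions is to load them from the SCD QVD into a scratch table. A sketch (SCD_Check is an illustrative name; field and path names follow the examples above):

```
// Pull every version of customer C001 from the historical tracking QVD
SCD_Check:
LOAD
    CustomerID,
    Segment,
    VersionNumber,
    IsCurrent,
    EffectiveStartDate,
    EffectiveEndDate
FROM [$(vTargetPath)DIM_Customers_SCD.qvd] (qvd)
WHERE CustomerID = 'C001';

// Expect exactly one row with IsCurrent = 1 (the Consumer version);
// the old Corporate row should be closed with an EffectiveEndDate
```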
Load Time Targets:
Memory Usage:
Completeness:
Accuracy:
Consistency:
Issue 1: Synthetic Keys
Issue 2: Wrong Row Counts
Issue 3: NULL Values in Calculations
Issue 4: Dates Not Sorting
Issue 5: Incremental Loading All Data
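The usual fix for Issue 5 is the classic incremental pattern: load only rows changed since the last reload, then append history from the QVD while skipping keys already in memory. A hedged sketch, in which the LastModifiedDate field and the vLastReload variable are assumptions about your source:

```
// 1. Load only new/changed rows from the source
FACT_Sales:
LOAD *
FROM [$(vSourcePath)Source_Sales.qvd] (qvd)
WHERE LastModifiedDate > '$(vLastReload)';

// 2. Append historical rows, skipping any OrderID already loaded above
CONCATENATE (FACT_Sales)
LOAD *
FROM [$(vTargetPath)FACT_Sales_Transformed.qvd] (qvd)
WHERE NOT Exists(OrderID);

// 3. Remember this run's timestamp for the next reload
LET vLastReload = Timestamp(Now());
```

If step 1's WHERE clause is missing or vLastReload is never persisted, every reload pulls the full source again, which is exactly the symptom described in Issue 5.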
Load Data:
LOAD Field1, Field2 FROM [file.qvd] (qvd);
LOAD Field1, Field2 RESIDENT TableName;
LOAD * INLINE [ ... ];
Transform:
Field1 as NewName
Field1 * Field2 as Calculated
If(condition, true_result, false_result) as ConditionalField
Join:
LEFT JOIN (TargetTable) LOAD ... RESIDENT SourceTable;
Concatenate:
CONCATENATE (TargetTable) LOAD ... RESIDENT SourceTable;
NoConcatenate TableName: LOAD ...
Save/Drop:
STORE TableName INTO [file.qvd] (qvd);
DROP TABLE TableName;
Control:
IF condition THEN ... ELSE ... ENDIF
WHERE condition
Date:
Date(value, format) - Format as date
Today() - Current date
Year(date) - Extract year
Month(date) - Extract month (1-12)
Day(date) - Extract day
WeekDay(date) - Day of week (0-6)
WeekName(date) - Week identifier
MonthName(date) - Month identifier
Math:
Round(number, decimals) - Round to decimals
Ceil(number) - Round up
Floor(number) - Round down
Min(field) - Minimum value
Max(field) - Maximum value
String:
'Text' & Field - Concatenate
Upper(text) - Uppercase
Lower(text) - Lowercase
Aggregate:
Sum(field) - Total
Count(field) - Count
Avg(field) - Average
Special:
Peek('Field', row, 'Table') - Read from table
NoOfRows('Table') - Count rows
FileSize('path') - Check file exists
IterNo() - Loop counter
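The "Special" functions above are often combined for post-reload validation. A minimal sketch, with table and path names following earlier examples:

```
// How many fact rows were loaded this run
LET vRowCount = NoOfRows('Sales');

// Peek row 0 = first row; row -1 = last row
LET vFirstOrder = Peek('OrderID', 0, 'Sales');
LET vLastOrder  = Peek('OrderID', -1, 'Sales');

// FileSize returns NULL if the file does not exist
If FileSize('lib://DataFiles/Transformed/FACT_Sales_Complete.qvd') > 0 then
    Trace QVD exists and is non-empty;
End if
```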