Best Practices

Overview

This guide covers best practices for building reliable applications with the Reporting API. These recommendations focus on patterns specific to the Reporting API's architecture — schema discovery, query construction, the Continue Wait pattern, and data freshness.

Schema discovery and validation

Note

Golden Rule: Never hard-code measures, dimensions, or segments. Always discover them from /v1/meta.

Bad — hard-coded measures:

```javascript
const MEASURES = ['Orders.net_sales', 'Orders.tips_amount'];
```

Good — discovered at runtime:

```javascript
const metadata = await fetchMetadata();
const measures = metadata.cubes.Orders.measures.map(m => m.name);
```

The schema can evolve as new measures and dimensions are added. Hard-coded values will break silently when they no longer match the API.

Cache metadata appropriately

Recommended TTL: 1–24 hours depending on your use case.

  • Refresh at application startup
  • Refresh periodically (every 1–24 hours)
  • Refresh after "measure not found" or similar schema errors

If a requested measure or dimension is missing from metadata, fall back to an alternative rather than failing outright. This makes your integration resilient to schema changes.
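The refresh rules above can be sketched as a small TTL cache. In this sketch, fetchMetadata stands in for whatever helper calls /v1/meta in your client, and the 6-hour TTL is an illustrative choice within the 1–24 hour range.

```javascript
const METADATA_TTL_MS = 6 * 60 * 60 * 1000; // 6 hours, within the 1–24h guidance

let cachedMetadata = null;
let cachedAt = 0;

async function getMetadata(fetchMetadata, now = Date.now()) {
  // Serve from cache while the entry is still fresh.
  if (cachedMetadata && now - cachedAt < METADATA_TTL_MS) {
    return cachedMetadata;
  }
  cachedMetadata = await fetchMetadata();
  cachedAt = now;
  return cachedMetadata;
}

function invalidateMetadata() {
  // Call this after a "measure not found" or similar schema error
  // to force a refresh on the next request.
  cachedMetadata = null;
  cachedAt = 0;
}
```

Invalidating on schema errors covers the third bullet above: the next query triggers a fresh metadata fetch instead of reusing a stale copy.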

Validate queries before execution

Before sending a query to /v1/load, check that every measure, dimension, and segment in the request exists in the current metadata. This catches typos and stale references early and produces clearer error messages than a failed API call.
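A minimal pre-flight check might look like the following sketch. It assumes metadata shaped like the /v1/meta response used earlier in this guide: cubes keyed by name, each listing measures, dimensions, and segments with a name field.

```javascript
// Validate that every member referenced by a query exists in metadata.
// Throws with a clear message before any API call is made.
function validateQuery(query, metadata) {
  const known = new Set();
  for (const cube of Object.values(metadata.cubes)) {
    for (const m of cube.measures || []) known.add(m.name);
    for (const d of cube.dimensions || []) known.add(d.name);
    for (const s of cube.segments || []) known.add(s.name);
  }
  const requested = [
    ...(query.measures || []),
    ...(query.dimensions || []),
    ...(query.segments || []),
  ];
  const missing = requested.filter(name => !known.has(name));
  if (missing.length > 0) {
    throw new Error(`Unknown members: ${missing.join(', ')}`);
  }
}
```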

Query construction

Always use segments for report parity

Bad — manual filtering:

```json
{
  "measures": ["Orders.net_sales"],
  "filters": [{
    "member": "Orders.state",
    "operator": "equals",
    "values": ["COMPLETED"]
  }]
}
```

Good — use the segment:

```json
{
  "measures": ["Orders.net_sales"],
  "segments": ["Orders.closed_checks"]
}
```

Segments encapsulate business logic maintained by Square. Using them ensures your results match Square dashboard reports.

Specify explicit date ranges

Always include a dateRange in your timeDimensions. Open-ended queries can attempt to return years of data and time out.

```json
{
  "timeDimensions": [{
    "dimension": "Orders.sale_timestamp",
    "dateRange": ["2024-01-01", "2024-01-31"]
  }]
}
```

Use appropriate granularity

| Use Case | Granularity |
| --- | --- |
| Intra-day monitoring | hour |
| Daily reports | day |
| Weekly trends | week |
| Monthly analysis | month |
| Quarterly reports | quarter |
| Year-over-year | year |

Avoid using day granularity for a full year of data — use month or quarter to keep result sets manageable.
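For example, a year-long query at month granularity returns at most 12 rows instead of roughly 365. This sketch reuses the Orders.sale_timestamp dimension shown earlier in this guide:

```javascript
// A year of data aggregated by month rather than by day.
const yearByMonth = {
  measures: ['Orders.net_sales'],
  timeDimensions: [{
    dimension: 'Orders.sale_timestamp',
    granularity: 'month',
    dateRange: ['2024-01-01', '2024-12-31'],
  }],
};
```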

Limit and order results

Always set a limit to avoid unexpectedly large result sets, especially when using high-cardinality dimensions like customer_id.

Recommended limits:

  • UI display: 10–100
  • Export/analysis: 1,000–10,000
  • Pagination: 100–500 per page

Use order to sort results meaningfully — chronological for time series, descending by measure for rankings.
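As a sketch, a top-10 ranking query combines both settings. Orders.location_id is assumed here for illustration; discover the real dimension names from /v1/meta.

```javascript
// Top 10 locations by net sales: explicit limit plus descending order.
const topLocations = {
  measures: ['Orders.net_sales'],
  dimensions: ['Orders.location_id'],
  order: { 'Orders.net_sales': 'desc' },
  limit: 10,
};
```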

Batch multiple measures

Bad — three separate queries:

```javascript
const netSales = await query({ measures: ['Orders.net_sales'] });
const tips = await query({ measures: ['Orders.tips_amount'] });
const tax = await query({ measures: ['Orders.sales_tax_amount'] });
```

Good — single query:

```javascript
const data = await query({
  measures: [
    'Orders.net_sales',
    'Orders.tips_amount',
    'Orders.sales_tax_amount',
  ],
});
```

One query is faster and uses fewer API calls.

Minimize dimensions

Each additional dimension multiplies the result set size. Only include dimensions you actually need for your analysis.

Data freshness and caching

The Orders cube has a data freshness of approximately 15 minutes.

  • Historical data (yesterday and earlier) won't change — cache it aggressively (24 hours or longer)
  • Today's data — cache for 15 minutes to match the cube's refresh cycle
  • Metadata (/v1/meta) — cache for 1–24 hours

This simple tiered caching strategy significantly reduces API calls without sacrificing data accuracy.
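The tiers above can be sketched as a TTL chooser that picks a cache lifetime from the end of the queried date range. The values are the ones suggested in the list; the function itself is illustrative, not part of the API.

```javascript
// Choose a cache TTL: long for purely historical ranges,
// short (matching the ~15-minute refresh cycle) for ranges that include today.
function cacheTtlMs(dateRangeEnd, now = new Date()) {
  const startOfToday = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  const end = new Date(dateRangeEnd);
  if (end < startOfToday) {
    return 24 * 60 * 60 * 1000; // historical: 24 hours or longer
  }
  return 15 * 60 * 1000; // includes today: 15 minutes
}
```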

Error handling

Implement Continue Wait retry

The Reporting API uses a "Continue wait" pattern for queries that take time to compute: instead of blocking, the API returns a "Continue wait" response, and your client must re-send the same query until the results are ready.

Warning

Without Continue Wait handling, complex queries will appear to fail on the first attempt. This is the most common integration issue.
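A retry loop for this pattern might look like the sketch below. Here executeQuery stands in for whatever function POSTs your query to /v1/load and returns the parsed JSON body; the attempt count, delay, and the exact { error: 'Continue wait' } response shape are assumptions for illustration.

```javascript
// Re-send the same query until the server returns a real result.
async function loadWithContinueWait(executeQuery, query, maxAttempts = 10, delayMs = 1000) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await executeQuery(query);
    if (response.error !== 'Continue wait') {
      return response; // results are ready (or a real error surfaced)
    }
    // The server is still computing; wait briefly, then retry the same query.
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`Query still not ready after ${maxAttempts} attempts`);
}
```

Re-sending the identical query is important: the server recognizes it and returns the cached result once computation finishes.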

Handle schema evolution gracefully

When a preferred measure or dimension is unavailable in the current metadata, fall back to an alternative rather than crashing. This is especially important during schema transitions when measures may be temporarily renamed or replaced.
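A simple fallback helper, using hypothetical measure names for illustration:

```javascript
// Return the first candidate measure that exists in current metadata.
function pickMeasure(candidates, availableMeasures) {
  const available = new Set(availableMeasures);
  for (const name of candidates) {
    if (available.has(name)) return name;
  }
  throw new Error(`None of ${candidates.join(', ')} found in metadata`);
}
```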

Security

Store your Square access token in environment variables — never hard-code tokens in source code. Use separate tokens for sandbox and production environments, and rotate them periodically.

Validate any user-supplied input (date ranges, location IDs) before incorporating it into queries. Enforce reasonable limits on date range spans to prevent abuse.
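Input checks can be as simple as the following sketch; the 366-day cap is an example limit, not an API requirement.

```javascript
// Reject malformed, reversed, or excessively long date ranges
// before they reach a query.
function assertValidDateRange(start, end, maxDays = 366) {
  const s = new Date(start);
  const e = new Date(end);
  if (Number.isNaN(s.getTime()) || Number.isNaN(e.getTime())) {
    throw new Error('Invalid date');
  }
  if (e < s) throw new Error('End date precedes start date');
  const days = (e - s) / (24 * 60 * 60 * 1000);
  if (days > maxDays) throw new Error(`Date range exceeds ${maxDays} days`);
}
```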

Summary checklist

  • [ ] Metadata is discovered at runtime from /v1/meta, not hard-coded
  • [ ] Metadata cache has appropriate TTL (1–24 hours)
  • [ ] Queries are validated against current metadata before execution
  • [ ] Continue Wait retry logic is implemented
  • [ ] Explicit date ranges are specified in all queries
  • [ ] Orders.closed_checks segment is used for sales reports
  • [ ] Historical data is cached aggressively (24+ hours)
  • [ ] Result sets are limited to reasonable sizes
  • [ ] Access tokens are stored securely in environment variables