diff --git a/documentation/architecture/time-series-optimizations.md b/documentation/architecture/time-series-optimizations.md
index 58bfaf418..e6f5cae75 100644
--- a/documentation/architecture/time-series-optimizations.md
+++ b/documentation/architecture/time-series-optimizations.md
@@ -30,7 +30,7 @@ sequential reads, materialized views, and in-memory processing.
/>
- **Out-of-order data:**
- When data arrives out of order, QuestDB [rearranges it](/docs/concepts/partitions/#splitting-and-squashing-time-partitions) to maintain timestamp order. The
+ When data arrives out of order, QuestDB [rearranges it](/docs/concepts/partitions/#partition-splitting-and-squashing) to maintain timestamp order. The
engine splits partitions to minimize [write amplification](/docs/getting-started/capacity-planning/#write-amplification) and compacts them in the background.
diff --git a/documentation/concepts/deduplication.md b/documentation/concepts/deduplication.md
index a58e7a52c..a36e6ea0f 100644
--- a/documentation/concepts/deduplication.md
+++ b/documentation/concepts/deduplication.md
@@ -6,9 +6,18 @@ description:
when reloading data.
---
+import Screenshot from "@theme/Screenshot"
+
Deduplication ensures that only one row exists for a given set of key columns.
When a new row matches an existing row's keys, the old row is replaced.
+
+<Screenshot
+  alt="Diagram showing a new row replacing an existing row with matching deduplication keys"
+  src="images/docs/concepts/deduplication.svg"
+/>
+
## When to use deduplication
**Use deduplication when:**
@@ -38,15 +47,15 @@ DEDUP UPSERT KEYS(ts, ticker);
With this configuration, each `(ts, ticker)` combination can have only one row:
```questdb-sql
-INSERT INTO prices VALUES ('2024-01-15T10:00:00', 'AAPL', 185.50);
-INSERT INTO prices VALUES ('2024-01-15T10:00:00', 'AAPL', 186.00); -- replaces previous
+INSERT INTO prices VALUES ('2026-01-15T10:00:00', 'AAPL', 185.50);
+INSERT INTO prices VALUES ('2026-01-15T10:00:00', 'AAPL', 186.00); -- replaces previous
SELECT * FROM prices;
```
| ts | ticker | price |
|----|--------|-------|
-| 2024-01-15T10:00:00 | AAPL | 186.00 |
+| 2026-01-15T10:00:00 | AAPL | 186.00 |
Only the last value is kept.
diff --git a/documentation/concepts/designated-timestamp.md b/documentation/concepts/designated-timestamp.md
index 9405549cd..ab26a4eea 100644
--- a/documentation/concepts/designated-timestamp.md
+++ b/documentation/concepts/designated-timestamp.md
@@ -5,6 +5,8 @@ description:
Why every QuestDB table should have a designated timestamp and how to set one.
---
+import Screenshot from "@theme/Screenshot"
+
Every table in QuestDB should have a designated timestamp. This column defines
the time axis for your data and unlocks QuestDB's core time-series capabilities
including partitioning, time-series joins, and optimized interval scans.
@@ -13,6 +15,13 @@ Without a designated timestamp, a table behaves like a generic append-only
store - you lose partitioning, efficient time-range queries, and most
time-series SQL features.
+
+<Screenshot
+  alt="Diagram of the designated timestamp defining a table's time axis"
+  src="images/docs/concepts/designatedTimestamp.svg"
+/>
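+
+A minimal example - declaring the designated timestamp at creation time (table
+and column names are illustrative):
+
+```questdb-sql
+CREATE TABLE sensors (
+  ts TIMESTAMP,
+  device SYMBOL,
+  value DOUBLE
+) TIMESTAMP(ts) PARTITION BY DAY;
+```
+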
## Why it matters
The designated timestamp is not just metadata - it determines how QuestDB
diff --git a/documentation/concepts/partitions.md b/documentation/concepts/partitions.md
index 9e98b51fb..38b008595 100644
--- a/documentation/concepts/partitions.md
+++ b/documentation/concepts/partitions.md
@@ -6,205 +6,164 @@ description:
feature that will help you craft more efficient queries.
---
-[Database partitioning](/glossary/database-partitioning/) is the technique that
-splits data in a large database into smaller chunks in order to improve the
-performance and scalability of the database system.
+QuestDB partitions tables by time intervals, storing each interval's data in a
+separate directory. This physical separation is fundamental to time-series
+performance - it allows the database to skip irrelevant time ranges entirely
+during queries and enables efficient data lifecycle management.
-QuestDB offers the option to partition tables by intervals of time. Data for
-each interval is stored in separate sets of files.
+## Why partition
+
+Partitioning provides significant benefits for time-series workloads:
+
+- **Query performance**: The SQL optimizer skips partitions outside your query's
+ time range. A query for "last hour" on a table with years of data reads only
+ one partition, not the entire table.
+- **Data lifecycle**: Drop old data instantly with
+ [DROP PARTITION](/docs/query/sql/alter-table-drop-partition/) - no expensive
+  DELETE operations. Detach partitions to cold storage, reattach when needed
+  (see the sketch after this list).
+- **Write efficiency**: Out-of-order data only rewrites affected partitions, not
+ the entire table. Smaller partitions mean less write amplification.
+- **Concurrent access**: Different partitions can be written and read
+ simultaneously without contention.
+
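+For example, removing or archiving a whole day of data is a partition-level
+operation rather than a row-by-row delete (table and dates are illustrative):
+
+```questdb-sql
+-- Drop all partitions older than 7 days
+ALTER TABLE trades DROP PARTITION WHERE ts < dateadd('d', -7, now());
+
+-- Detach a partition for cold storage; re-attach it later if needed
+ALTER TABLE trades DETACH PARTITION LIST '2025-12-31';
+```
+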
+## How partitions work
+
+Partitioning requires a [designated timestamp](/docs/concepts/designated-timestamp/)
+column. QuestDB uses this timestamp to determine which partition stores each row.
import Screenshot from "@theme/Screenshot"
-## Properties
+Each partition is a directory on disk named by its time interval. Inside, each
+column is stored as a separate file (`.d` for data, plus index files for
+[SYMBOL](/docs/concepts/symbol/) columns).
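+
+Because partition boundaries are known, a time-bounded query only touches the
+matching directories. A sketch (table name is illustrative):
+
+```questdb-sql
+-- Reads only the partitions overlapping January 15th
+SELECT * FROM trades WHERE ts IN '2026-01-15';
+```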
+
+## Choosing a partition interval
+
+Available intervals: `HOUR`, `DAY`, `WEEK`, `MONTH`, `YEAR`, or `NONE`.
-- Partitioning is only possible on tables with a
- [designated timestamp](/docs/concepts/designated-timestamp/).
-- Available partition intervals are `NONE`, `YEAR`, `MONTH`, `WEEK`, `DAY`, and
- `HOUR`.
-- Partitions are defined at table creation. For more information, refer to the
- [CREATE TABLE section](/docs/query/sql/create-table/).
+| Interval | Best for | Typical row count per partition |
+|----------|----------|--------------------------------|
+| `HOUR` | High-frequency data (>1M rows/day) | 100K - 10M |
+| `DAY` | Most time-series workloads | 1M - 100M |
+| `WEEK` | Lower-frequency data | 5M - 500M |
+| `MONTH` | Aggregated or sparse data | 10M - 1B |
+| `YEAR` | Very sparse or archival data | 100M+ |
-### Default partitioning by creation method
+**Guidelines:**
+- Target partitions with 1-100 million rows each
+- Smaller partitions = faster out-of-order writes, more directories to manage
+- Larger partitions = fewer directories, but slower writes for late data
+- Match your most common query patterns (if you query by day, partition by day)
-| Creation method | Default partition | WAL enabled? | Supports dedup/replication? |
-|-----------------|-------------------|--------------|------------------------------|
-| SQL `CREATE TABLE` (no `PARTITION BY`) | `NONE` | No | No |
-| SQL `CREATE TABLE` (with `PARTITION BY`) | As specified | Yes | Yes |
-| ILP auto-created tables | `DAY` | Yes | Yes |
+For ILP (InfluxDB Line Protocol) ingestion, the default is `DAY`. Change it via
+`line.default.partition.by` in `server.conf`.
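+
+For instance, to default auto-created ILP tables to hourly partitions (a
+minimal sketch of that property):
+
+```ini title="server.conf"
+line.default.partition.by=HOUR
+```
+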
-**This difference matters.** Tables without partitioning cannot use WAL, which means
-they don't support concurrent writes, deduplication, or replication.
+## Creating partitioned tables
-When using SQL, always specify `PARTITION BY` for time-series tables:
+Specify partitioning at table creation:
```questdb-sql
-CREATE TABLE prices (ts TIMESTAMP, price DOUBLE)
-TIMESTAMP(ts) PARTITION BY DAY; -- Explicitly partitioned
+CREATE TABLE trades (
+ ts TIMESTAMP,
+ symbol SYMBOL,
+ price DOUBLE,
+ amount DOUBLE
+) TIMESTAMP(ts) PARTITION BY DAY;
```
-The ILP default (`PARTITION BY DAY`) can be changed via `line.default.partition.by`
-in `server.conf`.
+### Default behavior by creation method
+
+| Creation method | Default partition |
+|-----------------|-------------------|
+| SQL `CREATE TABLE` (no `PARTITION BY`) | `NONE` |
+| SQL `CREATE TABLE` (with `PARTITION BY`) | As specified |
+| ILP auto-created tables | `DAY` |
### Partition directory naming
-The naming convention for partition directories is as follows:
-
-| Table Partition | Partition format |
-| --------------- | ---------------- |
-| `HOUR` | `YYYY-MM-DDTHH` |
-| `DAY` | `YYYY-MM-DD` |
-| `WEEK` | `YYYY-Www` |
-| `MONTH` | `YYYY-MM` |
-| `YEAR` | `YYYY` |
-
-## Advantages of adding time partitions
-
-We recommend partitioning tables to benefit from the following advantages:
-
-- Reducing disk IO for timestamp interval searches. This is because our SQL
- optimizer leverages partitioning.
-- Significantly improving calculations and seek times. This is achieved by
- leveraging the chronology and relative immutability of data for previous
- partitions.
-- Separating data files physically. This makes it easy to implement file
- retention policies or extract certain intervals.
-- Enables out-of-order indexing. Heavily out-of-order commits can
- [split the partitions](#splitting-and-squashing-time-partitions) into parts to
- reduce
- [write amplification](/docs/getting-started/capacity-planning/#write-amplification).
-
-## Checking time partition information
-
-The following SQL keyword and function are implemented to present the partition
-information of a table:
-
-- The SQL keyword [SHOW PARTITIONS](/docs/query/sql/show/#show-partitions)
- returns general partition information for the selected table.
-- The function [table_partitions('tableName')](/docs/query/functions/meta/)
- returns the same information as `SHOW PARTITIONS` and can be used in a
- `SELECT` statement to support more complicated queries such as `WHERE`,
- `JOIN`, and `UNION`.
-
-## Splitting and squashing time partitions
-
-Heavily out-of-order commits, i.e. commits that contain newer and older
-timestamps, can split the partitions into parts to reduce write amplification.
-When data is merged into an existing partition as a result of an out-of-order
-insert, the partition will be split into two parts: the prefix sub-partition and
-the suffix sub-partition.
-
-A partition split happens when both of the following are true:
-
-- The prefix size is bigger than the combination of the suffix and the rows to
- be merged.
-- The estimated prefix size on disk is higher than
- `cairo.o3.partition.split.min.size` (50MB by default).
-
-Partition split is iterative and therefore a partition can be split into more
-than two parts after several commits. To control the number of parts QuestDB
-squashes them together following the following principles:
-
-- For the last (yearly, ..., hourly) partition, its parts are squashed together
- when the number of parts exceeds `cairo.o3.last.partition.max.splits` (20 by
- default).
-- For all the partitions except the last one, the QuestDB engine squashes them
- aggressively to maintain only one physical partition at the end of every
- commit.
-
-All partition operations (ALTER TABLE
-[ATTACH](/docs/query/sql/alter-table-attach-partition/)/
-[DETACH](/docs/query/sql/alter-table-detach-partition/)/
-[DROP](/docs/query/sql/alter-table-drop-partition/) PARTITION) do not
-consider partition splits as individual partitions and work on the table
-partitioning unit (year, week, ..., hour).
-
-For example, when a daily partition consisting of several parts is dropped, all
-the parts belonging to the given date are dropped. Similarly, when the multipart
-daily partition is detached, it is squashed into a single piece first and then
-detached.
-
-### Examples
-
-For example, Let's consider the following table `x`:
-
-```questdb-sql title="Create Demo Table"
-CREATE TABLE x AS (
- SELECT
- cast(x as int) i,
- - x j,
- rnd_str(5, 16, 2) as str,
- timestamp_sequence('2023-02-04T00', 60 * 1000L) ts
- FROM long_sequence(60 * 23 * 2 * 1000)
-) timestamp (ts) PARTITION BY DAY WAL;
-```
+| Interval | Directory format | Example |
+|----------|------------------|---------|
+| `HOUR` | `YYYY-MM-DDTHH` | `2026-01-15T09` |
+| `DAY` | `YYYY-MM-DD` | `2026-01-15` |
+| `WEEK` | `YYYY-Www` | `2026-W03` |
+| `MONTH` | `YYYY-MM` | `2026-01` |
+| `YEAR` | `YYYY` | `2026` |
-```questdb-sql title="Show Partitions from Demo Table"
-SHOW PARTITIONS FROM x;
-```
+## Inspecting partitions
-| index | partitionBy | name | minTimestamp | maxTimestamp | numRows | diskSize | diskSizeHuman | readOnly | active | attached | detached | attachable |
-| ----- | ----------- | ---------- | --------------------------- | --------------------------- | ------- | --------- | ------------- | -------- | ------ | -------- | -------- | ---------- |
-| 0 | DAY | 2023-02-04 | 2023-02-04T00:00:00.000000Z | 2023-02-04T23:59:59.940000Z | 1440000 | 71281136 | 68.0 MiB | FALSE | FALSE | TRUE | FALSE | FALSE |
-| 1 | DAY | 2023-02-05 | 2023-02-05T00:00:00.000000Z | 2023-02-05T21:59:59.940000Z | 1320000 | 100663296 | 96.0 MiB | FALSE | TRUE | TRUE | FALSE | FALSE |
+Use `SHOW PARTITIONS` or the `table_partitions()` function:
+
+```questdb-sql
+SHOW PARTITIONS FROM trades;
+```
-Inserting an out-of-order row:
+| index | partitionBy | name | minTimestamp | maxTimestamp | numRows | diskSizeHuman |
+|-------|-------------|------|--------------|--------------|---------|---------------|
+| 0 | DAY | 2026-01-15 | 2026-01-15T00:00:00Z | 2026-01-15T23:59:59Z | 1440000 | 68.0 MiB |
+| 1 | DAY | 2026-01-16 | 2026-01-16T00:00:00Z | 2026-01-16T12:30:00Z | 750000 | 35.2 MiB |
-```questdb-sql title="Insert Demo Rows"
-INSERT INTO x (ts) VALUES ('2023-02-05T21');
+The `table_partitions()` function returns the same data and can be used in
+queries with `WHERE`, `JOIN`, or `UNION`:
-SHOW PARTITIONS FROM x;
+```questdb-sql
+SELECT name, numRows, diskSizeHuman
+FROM table_partitions('trades')
+WHERE numRows > 1000000;
```
-| index | partitionBy | name | minTimestamp | maxTimestamp | numRows | diskSize | diskSizeHuman | readOnly | active | attached | detached | attachable |
-| ----- | ----------- | ------------------------ | --------------------------- | --------------------------- | ------- | -------- | ------------- | -------- | ------ | -------- | -------- | ---------- |
-| 0 | DAY | 2023-02-04 | 2023-02-04T00:00:00.000000Z | 2023-02-04T23:59:59.940000Z | 1440000 | 71281136 | 68.0 MiB | FALSE | FALSE | TRUE | FALSE | FALSE |
-| 1 | DAY | 2023-02-05 | 2023-02-05T00:00:00.000000Z | 2023-02-05T20:59:59.880000Z | 1259999 | 65388544 | 62.4 MiB | FALSE | FALSE | TRUE | FALSE | FALSE |
-| 2 | DAY | 2023-02-05T205959-880001 | 2023-02-05T20:59:59.940000Z | 2023-02-05T21:59:59.940000Z | 60002 | 83886080 | 80.0 MiB | FALSE | TRUE | TRUE | FALSE | FALSE |
-
-To merge the new partition part back to the main partition for downgrading:
+## Storage on disk
-```questdb-sql title="Squash Partitions"
-ALTER TABLE x SQUASH PARTITIONS;
+A partitioned table's directory structure:
-SHOW PARTITIONS FROM x;
+```
+db/trades/
+├── 2026-01-15/ # Partition directory
+│ ├── ts.d # Timestamp column data
+│ ├── symbol.d # Symbol column data
+│ ├── symbol.k # Symbol column index
+│ ├── symbol.v # Symbol column values
+│ ├── price.d # Price column data
+│ └── amount.d # Amount column data
+├── 2026-01-16/
+│ ├── ts.d
+│ └── ...
+└── _txn # Transaction metadata
```
-| index | partitionBy | name | minTimestamp | maxTimestamp | numRows | diskSize | diskSizeHuman | readOnly | active | attached | detached | attachable |
-| ----- | ----------- | ---------- | --------------------------- | --------------------------- | ------- | -------- | ------------- | -------- | ------ | -------- | -------- | ---------- |
-| 0 | DAY | 2023-02-04 | 2023-02-04T00:00:00.000000Z | 2023-02-04T23:59:59.940000Z | 1440000 | 71281136 | 68.0 MiB | FALSE | FALSE | TRUE | FALSE | FALSE |
-| 1 | DAY | 2023-02-05 | 2023-02-05T00:00:00.000000Z | 2023-02-05T21:59:59.940000Z | 1320001 | 65388544 | 62.4 MiB | FALSE | TRUE | TRUE | FALSE | FALSE |
+## Partition splitting and squashing
-## Storage example
+When out-of-order data arrives for an existing partition, QuestDB may split that
+partition to avoid rewriting all its data. This is an optimization for write
+performance.
-Each partition effectively is a directory on the host machine corresponding to
-the partitioning interval. In the example below, we assume a table `trips` that
-has been partitioned using `PARTITION BY MONTH`.
+A split occurs when both of the following are true (see the tuning sketch
+after this list):
+- The existing partition prefix is larger than the new data plus suffix
+- The prefix exceeds `cairo.o3.partition.split.min.size` (default: 50MB)
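+
+A tuning sketch - raising the threshold to roughly 100MB, assuming the
+property takes a plain byte count:
+
+```ini title="server.conf"
+# ~100MB in bytes; default is 50MB
+cairo.o3.partition.split.min.size=104857600
+```
+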
-```
-[quest-user trips]$ dir
-2017-03 2017-10 2018-05 2019-02
-2017-04 2017-11 2018-06 2019-03
-2017-05 2017-12 2018-07 2019-04
-2017-06 2018-01 2018-08 2019-05
-2017-07 2018-02 2018-09 2019-06
-2017-08 2018-03 2018-10
-2017-09 2018-04 2018-11
-```
+Split partitions appear with timestamp suffixes in `SHOW PARTITIONS`:
-Each partition on the disk contains the column data files of the corresponding
-timestamp interval.
+| name | numRows |
+|------|---------|
+| 2026-01-15 | 1259999 |
+| 2026-01-15T205959-880001 | 60002 |
+QuestDB automatically squashes splits:
+- Non-active partitions: squashed at end of each commit
+- Active (latest) partition: squashed when splits exceed
+ `cairo.o3.last.partition.max.splits` (default: 20)
+
+To manually squash all splits:
+
+```questdb-sql
+ALTER TABLE trades SQUASH PARTITIONS;
```
-[quest-user 2019-06]$ dir
-_archive cab_type.v dropoff_latitude.d ehail_fee.d
-cab_type.d congestion_surcharge.d dropoff_location_id.d extra.d
-cab_type.k dropoff_datetime.d dropoff_longitude.d fare_amount.d
-```
+
+Partition operations (`ATTACH`, `DETACH`, `DROP`) treat all splits of a
+partition as a single unit.
diff --git a/documentation/concepts/ttl.md b/documentation/concepts/ttl.md
index abe4aa98b..51d11ba33 100644
--- a/documentation/concepts/ttl.md
+++ b/documentation/concepts/ttl.md
@@ -1,117 +1,134 @@
---
title: Time To Live (TTL)
sidebar_label: Time To Live (TTL)
-description: Conceptual overview of the time-to-live feature in QuestDB. Use it to limit data size.
+description: Automatic data retention in QuestDB - configure TTL to automatically drop old partitions.
---
-If you're interested in storing and analyzing only recent data with QuestDB, you
-can configure a time-to-live (TTL) for the table data. Both the `CREATE TABLE`
-and `ALTER TABLE` commands support the `TTL` clause.
+TTL (Time To Live) automatically drops old partitions based on data age. Set a
+retention period, and QuestDB removes partitions that fall entirely outside that
+window - no cron jobs or manual cleanup required.
-TTL provides automatic data retention by dropping old partitions without manual
-intervention. For manual control over partition removal, see
-[Data Retention](/docs/operations/data-retention/) which covers the
-`DROP PARTITION` command.
+import Screenshot from "@theme/Screenshot"
-This feature works as follows:
+
+<Screenshot
+  alt="Diagram of TTL dropping partitions that fall fully outside the retention window"
+  src="images/docs/concepts/ttl.svg"
+/>
-1. The age of the data is measured by the most recent timestamp stored in the table
-2. As you keep inserting time-series data, the age of the oldest data starts
- exceeding its TTL limit
-3. When **all** the data in a partition becomes stale, the partition as a whole
- becomes eligible to be dropped
-4. QuestDB detects a stale partition and drops it as a part of the commit
- operation
+## Requirements
-To be more precise, the latest timestamp stored in a given partition does not
-matter. Instead, QuestDB considers the entire time period for which a partition
-is responsible. As a result, it will drop the partition only when the end of
-that period falls behind the TTL limit. This is a compromise that favors a low
-overhead of the TTL enforcement procedure.
+TTL requires:
+- A [designated timestamp](/docs/concepts/designated-timestamp/) column
+- [Partitioning](/docs/concepts/partitions/) enabled
-To demonstrate, assume we have created a table partitioned by hour, with TTL set
-to one hour:
+These are standard for time-series tables in QuestDB.
+
+## Setting TTL
+
+### At table creation
```questdb-sql
-CREATE TABLE tango (ts TIMESTAMP) timestamp (ts) PARTITION BY HOUR TTL 1 HOUR;
--- or:
-CREATE TABLE tango (ts TIMESTAMP) timestamp (ts) PARTITION BY HOUR TTL 1H;
+CREATE TABLE trades (
+ ts TIMESTAMP,
+ symbol SYMBOL,
+ price DOUBLE
+) TIMESTAMP(ts) PARTITION BY DAY TTL 7 DAYS;
```
-1\. Insert the first row at 8:00 AM. This is the very beginning of the "8 AM"
-partition:
+### On existing tables
```questdb-sql
-INSERT INTO tango VALUES ('2025-01-01T08:00:00');
+ALTER TABLE trades SET TTL 7 DAYS;
```
-| ts |
-|----|
-| 2025-01-01 08:00:00.000000 |
-
-2\. Insert the second row one hour later, at 9:00 AM:
+Supported units: `HOUR`/`H`, `DAY`/`D`, `WEEK`/`W`, `MONTH`/`M`, `YEAR`/`Y`.
```questdb-sql
-INSERT INTO tango VALUES ('2025-01-01T09:00:00');
+-- These are equivalent
+ALTER TABLE trades SET TTL 2 WEEKS;
+ALTER TABLE trades SET TTL 2w;
```
-| ts |
-|----|
-| 2025-01-01 08:00:00.000000 |
-| 2025-01-01 09:00:00.000000 |
+For full syntax, see [ALTER TABLE SET TTL](/docs/query/sql/alter-table-set-ttl/).
-The 8:00 AM row remains.
+## How TTL works
-3\. Insert one more row at 9:59:59 AM:
+TTL drops partitions based on the **partition's time range**, not individual row
+timestamps. A partition is dropped only when its **entire period** falls outside
+the TTL window.
-```questdb-sql
-INSERT INTO tango VALUES ('2025-01-01T09:59:59');
-```
+**Key rule**: a partition is dropped once `partition_end_time <= reference_time - TTL`.
-| ts |
-|----|
-| 2025-01-01 08:00:00.000000 |
-| 2025-01-01 09:00:00.000000 |
-| 2025-01-01 09:59:59.000000 |
+### Reference time
-The 8:00 AM data is still there, because the "8 AM" partition ends at 9:00 AM.
+By default, the reference time is capped at the wall-clock time rather than
+taken directly from the maximum timestamp in the table. This protects against
+accidental data loss if a row with a far-future timestamp is inserted (which
+would otherwise make all existing data appear "expired").
-4\. Insert a row at 10:00 AM:
+The reference time is: `min(max_timestamp_in_table, wall_clock_time)`
-```questdb-sql
-INSERT INTO tango VALUES ('2025-01-01T10:00:00');
+To restore legacy behavior (using only max timestamp), set in `server.conf`:
+
+```ini
+cairo.ttl.use.wall.clock=false
```
-| ts |
-|----|
-| 2025-01-01 09:00:00.000000 |
-| 2025-01-01 09:59:59.000000 |
-| 2025-01-01 10:00:00.000000 |
+:::caution
+Disabling wall-clock protection means inserting a row with a future timestamp
+(e.g., year 2100) will immediately drop all partitions that fall outside the TTL
+window relative to that future time.
+:::
-Now the whole "8 AM" partition is outside its TTL limit, and has been dropped.
+### Example
-## Managing TTL
+Table partitioned by `HOUR` with `TTL 1 HOUR`:
-### Setting TTL on existing tables
+| Wall-clock time | Action | Partitions remaining |
+|-----------------|--------|---------------------|
+| 08:00 | Insert row at 08:00 | `08:00-09:00` |
+| 09:00 | Insert row at 09:00 | `08:00-09:00`, `09:00-10:00` |
+| 09:59 | Insert row at 09:59 | `08:00-09:00`, `09:00-10:00` |
+| 10:00 | Insert row at 10:00 | `09:00-10:00`, `10:00-11:00` |
-Use `ALTER TABLE SET TTL` to add or change TTL on an existing table:
+The `08:00-09:00` partition survives until 10:00 because a partition is only
+dropped once its **end time** (09:00) is at least 1 hour behind the reference
+time. At 10:00 the partition end is exactly 1 hour old, so the partition is
+dropped.
-```questdb-sql
-ALTER TABLE my_table SET TTL 3 WEEKS;
+## Checking TTL settings
--- Shorthand syntax also supported
-ALTER TABLE my_table SET TTL 12h;
+```questdb-sql
+SELECT table_name, ttlValue, ttlUnit FROM tables();
```
-For full syntax details, see the
-[ALTER TABLE SET TTL](/docs/query/sql/alter-table-set-ttl/) reference.
+| table_name | ttlValue | ttlUnit |
+|------------|----------|---------|
+| trades | 7 | DAY |
+| metrics | 0 | *null* |
+
+A `ttlValue` of `0` means TTL is not configured.
-### Checking current TTL settings
+## Removing TTL
-Use the `tables()` function to view TTL configuration for all tables:
+To disable automatic retention and keep all data:
```questdb-sql
-SELECT table_name, ttlValue, ttlUnit FROM tables();
+ALTER TABLE trades SET TTL 0;
```
-A `ttlValue` of `0` indicates TTL is not configured for that table.
+## Guidelines
+
+| Data type | Typical TTL | Rationale |
+|-----------|-------------|-----------|
+| Real-time metrics | 1-7 days | High volume, recent data most valuable |
+| Trading data | 30-90 days | Compliance requirements vary |
+| Aggregated data | 1-2 years | Lower volume, longer analysis windows |
+| Audit logs | Per compliance | Often legally mandated retention |
+
+**Tips:**
+- Match TTL to your longest typical query range plus a buffer
+- TTL should be significantly larger than your partition interval (see the
+  example below)
+- For manual control instead of automatic TTL, see
+ [Data Retention](/docs/operations/data-retention/)
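+
+Putting the guidelines together - a high-volume metrics table with hourly
+partitions and a one-week retention window (names are illustrative):
+
+```questdb-sql
+CREATE TABLE metrics (
+  ts TIMESTAMP,
+  host SYMBOL,
+  cpu DOUBLE
+) TIMESTAMP(ts) PARTITION BY HOUR TTL 7 DAYS;
+```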
diff --git a/documentation/deployment/aws.md b/documentation/deployment/aws.md
index 39eb2e6b4..c2e9ac0a8 100644
--- a/documentation/deployment/aws.md
+++ b/documentation/deployment/aws.md
@@ -1,241 +1,281 @@
---
-title: Deploying to Amazon Web Services (AWS)
+title: Deploying QuestDB on AWS
sidebar_label: AWS
description:
- This document explains what to hardware to use, and how to provision QuestDB on Amazon Web Services (AWS).
+ Deploy QuestDB on Amazon Web Services using EC2, with instance sizing, storage, and networking recommendations.
---
-import FileSystemChoice from "../../src/components/DRY/_questdb_file_system_choice.mdx"
-import MinimumHardware from "../../src/components/DRY/_questdb_production_hardware-minimums.mdx"
import InterpolateReleaseData from "../../src/components/InterpolateReleaseData"
import CodeBlock from "@theme/CodeBlock"
+## Quick reference
-## Hardware recommendations
+| Component | Recommended | Notes |
+|-----------|-------------|-------|
+| Instance | `m7i.xlarge` or `r7i.2xlarge` | 4-8 vCPUs, 16-64 GiB RAM |
+| Storage | `gp3`, 200+ GiB | 16000 IOPS / 1000 MBps |
+| File system | `zfs` with `lz4` | Or `ext4` if compression not needed |
+| Ports | 9000, 8812, 9009, 9003 | Restrict to known IPs only |
-<MinimumHardware />
+---
-### Elastic Compute Cloud (EC2) with Elastic Block Storage (EBS)
+## Infrastructure
-We recommend starting with `M8` instances, with an upgrade to
-`R8` instances if extra RAM is needed. You can use either `i` (Intel) or `a` (AMD) instances.
+Plan your infrastructure before launching. This section covers instance types,
+storage, and networking requirements.
-These should be deployed with an `x86_64` Linux distribution, such as Ubuntu.
+### Instance sizing
-For storage, we recommend using `gp3` disks, as these provide a better price-to-performance
-ratio compared to `gp2` or `io1` offerings.`5000 IOPS/300 MBps` is a good starting point until
-you have tested your workload.
+| Workload | Instance | vCPUs | RAM | Use case |
+|----------|----------|-------|-----|----------|
+| Development | `m7i.large` | 2 | 8 GiB | Testing, small datasets |
+| Production (starter) | `m7i.xlarge` | 4 | 16 GiB | Light ingestion, moderate queries |
+| Production (standard) | `r7i.2xlarge` | 8 | 64 GiB | High ingestion, complex queries |
+| Production (heavy) | `r7i.4xlarge` | 16 | 128 GiB | Heavy workloads, large datasets |
-<FileSystemChoice />
+**Choosing an instance family:**
-### Elastic File System (EFS)
+- **`m7i` / `m7a`** - Balanced compute and memory. Good starting point.
+- **`r7i` / `r7a`** - Memory-optimized. Better for large datasets or complex queries.
+- **`m8i` / `r8i`** - Latest generation. Best performance if available in your region.
-QuestDB **does not** support `EFS` for its primary storage. Do not use it instead of `EBS`.
+Intel (`i`) and AMD (`a`) variants perform similarly. Choose based on
+availability and pricing.
-You can use it as object store, but we would recommend using `S3` instead, as a simpler,
-and cheaper, alternative.
+**ARM instances (Graviton):**
-### Simple Storage Service (S3)
+Graviton instances (`r7g`, `r8g`) cost less and perform well for ingestion.
+However, queries using JIT compilation or SIMD vectorization run slower on ARM.
+Choose Graviton when your workload is primarily ingestion or cost is a priority.
-QuestDB supports `S3` as its replication object-store in the Enterprise edition.
+**Storage-optimized instances:**
-This requires very little provisioning - simply create a bucket or virtual subdirectory and follow
-the [Enterprise Quick Start](/docs/getting-started/enterprise-quick-start/) steps to configure replication.
+Instances with local NVMe (`i7i`, `i8i`) provide the fastest disk I/O but lose
+their data on termination. Only use them with QuestDB Enterprise, which
+replicates data to S3.
-### Minimum specification
+### Storage
-- **Instance**: `m8i.xlarge` or `m8a.xlarge` `(4 vCPUs, 16 GiB RAM)`
-- **Storage**
- - **OS disk**: `gp3 (30 GiB)` volume provisioned with `3000 IOPS/125 MBps`.
- - **Data disk**: `gp3 (100 GiB)` volume provisioned with `3000 IOPS/125 MBps`.
-- **Operating System**: `Linux Ubuntu 24.04 LTS x86_64`.
-- **File System**: `ext4`
+**EBS configuration:**
-### Better specification
+| Workload | Volume | Size | IOPS | Throughput |
+|----------|--------|------|------|------------|
+| Development | `gp3` | 50 GiB | 3000 | 125 MBps |
+| Production | `gp3` | 200+ GiB | 16000 | 1000 MBps |
+| High I/O | `gp3` | 500+ GiB | 16000+ | 1000+ MBps |
-- **Instance**: `r8i.2xlarge` or `r8a.2xlarge` `(8 vCPUs, 64 GiB RAM)`
-- **Storage**
- - **OS disk**: `gp3 (30 GiB)` volume provisioned with `5000 IOPS/300 MBps`.
- - **Data disk**: `gp3 (300 GiB)` volume provisioned with `5000 IOPS/300 MBps`.
-- **Operating System**: `Linux Ubuntu 24.04 LTS x86_64`.
-- **File System**: `zfs` with `lz4` compression.
+Use `gp3` volumes. They offer better price-performance than `gp2` or `io1`.
+Separate your OS disk (30 GiB) from your data disk.
:::note
-
-If the above instance types are not available in your region, then simply downgrade to an earlier version i.e. `8 -> 7 -> 6`.
-
+EBS throughput is limited by instance type. Smaller instances cannot sustain
+high IOPS or throughput regardless of volume provisioning. Check your instance's
+EBS bandwidth limits in the [AWS documentation](https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html)
+before provisioning storage.
:::
-### AWS Graviton
+**File system:**
-QuestDB can also be deployed on AWS Graviton (ARM) instances, which have a strong price-to-performance ratio.
+Use `zfs` with `lz4` compression to reduce storage costs. If you don't need
+compression, `ext4` or `xfs` offer slightly better performance.
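+
+A minimal sketch of setting up the data disk with `lz4` (pool and device names
+are illustrative):
+
+```bash
+sudo zpool create questdb-pool /dev/nvme1n1
+sudo zfs set compression=lz4 questdb-pool
+sudo zfs set mountpoint=/var/lib/questdb questdb-pool
+```
+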
-For example, `r8g` instances are cheaper than `r6i` instances, and will offer superior performance for most Java-centric code.
-Queries which rely on the `JIT` compiler (native WHERE filters) or vectorisation optimisations will potentially run slower.
-Ingestion speed is generally unaffected.
+**Unsupported storage:**
-Therefore, if your use case is ingestion-centric, or your queries do not heavily leverage SIMD/JIT, `r8g` instances
-may offer better performance and better value overall.
+- **EFS** - Not supported. Network latency is too high for database workloads.
+- **S3** - Not supported as primary storage. Use for replication (Enterprise only).
-### Storage Optimised Instances (Enterprise)
+### Networking
-AWS offers storage-optimised instances (e.g. `i7i`), which include locally-attached NVMe devices. Workloads which
-are disk-limited (for example, heavy out-of-order writes) will benefit significantly from the faster storage.
+**Security group rules:**
-However, it is not recommended to use locally-attached NVMe on QuestDB OSS, as instance termination or failure
-will lead to data loss. QuestDB Enterprise replicates data eagerly to object storage (`S3`), preserving
-data in the event of an instance failure, and can therefore can safely leverage the faster disks.
+| Port | Protocol | Source | Purpose |
+|------|----------|--------|---------|
+| 22 | TCP | Your IP | SSH access |
+| 9000 | TCP | Your IP / VPC | Web Console & REST API |
+| 8812 | TCP | Your IP / VPC | PostgreSQL wire protocol |
+| 9009 | TCP | Application servers | InfluxDB line protocol |
+| 9003 | TCP | Monitoring servers | Health check & Prometheus |
-## Launching QuestDB on EC2
+:::warning
+Never expose ports 9000, 8812, or 9009 to `0.0.0.0/0`. Restrict access to known
+IP ranges or use a bastion host.
+:::
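+
+As a sketch, creating one of the rules above with the AWS CLI (group ID and
+CIDR are placeholders):
+
+```bash
+aws ec2 authorize-security-group-ingress \
+  --group-id sg-0123456789abcdef0 \
+  --protocol tcp \
+  --port 9000 \
+  --cidr 203.0.113.0/24
+```
+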
-Once you have provisioned your `EC2` instance with attached `EBS` storage, you can simply
-follow the setup instructions for a [Docker](docker.md) or [systemd](systemd.md) installation.
+**VPC recommendations:**
-You can also keep it simple - just [download](https://questdb.com/download/) the binary and run it directly.
-QuestDB is a single self-contained binary and easy to deploy.
+- Deploy QuestDB in a private subnet
+- Use a NAT gateway for outbound access (package updates, etc.)
+- Use VPC endpoints for S3 if using Enterprise replication
+- Consider placement groups for low-latency application access
-## Launching QuestDB on the AWS Marketplace
+---
-[AWS Marketplace](https://aws.amazon.com/marketplace) is a digital catalog with software listings from independent
-software vendors that runs on AWS. This guide describes how to launch QuestDB
-via the AWS Marketplace using the official listing. This document also describes
-usage instructions after you have launched the instance, including hints for
-authentication, the available interfaces, and tips for accessing the REST API
-and [Web Console](/docs/getting-started/web-console/overview/).
+## Deployment
-The QuestDB listing can be found in the AWS Marketplace under the databases
-category. To launch a QuestDB instance:
+Choose your deployment method:
-1. Navigate to the
- [QuestDB listing](https://aws.amazon.com/marketplace/search/results?searchTerms=questdb)
-2. Click **Continue to Subscribe** and subscribe to the offering
-3. **Configure** a version, an AWS region and click **Continue to** **Launch**
-4. Choose an instance type and network configuration and click **Launch**
+- **[AWS Marketplace](#aws-marketplace)** - Pre-configured AMI, fastest setup
+- **[Manual EC2](#manual-ec2)** - Full control, use your own AMI
-An information panel displays the ID of the QuestDB instance with launch
-configuration details and hints for locating the instance in the EC2 console.
+### AWS Marketplace
-The default user is `admin` and password is `quest` to log in to the Web Console.
+The QuestDB AMI comes pre-configured and ready to run.
-## QuestDB configuration
+**Steps:**
-Connect to the instance where QuestDB is deployed using SSH. The server
-configuration file is at the following location on the AMI:
+1. Go to the [QuestDB Marketplace listing](https://aws.amazon.com/marketplace/search/results?searchTerms=questdb)
+2. Click **Continue to Subscribe** and accept terms
+3. Click **Continue to Configure**, select your region
+4. Click **Continue to Launch**
+5. Select instance type, VPC, subnet, and security group
+6. Click **Launch**
-```bash
-/var/lib/questdb/conf/server.conf
-```
-
-For details on the server properties and using this file, see the
-[server configuration documentation](/docs/configuration/overview/).
+**After launch:**
-The default ports used by QuestDB interfaces are as follows:
+Connect to the Web Console at `http://<instance-public-ip>:9000`.
-- [Web Console](/docs/getting-started/web-console/overview/) & REST API is available on port `9000`
-- PostgreSQL wire protocol available on `8812`
-- InfluxDB line protocol `9009` (TCP and UDP)
-- Health monitoring & Prometheus `/metrics` `9003`
+Default credentials:
+- **Web Console**: `admin` / `quest`
+- **PostgreSQL**: `admin` / random (check `/var/lib/questdb/conf/server.conf`)
-### Postgres credentials
+:::warning
+Change default credentials immediately. See [Security](#security) below.
+:::
-Generated credentials can be found in the server configuration file:
+**Configuration file location:**
-```bash
+```
/var/lib/questdb/conf/server.conf
```
-The default Postgres username is `admin` and a password is randomly generated
-during startup:
+### Manual EC2
-```ini
-pg.user=admin
-pg.password=...
-```
+Deploy QuestDB on any EC2 instance you configure yourself.
-To use the credentials that are randomly generated and stored in the
-`server.conf`file, restart the database using the command
-`sudo systemctl restart questdb`.
+**Steps:**
-### InfluxDB line protocol credentials
+1. Launch an EC2 instance with your preferred AMI (Ubuntu 22.04+ recommended)
+2. Attach a `gp3` EBS volume for data
+3. Configure the security group per the [Networking](#networking) section
+4. SSH into the instance
+5. Install QuestDB via [Docker](/docs/deployment/docker/) or [systemd](/docs/deployment/systemd/)
-The credentials for InfluxDB line protocol can be found at
+You can also download the binary directly:
```bash
-/var/lib/questdb/conf/full_auth.json
+curl -L https://questdb.com/download -o questdb.tar.gz
+tar xzf questdb.tar.gz
+cd questdb-*/bin
+./questdb.sh start
```
-For details on authentication using this protocol, see the
-[InfluxDB line protocol authentication guide](/docs/ingestion/ilp/overview/#authentication).
+---
+
+## Security
-### Disabling authentication
+### Change default credentials
-If you would like to disable authentication for Postgres wire protocol or
-InfluxDB line protocol, comment out the following lines in the server
-configuration file:
+Update credentials immediately after deployment.
-```ini title="/var/lib/questdb/conf/server.conf"
-# pg.password=...
+**Web Console and REST API** - edit `server.conf`:
-# line.tcp.auth.db.path=conf/auth.txt
+```ini
+http.user=your_username
+http.password=your_secure_password
```
-### Disabling interfaces
+**PostgreSQL** - edit `server.conf`:
-Interfaces may be **disabled completely** with the following configuration:
+```ini
+pg.user=your_username
+pg.password=your_secure_password
+```
-```ini title="/var/lib/questdb/conf/server.conf"
-# disable postgres
-pg.enabled=false
+**InfluxDB line protocol** - edit `conf/auth.txt`. See
+[ILP authentication](/docs/ingestion/ilp/overview/#authentication).
-# disable InfluxDB line protocol over TCP and UDP
-line.tcp.enabled=false
-line.udp.enabled=false
+Restart after changes:
-# disable HTTP (web console and REST API)
-http.enabled=false
+```bash
+sudo systemctl restart questdb
```
-The HTTP interface may alternatively be set to **readonly**:
+### Disable unused interfaces
-```ini title="/var/lib/questdb/conf/server.conf"
-# set HTTP interface to readonly
-http.security.readonly=true
-```
+Reduce attack surface by disabling protocols you don't use:
-## Upgrading QuestDB
+```ini title="server.conf"
+pg.enabled=false # Disable PostgreSQL
+line.tcp.enabled=false # Disable ILP
+http.enabled=false # Disable Web Console & REST API
+http.security.readonly=true # Or make HTTP read-only
+```
-:::note
+---
-- Check the [release notes](https://github.com/questdb/questdb/releases) and
- ensure that necessary [backup](/docs/operations/backup/) is completed.
+## Operations
-:::
+### Upgrading
-You can perform the following steps to upgrade your QuestDB version on an
-official AWS QuestDB AMI:
+**Marketplace AMI:**
-- Stop the service:
+1. Stop QuestDB:
+ ```bash
+ sudo systemctl stop questdb
+ ```
-```shell
-systemctl stop questdb.service
-```
+2. Back up data:
+ ```bash
+ sudo cp -r /var/lib/questdb /var/lib/questdb.backup
+ ```
-- Download and copy over the new binary
+3. Download new version:
 <InterpolateReleaseData
   renderText={(release) => (
     <CodeBlock className="language-bash">
-
-{`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-no-jre-bin.tar.gz \\
-tar xzvf questdb-${release.name}-no-jre-bin.tar.gz
-cp questdb-${release.name}-no-jre-bin/questdb.jar /usr/local/bin/questdb.jar
-cp questdb-${release.name}-no-jre-bin/questdb.jar /usr/local/bin/questdb-${release.name}.jar`}
+
+{`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-no-jre-bin.tar.gz
+tar xzf questdb-${release.name}-no-jre-bin.tar.gz
+sudo cp questdb-${release.name}-no-jre-bin/questdb.jar /usr/local/bin/questdb.jar`}
     </CodeBlock>
   )}
 />
-- Restart the service again:
+4. Restart:
+ ```bash
+ sudo systemctl start questdb
+ ```
+
+**Manual deployments:** Follow upgrade steps for [Docker](/docs/deployment/docker/)
+or [systemd](/docs/deployment/systemd/).
-```shell
-systemctl restart questdb.service
-systemctl status questdb.service
+### Monitoring
+
+**Health check:**
+
+```bash
+curl http://localhost:9003/status
```
+
+**Prometheus metrics:**
+
+```bash
+curl http://localhost:9003/metrics
+```
+
+**CloudWatch integration:**
+
+Use the CloudWatch agent to collect:
+- System metrics (CPU, memory, disk I/O)
+- QuestDB logs from `/var/lib/questdb/log/`
+- Custom metrics scraped from the Prometheus endpoint
+
+---
+
+## Enterprise on AWS
+
+QuestDB Enterprise adds production features for AWS:
+
+- **S3 replication** - Continuous backup for durability
+- **Cold storage** - Move old partitions to S3, query on-demand
+- **High availability** - Automatic failover across instances
+
+See [Enterprise Quick Start](/docs/getting-started/enterprise-quick-start/).
diff --git a/documentation/deployment/azure.md b/documentation/deployment/azure.md
index d6b91f2ca..000dc923d 100644
--- a/documentation/deployment/azure.md
+++ b/documentation/deployment/azure.md
@@ -1,230 +1,342 @@
---
-title: Deploying to Microsoft Azure
+title: Deploying QuestDB on Azure
sidebar_label: Azure
description:
- This document explains what to hardware to use, and how to provision QuestDB on Microsoft Azure.
+ Deploy QuestDB on Microsoft Azure using Virtual Machines, with instance sizing, storage, and networking recommendations.
---
-import FileSystemChoice from "../../src/components/DRY/_questdb_file_system_choice.mdx"
-import MinimumHardware from "../../src/components/DRY/_questdb_production_hardware-minimums.mdx"
+import Screenshot from "@theme/Screenshot"
+import InterpolateReleaseData from "../../src/components/InterpolateReleaseData"
+import CodeBlock from "@theme/CodeBlock"
-## Hardware recommendations
+## Quick reference
-<MinimumHardware />
+| Component | Recommended | Notes |
+|-----------|-------------|-------|
+| Instance | `D4s_v5` or `E8s_v5` | 4-8 vCPUs, 16-64 GiB RAM |
+| Storage | Premium SSD v2, 200+ GiB | 16000 IOPS / 1000 MBps |
+| File system | `zfs` with `lz4` | Or `ext4` if compression not needed |
+| Ports | 9000, 8812, 9009, 9003 | Restrict to known IPs only |
-### Azure Virtual Machines with Azure Managed Disk
-
-Azure Virtual Machines have a naming convention that is handy for finding compatible instances.
+---
-**Do not** use instances with the letter `p`. These are `ARM` architecture instances, usually running
-on `Cobalt` chips.
+## Infrastructure
-**Do** use instances with the letter `s`. This indicates that it is compatible with `Premium SSD` storage,
-preferred for QuestDB.
+Plan your infrastructure before launching. This section covers instance types,
+storage, and networking requirements.
-Either `AMD EPYC` CPUs (`a` letter) or `Intel Xeon` (no letter) are appropriate for `x86_64` deployments.
+### Instance sizing
-We recommend starting with `D-series` instances, and then later upgrading to `E-series` if necessary i.e. for more RAM.
+| Workload | Instance | vCPUs | RAM | Use case |
+|----------|----------|-------|-----|----------|
+| Development | `D2s_v5` | 2 | 8 GiB | Testing, small datasets |
+| Production (starter) | `D4s_v5` | 4 | 16 GiB | Light ingestion, moderate queries |
+| Production (standard) | `E8s_v5` | 8 | 64 GiB | High ingestion, complex queries |
+| Production (heavy) | `E16s_v5` | 16 | 128 GiB | Heavy workloads, large datasets |
-You should deploy using an `x86_64` Linux distribution, such as Ubuntu.
+**Understanding Azure instance names:**
-For storage, we recommend using [Premium SSD v2](https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-v2) disks,
-and provisioning them at `5000 IOPS/300 MBps` until you have tested your workload.
+| Letter | Meaning | Recommendation |
+|--------|---------|----------------|
+| `D` | General purpose | Good starting point |
+| `E` | Memory optimized | Better for large datasets |
+| `s` | Premium storage capable | **Required** for QuestDB |
+| `a` | AMD EPYC processor | Similar performance, often cheaper |
+| `p` | ARM architecture | **Avoid** - limited optimization support |
-:::note
+Always choose instances with `s` in the name for Premium SSD support.
-`Premium SSD v2` disks only support locally-redundant storage (LRS). For Enterprise users, this
-is not an issue, as your data is secured using replication over Azure Blob Storage.
+**ARM instances:**
-For open-source users, you may want to:
+Azure ARM instances (Cobalt, Ampere) are not recommended. QuestDB's JIT
+compilation and SIMD optimizations are limited on ARM. Use `x86_64` instances.
-- downgrade to `Premium SSD` storage, which supports zone-redundant storage (ZRS).
-- or publish to multiple instances
-- or take frequent ZRS snapshots of your LRS disk.
-
-:::
+### Storage
-<FileSystemChoice />
+**Premium SSD v2 (recommended):**
-:::warning
+| Workload | Size | IOPS | Throughput |
+|----------|------|------|------------|
+| Development | 50 GiB | 3000 | 125 MBps |
+| Production | 200+ GiB | 16000 | 1000 MBps |
+| High I/O | 500+ GiB | 16000+ | 1000+ MBps |
-QuestDB does **not** support `blobfuse2`. Please use the above recommendations, or refer to [capacity planning](/docs/getting-started/capacity-planning/)
+Premium SSD v2 lets you provision IOPS and throughput independently of size.
+Separate your OS disk (30 GiB) from your data disk.
+:::note
+Premium SSD v2 throughput is limited by VM size. Check your instance's
+maximum disk throughput in the
+[Azure documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes)
+before provisioning.
:::
-### Azure NetApp Files
+**Premium SSD (alternative):**
-Azure NetAppFiles is a volume-as-a-service (VaaS) offering from Microsoft, supporting an NFS API.
+If Premium SSD v2 is unavailable, use Premium SSD with these minimum sizes:
-This should **not** be used as primary storage for QuestDB, but could be used as an object store for Enterprise replication.
+| Tier | Size | IOPS | Throughput | Use case |
+|------|------|------|------------|----------|
+| P20 | 512 GiB | 2300 | 150 MBps | Development |
+| P30 | 1 TiB | 5000 | 200 MBps | Light production |
+| P40 | 2 TiB | 7500 | 250 MBps | Production |
-We would recommend using `Azure Blob Storage` instead as a simpler, and cheaper, alternative.
+Premium SSD ties performance to disk size - you may need to over-provision
+capacity to get required IOPS.
-### Azure Blob Storage
+**Redundancy considerations:**
-QuestDB supports `Azure Blob Storage` as its replication object-store in the Enterprise edition.
+- Premium SSD v2 only supports locally-redundant storage (LRS)
+- Premium SSD supports zone-redundant storage (ZRS)
+- For LRS disks, take regular ZRS snapshots or use QuestDB Enterprise replication
-To get started, use `Azure Storage Explorer` to create new `Blob Container`, and then follow the
-[Enterprise Quick Start](/docs/getting-started/enterprise-quick-start/) steps to create a connection string and
-configure QuestDB.
+**File system:**
-### Minimum specification
+Use `zfs` with `lz4` compression to reduce storage costs. If you don't need
+compression, `ext4` or `xfs` offer slightly better performance.
-- **Instance**: `D4as v5` or `D4s v5` `(4 vCPUs, 16 GiB RAM)`
-- **Storage**
- - **OS disk**: `Premium SSD v2 (30 GiB)` volume provisioned with `3000 IOPS/125 MBps`.
- - **Data disk**: `Premium SSD v2 (100 GiB)` volume provisioned with `3000 IOPS/125 MBps`.
-- **Operating System**: `Linux Ubuntu 24.04 LTS x86_64`.
-- **File System**: `ext4`
+**Unsupported storage:**
-:::note
+- **Azure NetApp Files** - Not supported as primary storage (NFS latency too high)
+- **blobfuse2** - Not supported for database workloads
+- **Blob Storage** - Supported for Enterprise replication only, not primary storage
-If you use `Premium SSD` instead of `Premium SSD v2`, you should start with a `P20` size (`512 GiB`).
-This offers `2300 IOPS/150 MBps` which should be enough for basic workloads.
+### Networking
-:::
+**Network Security Group (NSG) rules:**
-### Better specification
+| Port | Protocol | Source | Purpose |
+|------|----------|--------|---------|
+| 22 | TCP | Your IP | SSH access |
+| 9000 | TCP | Your IP / VNet | Web Console & REST API |
+| 8812 | TCP | Your IP / VNet | PostgreSQL wire protocol |
+| 9009 | TCP | Application servers | InfluxDB line protocol |
+| 9003 | TCP | Monitoring servers | Health check & Prometheus |
-- **Instance**: `E8as v5`or `E8s v5` `(8 vCPUs, 64 GiB RAM)`
-- **Storage**
- - **OS disk**: `Premium SSD v2 (30 GiB)` volume provisioned with `5000 IOPS/300 MBps`.
- - **Data disk**: `Premium SSD v2 (300 GiB)` volume provisioned with `5000 IOPS/300 MBps`.
-- **Operating System**: `Linux Ubuntu 24.04 LTS x86_64`.
-- **File System**: `zfs`
+:::warning
+Never set source to `*` or `Any` for ports 9000, 8812, or 9009. Restrict access
+to known IP ranges or use Azure Bastion for secure access.
+:::
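+
+A sketch using the Azure CLI (resource group, NSG name, and source CIDR are
+placeholders):
+
+```bash
+az network nsg rule create \
+  --resource-group questdb-rg \
+  --nsg-name questdb-nsg \
+  --name questdb \
+  --priority 1000 \
+  --access Allow \
+  --protocol Tcp \
+  --destination-port-ranges 9000 8812 9003 \
+  --source-address-prefixes 203.0.113.0/24
+```
+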
-:::note
+**VNet recommendations:**
-If you use `Premium SSD` instead of `Premium SSD v2`, you should upgrade to a `P30` size disk (`1 TiB`).
-This offers `5000 IOPS/200 MBps` which should be enough for higher workloads.
+- Deploy QuestDB in a private subnet
+- Use Azure Bastion or a jump box for SSH access
+- Use Private Endpoints for Blob Storage (Enterprise replication)
+- Consider proximity placement groups for low-latency application access
-:::
+---
-## Launching QuestDB on Azure Virtual Machines
+## Deployment
-This guide demonstrates how to spin up a Microsoft Azure Virtual Machine that is
-running QuestDB on Ubuntu. This will help get you comfortable with Azure VM
-basics.
+Deploy QuestDB on an Azure Virtual Machine.
### Prerequisites
-- A [Microsoft Azure account](https://azure.microsoft.com/) with billing
- enabled. Adding a credit card is required to create an account, but this demo
- will only use resources in the free tier.
-
-### Create an Azure VM
+- [Microsoft Azure account](https://azure.microsoft.com/) with billing enabled
+- SSH key pair for secure access
-1. In the Azure console, navigate to the **Virtual Machines** page. Once you are
- on this page, click the **Create** dropdown in the top left-hand corner of
- the screen and select the **Azure virtual machine** option.
+### Create the VM
-2. From here, fill out the required options. If you don't already have a
- **Resource group**, you can create one on this page. We made a "default"
- group for this example, but you are free to choose any name you like. Enter
- the name of your new virtual machine, as well as its desired Region and
- Availability Zone. Your dialog should look something like this:
+1. In the Azure Portal, navigate to **Virtual Machines**
+2. Click **Create** → **Azure virtual machine**
+3. Configure basics:
+ - Select or create a **Resource group**
+ - Enter a **Virtual machine name**
+ - Select your **Region** and **Availability zone**
+ - Choose **Ubuntu 24.04 LTS** for the image
-3. Scroll down and select your desired instance type. In this case, we used a
- `Standard_B1s` to take advantage of Azure's free tier.
-4. If you don't already have one, create a new SSH key pair to securely connect
- to the instance once it has been created.
+4. Select your instance size (see [Instance sizing](#instance-sizing))
+5. Configure SSH authentication:
+ - Select **SSH public key**
+ - Create a new key pair or use existing
-5. We will use Azure defaults for the rest of the VM's settings. Click
- **Review + create** to confirm your settings, then **Create** to download
- your new key pair and launch the instance.
+6. Click **Review + create**, then **Create**
+7. Download the private key when prompted
-Once you see this screen, click the **Go to resource** button and move on to the
-next section
+### Configure networking
-### Set up networking
-
-We now need to set up the appropriate firewall rules which will allow you to
-connect to your new QuestDB instance over the several protocols that we support.
-
-1. In the **Settings** sidebar, click the **Networking** button. This will lead
- you to a page with all firewall rules for your instance. To open up the
- required ports, click the **Add inbound port rule** on the right-hand side.
-2. Change the **Destination port ranges** to the `8812,9000,9003`, set the
- **Protocol** to `TCP`, change the name to `questdb`, and click the **Add**
- button. This will add the appropriate ingress rules to your instance's
- firewall. It may take a few seconds, and possibly a page refresh, but you
- should see your new firewall rule in the list. Port 8812 is used for the
- postgresql protocol, port 9000 is used for the web interface, the REST API,
- and ILP ingestion over HTTP. Port 9003 is used for metrics and health check.
+1. Go to your VM's **Networking** settings
+2. Click **Add inbound port rule**
+3. Add rules for QuestDB ports (see [Networking](#networking)):
+ - Set **Destination port ranges** to `9000,8812,9003`
+ - Set **Source** to your IP range (not `Any`)
+ - Set **Protocol** to `TCP`
+ - Name the rule `questdb`
-### Install QuestDB
+:::warning
+Only add port 9009 if you need ILP ingestion, and restrict the source to your
+application servers.
+:::
-Now that you've opened up the required ports, it's time to install and run
-QuestDB. To do this, you first need to connect to your instance over SSH. Since
-we named our SSH key `questdb_key`, this is the filename that the commands below
-use. You should substitute this with your own key name that you downloaded in
-the previous step. You also need to use your VM's external IP address instead of
-the placeholder that we have provided.
+### Install QuestDB
-We first need to adjust the permissions on the downloaded file, and then use it
-to ssh into your instance.
+1. Connect via SSH:
```bash
-export YOUR_INSTANCE_IP=172.xxx.xxx.xxx
-chmod 400 ~/download/questdb_key.pem
-ssh -i ~/download/questdb_key.pem azureuser@$YOUR_INSTANCE_IP
+chmod 400 ~/Downloads/your_key.pem
+ssh -i ~/Downloads/your_key.pem azureuser@<your-vm-ip>
```
-Once we've connected to the instance, we will use `wget`
-to download the QuestDB binary, extract it, and run the start script. Please visit
-the Ubuntu section at the [binary installation page](/download/) to make sure you are using the latest
-version of the binary package and replace the URL below as appropriate.
+2. Download and start QuestDB:
 <InterpolateReleaseData
   renderText={(release) => (
     <CodeBlock className="language-bash">
 {`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-rt-linux-x86-64.tar.gz
-tar -xvf questdb-${release.name}-rt-linux-x86-64.tar.gz
+tar xzf questdb-${release.name}-rt-linux-x86-64.tar.gz
 cd questdb-${release.name}-rt-linux-x86-64/bin
 ./questdb.sh start`}
     </CodeBlock>
   )}
 />
-Once you've run these commands, you should be able to navigate to your instance
-at its IP on port 9000: `http://$YOUR_INSTANCE_IP:9000`
+3. Access the Web Console at `http://<your-vm-ip>:9000`
+
+For production deployments, use [systemd](/docs/deployment/systemd/) to manage
+the QuestDB service.
+
+---
+
+## Security
+
+### Change default credentials
+
+Update credentials immediately after deployment.
+
+**Web Console and REST API** - edit `conf/server.conf`:
+
+```ini
+http.user=your_username
+http.password=your_secure_password
+```
+
+**PostgreSQL** - edit `conf/server.conf`:
+
+```ini
+pg.user=your_username
+pg.password=your_secure_password
+```
+
+**InfluxDB line protocol** - edit `conf/auth.txt`. See
+[ILP authentication](/docs/ingestion/ilp/overview/#authentication).
+
+Restart after changes:
+
+```bash
+./questdb.sh stop
+./questdb.sh start
+```
+
+### Disable unused interfaces
+
+Reduce attack surface by disabling protocols you don't use:
+
+```ini title="conf/server.conf"
+pg.enabled=false # Disable PostgreSQL
+line.tcp.enabled=false # Disable ILP
+http.enabled=false # Disable Web Console & REST API
+http.security.readonly=true # Or make HTTP read-only
+```
+
+---
+
+## Operations
+
+### Upgrading
+
+1. Stop QuestDB:
+ ```bash
+ ./questdb.sh stop
+ ```
+
+2. Back up your data directory
+
+3. Download and extract the new version:
+
+
+   <InterpolateReleaseData
+     renderText={(release) => (
+       <CodeBlock className="language-bash">
+
+{`wget https://github.com/questdb/questdb/releases/download/${release.name}/questdb-${release.name}-rt-linux-x86-64.tar.gz
+tar xzf questdb-${release.name}-rt-linux-x86-64.tar.gz`}
+
+       </CodeBlock>
+     )}
+   />
+4. Start the new version:
+ ```bash
+ cd questdb-*/bin
+ ./questdb.sh start
+ ```
+
+### Monitoring
+
+**Health check:**
+
+```bash
+curl http://localhost:9003/status
+```
+
+**Prometheus metrics:**
+
+```bash
+curl http://localhost:9003/metrics
+```
+
+**Azure Monitor integration:**
+
+Use the Azure Monitor agent to collect:
+- VM metrics (CPU, memory, disk I/O)
+- QuestDB logs from the `log/` directory
+- Custom metrics from the Prometheus endpoint
+
+---
+
+## Enterprise on Azure
+
+QuestDB Enterprise adds production features for Azure:
+
+- **Blob Storage replication** - Continuous backup for durability
+- **Cold storage** - Move old partitions to Blob Storage, query on-demand
+- **High availability** - Automatic failover across instances
+- **EntraID SSO** - Single sign-on with Microsoft Entra ID
+
+For EntraID integration, see the
+[Microsoft EntraID OIDC guide](/docs/security/oidc/#microsoft-entraid).
-If you are using EntraID to manage users, [QuestDB enterprise](/enterprise/) offers the possibility to do Single Sign On and manage your database permissions.
-See more information at the [Microsoft EntraID OIDC guide](/docs/security/oidc/#microsoft-entraid).
+See [Enterprise Quick Start](/docs/getting-started/enterprise-quick-start/) for setup.
diff --git a/documentation/getting-started/capacity-planning.md b/documentation/getting-started/capacity-planning.md
index febb742d4..dba569149 100644
--- a/documentation/getting-started/capacity-planning.md
+++ b/documentation/getting-started/capacity-planning.md
@@ -151,7 +151,7 @@ into 2 parts:
- Suffix (including the merged row):`2023-01-01T75959-999999.2` with 1,001 rows
See
-[Splitting and squashing time partitions](/docs/concepts/partitions/#splitting-and-squashing-time-partitions)
+[Partition splitting and squashing](/docs/concepts/partitions/#partition-splitting-and-squashing)
for more information.
## CPU and RAM configuration
diff --git a/documentation/query/rest-api.md b/documentation/query/rest-api.md
index dc771cf25..fdacb9934 100644
--- a/documentation/query/rest-api.md
+++ b/documentation/query/rest-api.md
@@ -179,7 +179,7 @@ Content-Type with following optional URL parameters which must be URL encoded:
| `forceHeader` | No | `false` | `true` or `false`. When `false`, QuestDB will try to infer if the first line of the file is the header line. When set to `true`, QuestDB will expect that line to be the header line. |
| `name` | No | Name of the file | Name of the table to create, [see below](/docs/query/rest-api/#names). |
| `overwrite` | No | `false` | `true` or `false`. When set to true, any existing data or structure will be overwritten. |
-| `partitionBy` | No | `NONE` | See [partitions](/docs/concepts/partitions/#properties). |
+| `partitionBy` | No | `NONE` | See [partitions](/docs/concepts/partitions/#creating-partitioned-tables). |
| `o3MaxLag` | No | | Sets upper limit on the created table to be used for the in-memory out-of-order buffer. Can be also set globally via the `cairo.o3.max.lag` configuration property. |
| `maxUncommittedRows` | No | | Maximum number of uncommitted rows to be set for the created table. When the number of pending rows reaches this parameter on a table, a commit will be issued. Can be also set globally via the `cairo.max.uncommitted.rows` configuration property. |
| `skipLev` | No | `false` | `true` or `false`. Skip “Line Extra Values”, when set to true, the parser will ignore those extra values rather than ignoring entire line. An extra value is something in addition to what is defined by the header. |
diff --git a/documentation/query/sql/alter-table-squash-partitions.md b/documentation/query/sql/alter-table-squash-partitions.md
index 77deb5e3b..2e47d000a 100644
--- a/documentation/query/sql/alter-table-squash-partitions.md
+++ b/documentation/query/sql/alter-table-squash-partitions.md
@@ -8,7 +8,7 @@ Merges partition parts back into the physical partition.
This SQL keyword is designed to use for downgrading QuestDB to a version earlier
than 7.2, when
-[partition split](/docs/concepts/partitions/#splitting-and-squashing-time-partitions)
+[partition split](/docs/concepts/partitions/#partition-splitting-and-squashing)
is introduced. Squashing partition parts makes the database compatible with
earlier QuestDB versions.
diff --git a/static/images/docs/concepts/deduplication.svg b/static/images/docs/concepts/deduplication.svg
new file mode 100644
index 000000000..6776b5e80
--- /dev/null
+++ b/static/images/docs/concepts/deduplication.svg
@@ -0,0 +1,220 @@
+
diff --git a/static/images/docs/concepts/designatedTimestamp.svg b/static/images/docs/concepts/designatedTimestamp.svg
index 9e20ccdf8..d88323015 100644
--- a/static/images/docs/concepts/designatedTimestamp.svg
+++ b/static/images/docs/concepts/designatedTimestamp.svg
@@ -1 +1,167 @@
-
\ No newline at end of file
+
diff --git a/static/images/docs/concepts/partitionModel.svg b/static/images/docs/concepts/partitionModel.svg
new file mode 100644
index 000000000..a82f24575
--- /dev/null
+++ b/static/images/docs/concepts/partitionModel.svg
@@ -0,0 +1,127 @@
+
diff --git a/static/images/docs/concepts/ttl.svg b/static/images/docs/concepts/ttl.svg
new file mode 100644
index 000000000..4f83891b7
--- /dev/null
+++ b/static/images/docs/concepts/ttl.svg
@@ -0,0 +1,87 @@
+