Iceberg
You can enable the Iceberg wrapper right from the Supabase dashboard.
Apache Iceberg is a high-performance, open-source format for large analytic tables.
The Iceberg Wrapper allows you to read data from Apache Iceberg within your Postgres database.
Preparation
Before you can query Iceberg, you need to enable the Wrappers extension and store your credentials in Postgres.
Enable Wrappers
Make sure the `wrappers` extension is installed on your database:
```sql
create extension if not exists wrappers with schema extensions;
```
Enable the Iceberg Wrapper
Enable the `iceberg_wrapper` FDW:
```sql
create foreign data wrapper iceberg_wrapper
  handler iceberg_fdw_handler
  validator iceberg_fdw_validator;
```
Store your credentials (optional)
By default, Postgres stores FDW credentials inside `pg_catalog.pg_foreign_server` in plain text. Anyone with access to this table will be able to view these credentials. Wrappers is designed to work with Vault, which provides an additional level of security for storing credentials. We recommend using Vault to store your credentials.
```sql
-- Save your AWS credentials in Vault and retrieve the created
-- `aws_access_key_id` and `aws_secret_access_key`
select vault.create_secret(
  '<access key id>',            -- secret to be encrypted
  'aws_access_key_id',          -- secret name
  'AWS access key for Wrappers' -- secret description
);

select vault.create_secret(
  '<secret access key>',
  'aws_secret_access_key',
  'AWS secret access key for Wrappers'
);
```
Connecting to Iceberg
We need to provide Postgres with the credentials to connect to Iceberg. We can do this using the `create server` command.
For any server option that needs to be stored in Vault, add the prefix `vault_` to its name and use the secret ID returned by the `select vault.create_secret()` statement as the option value.
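If you did not note the secret ID when the secret was created, you can look it up by name first (a quick sketch, assuming Supabase Vault's standard `vault.secrets` table):

```sql
-- fetch the secret IDs to use as values for the vault_-prefixed options
select id, name
from vault.secrets
where name in ('aws_access_key_id', 'aws_secret_access_key');
```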
Connecting to AWS S3 Tables
```sql
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The key id saved in Vault from above
    vault_aws_access_key_id '<key_ID>',
    -- The secret id saved in Vault from above
    vault_aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- AWS S3 table bucket ARN
    aws_s3table_bucket_arn 'arn:aws:s3tables:us-east-1:204203087419:bucket/my-table-bucket'
  );
```
Connecting to Iceberg REST Catalog + AWS S3 (or compatible) storage
```sql
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    -- The key id saved in Vault from above
    vault_aws_access_key_id '<key_ID>',
    -- The secret id saved in Vault from above
    vault_aws_secret_access_key '<secret_key>',
    -- AWS region
    region_name 'us-east-1',
    -- Iceberg REST Catalog URI
    catalog_uri 'https://rest-catalog/ws',
    -- Warehouse name
    warehouse 'warehouse',
    -- AWS S3 endpoint URL, optional
    "s3.endpoint" 'https://alternative-s3-storage:8000'
  );
```
For other optional S3 settings, please refer to PyIceberg S3 Configuration.
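As an illustrative sketch only, such settings would be passed the same way as `"s3.endpoint"` above. Whether the wrapper forwards a given PyIceberg option is an assumption to verify; `"s3.region"` below is named after the PyIceberg convention:

```sql
-- illustrative: assumes PyIceberg-style "s3.*" options are forwarded
-- to the underlying S3 storage layer
create server iceberg_server_custom_s3
  foreign data wrapper iceberg_wrapper
  options (
    vault_aws_access_key_id '<key_ID>',
    vault_aws_secret_access_key '<secret_key>',
    catalog_uri 'https://rest-catalog/ws',
    warehouse 'warehouse',
    "s3.endpoint" 'https://alternative-s3-storage:8000',
    "s3.region" 'us-east-1'
  );
```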
Create a schema
We recommend creating a schema to hold all the foreign tables:
```sql
create schema if not exists iceberg;
```
Options
The full list of foreign table options is below:
- `table` - Fully qualified source table name with all namespaces in Iceberg. Required.
Entities
We can use the SQL `import foreign schema` statement to import foreign table definitions from Iceberg. For example, the SQL below automatically creates foreign tables in the `iceberg` schema.
```sql
-- create all the foreign tables from Iceberg "docs_example" namespace
import foreign schema "docs_example"
  from server iceberg_server into iceberg;

-- or, only create "readme" and "guides" foreign tables
import foreign schema "docs_example"
  limit to ("readme", "guides")
  from server iceberg_server into iceberg;

-- or, create all foreign tables except "readme"
import foreign schema "docs_example"
  except ("readme")
  from server iceberg_server into iceberg;
```
By default, the `import foreign schema` statement will silently skip all incompatible columns. Use the `strict` option to prevent this behavior. For example,
```sql
import foreign schema "docs_example"
  from server iceberg_server into iceberg
  options (
    -- this will fail the 'import foreign schema' statement when an Iceberg
    -- table column cannot be mapped to Postgres
    strict 'true'
  );
```
Iceberg Tables
This is an object representing an Iceberg table.
Ref: Iceberg Table Spec
Operations
| Object | Select | Insert | Update | Delete | Truncate |
| --- | --- | --- | --- | --- | --- |
| table | ✅ | ❌ | ❌ | ❌ | ❌ |
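In other words, the wrapper is read-only; Postgres rejects any write statement. A minimal sketch, using the `iceberg.guides` foreign table defined in the Usage section below:

```sql
-- reads work as usual
select count(*) from iceberg.guides;

-- writes fail with an error like:
--   ERROR: cannot insert into foreign table "guides"
insert into iceberg.guides (id, title) values (1, 'hello');
```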
Usage
You can manually create the foreign table like below if you did not use `import foreign schema`.
```sql
create foreign table iceberg.guides (
  id bigint,
  title text,
  content text,
  created_at timestamp
)
  server iceberg_server
  options (
    table 'docs_example.guides'
  );
```
Query Pushdown Support
This FDW supports `where` clause pushdown with the operators below.
| Operator | Note |
| --- | --- |
| `=`, `>`, `>=`, `<`, `<=`, `<>`, `!=` | |
| `is null`, `is not null` | |
| `x`, `not x`, `x is true`, `x is not true` | column `x` data type is boolean |
| `x between a and b` | column `x` data type can be datetime or numeric types |
| `like 'abc%'`, `not like 'abc%'` | only supports the "starts with" pattern |
| `in (x, y, z)`, `not in (x, y, z)` | |
For multiple filters, only logical `AND` is supported. For example,
```sql
-- this can be pushed down
select * from table where x = a and y = b;

-- this cannot be pushed down
select * from table where x = a or y = b;
```
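The other pushdown-capable operators from the table above work the same way. A short sketch against the `iceberg.guides` table defined earlier (filter values are illustrative):

```sql
-- 'between' on a datetime column can be pushed down
select * from iceberg.guides
where created_at between timestamp '2025-01-01 00:00:00'
                     and timestamp '2025-06-30 23:59:59';

-- 'in' lists can be pushed down too
select * from iceberg.guides where id in (1, 2, 3);
```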
Supported Data Types
| Postgres Type | Iceberg Type |
| --- | --- |
| boolean | boolean |
| real | float |
| integer | int |
| double precision | double |
| bigint | long |
| numeric | decimal |
| text | string |
| date | date |
| time | time |
| timestamp | timestamp, timestamp_ns |
| timestamptz | timestamptz, timestamptz_ns |
| jsonb | struct, list, map |
| bytea | binary |
| uuid | uuid |
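Note that nested Iceberg types (struct, list, map) all arrive as `jsonb` on the Postgres side. A hypothetical foreign table with a struct column might look like this (table and column names are illustrative, not from the sample data above):

```sql
create foreign table iceberg.events (
  id bigint,
  payload jsonb,          -- an Iceberg struct/list/map column maps to jsonb
  occurred_at timestamptz
)
  server iceberg_server
  options (
    table 'docs_example.events'
  );

-- nested fields can then be read with the usual jsonb operators
select id, payload->>'user_id' as user_id
from iceberg.events;
```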
Limitations
This section describes important limitations and considerations when using this FDW:
- Only supports specific data type mappings between Postgres and Iceberg
- Only supports read operations (no INSERT, UPDATE, DELETE, or TRUNCATE)
- Apache Iceberg schema evolution is not supported
- When using an Iceberg REST catalog, only AWS S3 (or compatible) storage is supported
- Materialized views using these foreign tables may fail during logical backups
Examples
Basic Example
First, create a server for AWS S3 Tables:
```sql
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    aws_access_key_id '<AWS_access_key_ID>',
    aws_secret_access_key '<AWS_secret_access_key>',
    region_name 'us-east-1',
    aws_s3table_bucket_arn 'arn:aws:s3tables:us-east-1:204203087419:bucket/my-table-bucket'
  );
```
Import the foreign table:
```sql
-- Run below SQL to import all tables under namespace 'docs_example'
import foreign schema "docs_example"
  from server iceberg_server into iceberg;

-- or, create the foreign table manually
create foreign table if not exists iceberg.guides (
  id bigint,
  title text,
  content text,
  created_at timestamp
)
  server iceberg_server
  options (
    table 'docs_example.guides'
  );
```
Then query the foreign table:
```sql
select * from iceberg.guides;
```
Read Cloudflare R2 Data Catalog
First, follow the steps in the Getting Started Guide to create an R2 Data Catalog on Cloudflare. Once that is complete, create a server like below:
```sql
create server iceberg_server
  foreign data wrapper iceberg_wrapper
  options (
    aws_access_key_id '<R2_access_key_ID>',
    aws_secret_access_key '<R2_secret_access_key>',
    token '<R2 API token>',
    warehouse 'xxx_r2-data-catalog-tutorial',
    "s3.endpoint" 'https://xxx.r2.cloudflarestorage.com',
    catalog_uri 'https://catalog.cloudflarestorage.com/xxx/r2-data-catalog-tutorial'
  );
```
Then, import all the tables in the `default` namespace and query them:
```sql
import foreign schema "default" from server iceberg_server into iceberg;

select * from iceberg.people;
```
Query Pushdown Examples
```sql
-- the filter 'id = 42' will be pushed down to Iceberg
select * from iceberg.guides where id = 42;

-- the pushdown filter can also be on the partition column 'created_at',
-- this can greatly reduce query cost
select * from iceberg.guides
where created_at >= timestamp '2025-05-16 12:34:56';

-- multiple filters must use logical 'AND'
select * from iceberg.guides where id > 42 and title like 'Supabase%';
```