JavaScript connector guide
This guide is for developers building connectors, integrations, and analytics tooling on top of ClickHouse using the official JavaScript client. It covers client setup, connection management, schema discovery, type mapping, querying, and data ingestion patterns.
For the full API reference — including all client options, method signatures, and format details — see the ClickHouse JS reference. For ingestion and consumption patterns that apply across languages, see the ingestion patterns and consumption patterns guides.
Overview
The official JavaScript client ships as two packages:
- `@clickhouse/client` — Node.js and other server-side runtimes (ETL pipelines, API backends, CLI tools, serverless functions on AWS Lambda or similar)
- `@clickhouse/client-web` — browsers (Chrome, Firefox) and Cloudflare Workers
Both packages expose the same API surface. The difference is in the underlying HTTP transport: the Node.js package uses Node's built-in http/https modules and manages an HTTP keep-alive connection pool; the web package uses the browser Fetch API and defers connection management to the runtime.
The client is written in TypeScript and ships full type definitions. No additional @types/ package is needed.
Installation
For server-side connectors, ETL pipelines, and API backends:
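Assuming npm as the package manager (yarn and pnpm work equally well):

```shell
npm install @clickhouse/client
```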
For browser-based BI dashboards or Cloudflare Workers:
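Again assuming npm:

```shell
npm install @clickhouse/client-web
```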
TypeScript 4.5 or later is required; the client uses inline type import/export syntax introduced in that release.
Creating a client
For ClickHouse Cloud, always use HTTPS on port 8443. Plaintext HTTP is not accepted. Never hardcode credentials — read them from environment variables or a secrets manager.
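A minimal construction sketch; the environment variable names are placeholders for your own configuration:

```typescript
import { createClient } from '@clickhouse/client';

const client = createClient({
  // e.g. https://<instance>.clickhouse.cloud:8443
  url: process.env.CLICKHOUSE_URL,
  username: process.env.CLICKHOUSE_USER ?? 'default',
  password: process.env.CLICKHOUSE_PASSWORD,
});
```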
For the web client, the import path changes but the API is identical:
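For example (hostname and credentials are placeholders):

```typescript
import { createClient } from '@clickhouse/client-web';

const client = createClient({
  url: 'https://your-instance.clickhouse.cloud:8443', // placeholder hostname
  username: 'default',
  // For illustration only: never ship real credentials in a browser bundle
  password: 'placeholder',
});
```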
TypeScript users can import type definitions directly from the package:
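For example (type names as exported by recent client versions; they may differ in older releases):

```typescript
import { createClient } from '@clickhouse/client';
import type { ClickHouseClient, ClickHouseSettings } from '@clickhouse/client';

// Settings applied to every query issued by this client.
const defaults: ClickHouseSettings = {
  output_format_json_quote_64bit_integers: 1,
};
const client: ClickHouseClient = createClient({ clickhouse_settings: defaults });
```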
Connection pool (Node.js)
The Node.js client manages an HTTP keep-alive connection pool internally. No external pooling library is required. Key settings:
| Option | Default | Notes |
|---|---|---|
| `max_open_connections` | 10 | Maximum simultaneous open sockets. Increase for high-concurrency connectors. |
| `keep_alive.enabled` | `true` | Do not disable. Disabling forces a new TCP handshake per request. |
| `request_timeout` | `300_000` (ms) | Set this above your `max_execution_time` server setting so the client does not abort before ClickHouse finishes. |
Always call client.close() when the client is no longer needed to drain the connection pool cleanly.
Schema discovery
Listing columns
Use system.columns to enumerate columns for schema browsers, column pickers, or query builders. Prefer system.columns over INFORMATION_SCHEMA.columns — it exposes ClickHouse-specific metadata like is_in_sorting_key that is absent from the standard view.
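A sketch, assuming a client instance created as in "Creating a client" and illustrative database/table parameters:

```typescript
interface ColumnInfo {
  name: string;
  type: string;
  comment: string;
  is_in_sorting_key: number; // UInt8: 1 if the column is part of the sorting key
}

const resultSet = await client.query({
  query: `
    SELECT name, type, comment, is_in_sorting_key
    FROM system.columns
    WHERE database = {database: String} AND table = {table: String}
    ORDER BY position
  `,
  query_params: { database: 'default', table: 'events' },
  format: 'JSONEachRow',
});
const columns = await resultSet.json<ColumnInfo>();
```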
Parsing type modifiers
ClickHouse wraps types with Nullable(T) and LowCardinality(T) modifiers. Strip these before mapping to JavaScript types:
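One way to implement this, a minimal sketch that handles nested modifiers:

```typescript
// Strip Nullable(...) and LowCardinality(...) wrappers, which may nest,
// and return the underlying base type name.
function unwrapType(type: string): string {
  let t = type.trim();
  let match: RegExpMatchArray | null;
  // Loop because modifiers can nest: Nullable(LowCardinality(String))
  while ((match = t.match(/^(?:Nullable|LowCardinality)\((.*)\)$/)) !== null) {
    t = match[1];
  }
  return t;
}
```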
Apply unwrapType before passing type to your type mapping logic. A column typed Nullable(LowCardinality(String)) should ultimately map the same as String.
Type mapping
Numeric types
When using JSONEachRow format, ClickHouse serializes numeric columns as JSON numbers or strings depending on the type:
| ClickHouse type | JavaScript value (JSONEachRow) |
|---|---|
| `Int8`, `Int16`, `Int32` | number |
| `UInt8`, `UInt16` | number |
| `UInt32` | number |
| `Int64`, `UInt64` | string (when `output_format_json_quote_64bit_integers=1` is set) or number (default; unsafe, see below) |
| `Int128`, `Int256`, `UInt128`, `UInt256` | string |
| `Float32`, `Float64` | number |
| `Decimal*` | string (to preserve precision) |
The Int64 precision problem
This is the most important gotcha when building a JavaScript connector. JSON.parse() silently loses precision for integers beyond Number.MAX_SAFE_INTEGER (2^53 − 1 = 9,007,199,254,740,991). ClickHouse sends Int64 and UInt64 columns as JSON numbers by default, meaning large values are silently corrupted during parsing, with no error thrown.
Always set output_format_json_quote_64bit_integers=1 by default in connectors that will run against arbitrary tables. Users cannot be expected to audit which columns might exceed the safe integer range.
With this setting, Int64/UInt64 values arrive as quoted strings ("18446744073709551615"). Parse them with BigInt() or a BigInt-aware library rather than Number():
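For example, parsing a quoted UInt64 value:

```typescript
// With output_format_json_quote_64bit_integers=1, 64-bit integers arrive quoted.
const payload = '{"id":"18446744073709551615"}';
const row = JSON.parse(payload) as { id: string };
const id = BigInt(row.id); // exact: 18446744073709551615n
// Number(row.id) would round to the nearest double (2^64) and lose the low digits.
```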
If you need to expose the value to a downstream system that cannot accept BigInt, cast the column in SQL first:
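For example (table and column names are illustrative, and a client instance is assumed):

```typescript
// Cast in SQL so the JSON payload contains strings regardless of client settings.
const resultSet = await client.query({
  query: 'SELECT toString(user_id) AS user_id, event FROM events',
  format: 'JSONEachRow',
});
```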
Date and time types
All date/time types arrive as strings in JSON output:
| ClickHouse type | JSON representation | Notes |
|---|---|---|
| `Date` | `"2024-01-15"` | YYYY-MM-DD. No timezone. |
| `Date32` | `"2024-01-15"` | Extended range. No timezone. |
| `DateTime` | `"2024-01-15 12:00:00"` | Server timezone applies if no column-level timezone is set. |
| `DateTime64(n)` | `"2024-01-15 12:00:00.123"` | Fractional seconds to n digits. Same timezone behavior. |
Parse DateTime and DateTime64 strings with new Date() carefully — JavaScript Date parses the value in local time, not UTC. Pass an explicit timezone offset or use a library like date-fns-tz or luxon when timezone correctness matters:
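A minimal sketch, assuming the server timezone is UTC:

```typescript
// ClickHouse DateTime strings carry no timezone marker. Appending 'Z' parses
// them as UTC; if the server timezone is not UTC, substitute the appropriate
// offset or use a timezone-aware library instead.
function parseDateTimeUTC(value: string): Date {
  return new Date(value.replace(' ', 'T') + 'Z');
}
```

For non-UTC server timezones, prefer luxon or date-fns-tz, which can attach a named zone during parsing.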
Querying
Streaming with ResultSet
For large result sets, stream rows one at a time rather than buffering the entire response in memory:
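A sketch, assuming an existing client instance; events is an illustrative table:

```typescript
const resultSet = await client.query({
  query: 'SELECT * FROM events',
  format: 'JSONEachRow',
});

for await (const rows of resultSet.stream()) {
  for (const row of rows) {
    const record = row.json(); // one row decoded at a time
    // handleRecord(record): hypothetical per-row handler
  }
}
```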
Always consume the full stream or call resultSet.close(). If you break out of the loop early without closing, the underlying socket is held open and the connection pool is exhausted.
Buffered query for small results
For small result sets (schema queries, metadata, configuration lookups), buffer the full response with resultSet.json():
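For example, a metadata lookup (client instance assumed):

```typescript
const resultSet = await client.query({
  query: 'SELECT name, engine FROM system.tables WHERE database = {db: String}',
  query_params: { db: 'default' },
  format: 'JSONEachRow',
});
const tables = await resultSet.json<{ name: string; engine: string }>();
```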
Do not use .json() for large or unbounded queries — it loads the entire response into memory.
Parameterized queries
Always use query_params with {name:Type} placeholders in the SQL string. Never concatenate user input into the query string.
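A sketch with an illustrative table and parameter:

```typescript
// Placeholders are typed and bound server-side; userId never touches the SQL text.
const resultSet = await client.query({
  query: `
    SELECT event, count() AS c
    FROM events
    WHERE user_id = {userId: UInt64}
    GROUP BY event
  `,
  query_params: { userId: 42 },
  format: 'JSONEachRow',
});
```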
Query tagging
Set a query_id on every query for traceability in system.query_log. Use log_comment to attach feature-level context:
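For example (the log_comment value is an illustrative tag; client instance assumed):

```typescript
import { randomUUID } from 'node:crypto';

const queryId = randomUUID();
await client.query({
  query: 'SELECT count() FROM events',
  query_id: queryId, // later searchable in system.query_log
  clickhouse_settings: { log_comment: 'connector:schema-browser' },
  format: 'JSONEachRow',
});
```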
If you retry after a timeout, reuse the same query_id. ClickHouse will return the result of the already-running query rather than executing it a second time.
Handling the HTTP 200 on error
ClickHouse begins streaming the response body and sends HTTP 200 before it knows whether the query will succeed. If an error occurs mid-stream, it is appended to the response body — the HTTP status code stays 200.
The JS client detects errors in the response body and throws a ClickHouseError in both buffered and streaming paths. For streaming queries, this means the error may surface after some rows have already been processed:
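A sketch of a query that fails partway through streaming (client instance assumed):

```typescript
import { ClickHouseError } from '@clickhouse/client';

// intDiv fails at number = 10, after some rows have already streamed.
const resultSet = await client.query({
  query: 'SELECT intDiv(100, 10 - number) AS v FROM system.numbers LIMIT 20',
  format: 'JSONEachRow',
});

try {
  for await (const rows of resultSet.stream()) {
    for (const row of rows) {
      row.json(); // rows before the failure are delivered normally
    }
  }
} catch (err) {
  if (err instanceof ClickHouseError) {
    // The error surfaced mid-stream; discard or roll back partial results.
  }
  throw err;
}
```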
Inserting data
Stream insert
For large batches in Node.js, pipe a Readable stream directly into the insert. This avoids buffering the entire batch in memory:
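A sketch, assuming a client instance; the generator and table are illustrative:

```typescript
import { Readable } from 'node:stream';

// rowSource is a hypothetical async generator producing plain objects.
async function* rowSource() {
  for (let i = 0; i < 100_000; i++) {
    yield { id: i, event: 'click' };
  }
}

await client.insert({
  table: 'events',
  values: Readable.from(rowSource()),
  format: 'JSONEachRow',
});
```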
Array insert
For smaller batches where you already have data in memory, pass an array directly:
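For example (client instance and table are assumed):

```typescript
await client.insert({
  table: 'events',
  values: [
    { id: 1, event: 'click' },
    { id: 2, event: 'view' },
  ],
  format: 'JSONEachRow',
});
```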
Batch sizing
Every INSERT creates a new on-disk data part. ClickHouse merges parts asynchronously. If inserts arrive faster than merges complete, the active part count in a partition crosses the default threshold (300 parts) and ClickHouse raises Too many parts.
Target 10,000–100,000 rows per insert. Never insert one row at a time.
Idempotent inserts
Pass insert_deduplication_token to make inserts safe to retry. ClickHouse deduplicates inserts with the same token within a configurable window:
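A sketch, deriving the token from a hypothetical batch identifier (client instance and batchRows are assumed):

```typescript
import { createHash } from 'node:crypto';

// batchId is a hypothetical deterministic job identifier; retries reuse it,
// so the derived token is identical and the duplicate insert is skipped.
const batchId = 'orders:2024-01-15:partition-3';
const token = createHash('sha256').update(batchId).digest('hex');

await client.insert({
  table: 'orders',
  values: batchRows, // the in-memory batch for this job
  format: 'JSONEachRow',
  clickhouse_settings: { insert_deduplication_token: token },
});
```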
Use a deterministic token derived from the batch contents or job identifier. If you retry the same insert after a timeout, use the same token — ClickHouse will skip the duplicate rather than writing the rows twice.
Web client limitations
The web client (@clickhouse/client-web) has constraints imposed by the browser Fetch API:
- No streaming inserts. The Fetch API does not support streaming request bodies in all browsers. Use array inserts only, and keep batch sizes small.
- No connection pool. The browser manages connections; `max_open_connections` has no effect.
- CORS. Browser-direct connections to ClickHouse Cloud require CORS to be configured. Add the allowed origins in the ClickHouse Cloud console before enabling browser-direct access.
- Read-heavy workloads only. The web client is well-suited for BI dashboards, query editors, and data exploration tools. It is not suitable for high-throughput ingestion.
Never expose ClickHouse credentials in browser-side code. Even with CORS configured, credentials embedded in a browser bundle are visible to any user of the page. For write-heavy workloads, or any workload where the ClickHouse user has elevated privileges, proxy all ClickHouse traffic through your backend.
Error handling
Catch ClickHouseError for ClickHouse-specific errors. The code property contains the ClickHouse error code string, which is stable across server versions and suitable for programmatic handling:
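A sketch (client instance assumed; the failing query is illustrative):

```typescript
import { ClickHouseError } from '@clickhouse/client';

try {
  await client.query({ query: 'SELECT 1 FROM missing_table', format: 'JSONEachRow' });
} catch (err) {
  if (err instanceof ClickHouseError) {
    console.error(`ClickHouse error ${err.code}: ${err.message}`);
  } else {
    throw err; // network or programming error, handle separately
  }
}
```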
Retry strategy:
- Retry on network-level errors (`ECONNREFUSED`, `ETIMEDOUT`, socket errors) with exponential backoff and jitter.
- Retry on `ClickHouseError` when the code suggests a transient condition (e.g., server overload). Reuse the same `query_id` on retried inserts.
- Do not retry on `UNKNOWN_TABLE`, `UNKNOWN_DATABASE`, `ACCESS_DENIED`, `READONLY`, or syntax errors. These will not resolve on their own.
A minimal exponential backoff helper:
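One possible shape; the function name and defaults are illustrative, not part of the client API:

```typescript
// Retry an async operation with exponential backoff and full jitter.
async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { retries?: number; baseMs?: number; maxMs?: number } = {},
): Promise<T> {
  const { retries = 5, baseMs = 100, maxMs = 10_000 } = opts;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts: surface the error
      const cap = Math.min(maxMs, baseMs * 2 ** attempt);
      const delayMs = Math.random() * cap; // full jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Classify the error before calling this helper so that non-transient failures are thrown immediately rather than retried.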