Repository: vahid-sohrabloo/chconn
Branch: main
Commit: 519075190b83
Files: 123
Total size: 603.4 KB
Directory structure:
gitextract_xi8nbcuh/
├── .codecov.yml
├── .github/
│ ├── dependabot.yml
│ └── workflows/
│ ├── ci.yaml
│ └── lint.yaml
├── .gitignore
├── .golangci.yml
├── LICENSE
├── Makefile
├── README.md
├── block.go
├── block_test.go
├── chconn.go
├── chconn_test.go
├── chpool/
│ ├── common_test.go
│ ├── conn.go
│ ├── insert_stmt.go
│ ├── pool.go
│ ├── pool_test.go
│ ├── select_stmt.go
│ └── stat.go
├── client_info.go
├── column/
│ ├── array.go
│ ├── array2.go
│ ├── array2_nullable.go
│ ├── array3.go
│ ├── array3_nullable.go
│ ├── array_base.go
│ ├── array_nullable.go
│ ├── base.go
│ ├── base_big_cpu.go
│ ├── base_little_cpu.go
│ ├── base_test.go
│ ├── base_validate.go
│ ├── bench_test.go
│ ├── column_helper.go
│ ├── date.go
│ ├── date_test.go
│ ├── error_test.go
│ ├── errors.go
│ ├── helper_test.go
│ ├── lc.go
│ ├── lc_indices.go
│ ├── lc_nullable.go
│ ├── lc_test.go
│ ├── map.go
│ ├── map_base.go
│ ├── map_nullable.go
│ ├── map_test.go
│ ├── nested.go
│ ├── nested_test.go
│ ├── nullable.go
│ ├── nullable_test.go
│ ├── point.go
│ ├── size.go
│ ├── string.go
│ ├── string_base.go
│ ├── string_test.go
│ ├── tuple.go
│ ├── tuple1.go
│ ├── tuple2_gen.go
│ ├── tuple3_gen.go
│ ├── tuple4_gen.go
│ ├── tuple5_gen.go
│ ├── tuple_test.go
│ ├── tuples_template/
│ │ ├── tuple.go.tmpl
│ │ ├── tuple2.json
│ │ ├── tuple3.json
│ │ ├── tuple4.json
│ │ └── tuple5.json
│ └── tuples_test.go
├── config.go
├── config_test.go
├── doc.go
├── doc_test.go
├── errors.go
├── errors_ch_code.go
├── errors_test.go
├── go.mod
├── go.sum
├── helper_test.go
├── insert.go
├── insert_test.go
├── internal/
│ ├── ctxwatch/
│ │ ├── context_watcher.go
│ │ └── context_watcher_test.go
│ ├── helper/
│ │ ├── features.go
│ │ ├── strs.go
│ │ └── validator.go
│ └── readerwriter/
│ ├── compress_reader.go
│ ├── compress_writer.go
│ ├── consts.go
│ ├── reader.go
│ └── writer.go
├── ping.go
├── ping_test.go
├── profile.go
├── profile_event.go
├── profile_test.go
├── progress.go
├── select_stmt.go
├── select_stmt_test.go
├── server_info.go
├── server_info_test.go
├── settings.go
├── sqlbuilder/
│ ├── injection.go
│ ├── select.go
│ └── select_test.go
└── types/
├── Int256.go
├── date_type.go
├── decimal.go
├── decimal_test.go
├── int128.go
├── int128_test.go
├── int256_test.go
├── ip_test.go
├── ipv4.go
├── ipv6.go
├── tuple.go
├── uint128.go
├── uint128_test.go
├── uint256.go
├── uint256_test.go
├── uuid.go
└── uuid_test.go
================================================
FILE CONTENTS
================================================
================================================
FILE: .codecov.yml
================================================
ignore:
  - "**/main.go"
  - "./internal/readerwriter/*"
coverage:
  status:
    project:
      default:
        target: 50%
        threshold: null
    patch: false
    changes: false
  range: 70..95
  round: up
  precision: 1
================================================
FILE: .github/dependabot.yml
================================================
version: 2
updates:
  - package-ecosystem: gomod
    directory: "/"
    schedule:
      interval: daily
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: daily
================================================
FILE: .github/workflows/ci.yaml
================================================
name: CI
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  test-coverage:
    name: Test Coverage
    runs-on: ubuntu-latest
    env:
      VERBOSE: 1
      GOFLAGS: -mod=readonly
    steps:
      - uses: vahid-sohrabloo/clickhouse-action@v1
        with:
          version: '22.9'
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      - name: Checkout code
        uses: actions/checkout@v3.3.0
      - name: Test
        run: make test-cover
      - name: Send coverage
        uses: codecov/codecov-action@v3
        with:
          file: coverage.out
  test:
    name: Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        golang-version: [1.18.5, 1.19]
        clickhouse-version: ['22.11', '22.10', '22.9', '22.8', '22.7', '22.6', '22.5', '22.4']
    env:
      VERBOSE: 1
      GOFLAGS: -mod=readonly
    steps:
      - uses: vahid-sohrabloo/clickhouse-action@v1
        with:
          version: '${{ matrix.clickhouse-version }}'
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.18.5
      - name: Checkout code
        uses: actions/checkout@v3.3.0
      - name: Test
        run: make test
================================================
FILE: .github/workflows/lint.yaml
================================================
name: golangci-lint
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  lint:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19
      - name: Checkout code
        uses: actions/checkout@v3.3.0
      - name: golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: v1.50
          args: --timeout=10m
================================================
FILE: .gitignore
================================================
.envrc
bin/
vendor/
build/
coverage.out
================================================
FILE: .golangci.yml
================================================
linters-settings:
  dupl:
    threshold: 100
  funlen:
    lines: 130
    statements: 60
  goconst:
    min-len: 5
    min-occurrences: 3
  gocritic:
    enabled-tags:
      - diagnostic
      - experimental
      - opinionated
      - performance
      - style
    disabled-checks:
      - dupImport # https://github.com/go-critic/go-critic/issues/845
      - ifElseChain
      - octalLiteral
      - whyNoLint
      - wrapperFunc
  gocyclo:
    min-complexity: 20
  goimports:
    local-prefixes: github.com/golangci/golangci-lint
  gomnd:
    settings:
      mnd:
        # don't include the "operation" and "assign"
        checks: argument,case,condition,return
        ignored-numbers: 1000000
  govet:
    check-shadowing: false
  lll:
    line-length: 140
  maligned:
    suggest-new: true
  misspell:
    locale: US
  nolintlint:
    allow-leading-space: true # don't require machine-readable nolint directives (i.e. with no leading space)
    allow-unused: false # report any unused nolint directives
    require-explanation: false # don't require an explanation for nolint directives
    require-specific: false # don't require nolint directives to be specific about which linter is being skipped
linters:
  disable-all: true
  enable:
    # - bodyclose
    - depguard
    - dogsled
    - dupl
    - errcheck
    - exportloopref
    - funlen
    - gochecknoinits
    - goconst
    - gocritic
    - gocyclo
    - gofmt
    - goimports
    - goprintffuncname
    - gosec
    - gosimple
    - govet
    - ineffassign
    - lll
    - misspell
    - nakedret
    # - noctx
    - nolintlint
    - staticcheck
    - stylecheck
    - typecheck
    - unconvert
    # - unparam
    - unused
    - whitespace
  # don't enable:
  # - asciicheck
  # - scopelint
  # - gochecknoglobals
  # - gocognit
  # - godot
  # - godox
  # - goerr113
  # - interfacer
  # - maligned
  # - nestif
  # - prealloc
  # - testpackage
  # - revive
  # - wsl
  # - gomnd
issues:
  # Excluding configuration per-path, per-linter, per-text and per-source
  exclude-rules:
    - path: _test\.go
      linters:
        - goconst
        - dupl
        - funlen
        - gocyclo
        - gosec
        - goerr113
        - maligned
        - errcheck
    - path: cmd/chgogen
      linters:
        - goconst
        - funlen
        - gocyclo
    - path: _unsafe\.go
      linters:
        - dupl
    - path: main\.go
      linters:
        - goconst
        - gocritic
        - dupl # todo fix later
run:
  skip-dirs:
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2020 vahid-sohrabloo
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: Makefile
================================================
# A Self-Documenting Makefile: http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html

OS = $(shell uname | tr A-Z a-z)
export PATH := $(abspath bin/):${PATH}

# Build variables
BUILD_DIR ?= build
VERSION ?= $(shell git describe --tags --exact-match 2>/dev/null || git symbolic-ref -q --short HEAD)
COMMIT_HASH ?= $(shell git rev-parse --short HEAD 2>/dev/null)
DATE_FMT = +%FT%T%z
ifdef SOURCE_DATE_EPOCH
    BUILD_DATE ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u -r "$(SOURCE_DATE_EPOCH)" "$(DATE_FMT)" 2>/dev/null || date -u "$(DATE_FMT)")
else
    BUILD_DATE ?= $(shell date "$(DATE_FMT)")
endif
LDFLAGS += -X main.version=${VERSION} -X main.commitHash=${COMMIT_HASH} -X main.buildDate=${BUILD_DATE}
export CGO_ENABLED ?= 1
ifeq (${VERBOSE}, 1)
ifeq ($(filter -v,${GOARGS}),)
GOARGS += -v
endif
TEST_FORMAT = short-verbose
endif

# Project variables

# Dependency versions
GOTESTSUM_VERSION = 1.8.1
GOLANGCI_VERSION = 1.50.0
GOLANG_VERSION = 1.14

# Add the ability to override some variables
# Use with care
-include override.mk

.PHONY: up
up: start config.toml ## Set up the development environment

.PHONY: down
down: clear ## Destroy the development environment
	docker-compose down --volumes --remove-orphans --rmi local
	rm -rf var/docker/volumes/*

.PHONY: reset
reset: down up ## Reset the development environment

.PHONY: clear
clear: ## Clear the working area and the project
	rm -rf bin/

docker-compose.override.yml:
	cp docker-compose.override.yml.dist docker-compose.override.yml

.PHONY: start
start: docker-compose.override.yml ## Start docker development environment
	@ if [ docker-compose.override.yml -ot docker-compose.override.yml.dist ]; then diff -u docker-compose.override.yml docker-compose.override.yml.dist || (echo "!!! The distributed docker-compose.override.yml example changed. Please update your file accordingly (or at least touch it). !!!" && false); fi
	docker-compose up -d

.PHONY: stop
stop: ## Stop docker development environment
	docker-compose stop

config.toml:
	sed 's/production/development/g; s/debug = false/debug = true/g; s/shutdownTimeout = "15s"/shutdownTimeout = "0s"/g; s/format = "json"/format = "logfmt"/g; s/level = "info"/level = "debug"/g; s/addr = ":10000"/addr = "127.0.0.1:10000"/g; s/httpAddr = ":8000"/httpAddr = "127.0.0.1:8000"/g; s/grpcAddr = ":8001"/grpcAddr = "127.0.0.1:8001"/g' config.toml.dist > config.toml

.PHONY: run-%
run-%: build-%
	${BUILD_DIR}/$*

.PHONY: run
run: $(patsubst cmd/%,run-%,$(wildcard cmd/*)) ## Build and execute a binary

.PHONY: clean
clean: ## Clean builds
	rm -rf ${BUILD_DIR}/
	rm -rf cmd/*/pkged.go

.PHONY: goversion
goversion:
ifneq (${IGNORE_GOLANG_VERSION_REQ}, 1)
	@printf "${GOLANG_VERSION}\n$$(go version | awk '{sub(/^go/, "", $$3);print $$3}')" | sort -t '.' -k 1,1 -k 2,2 -k 3,3 -g | head -1 | grep -q -E "^${GOLANG_VERSION}$$" || (printf "Required Go version is ${GOLANG_VERSION}\nInstalled: `go version`" && exit 1)
endif

.PHONY: build-%
build-%: goversion
ifeq (${VERBOSE}, 1)
	go env
endif
	go build ${GOARGS} -tags "${GOTAGS}" -ldflags "${LDFLAGS}" -o ${BUILD_DIR}/$* ./cmd/$*

.PHONY: build
build: goversion ## Build all binaries
ifeq (${VERBOSE}, 1)
	go env
endif
	@mkdir -p ${BUILD_DIR}
	go build ${GOARGS} -tags "${GOTAGS}" -ldflags "${LDFLAGS}" -o ${BUILD_DIR}/ ./cmd/...

.PHONY: build-release
build-release:
	@${MAKE} LDFLAGS="-w ${LDFLAGS}" GOARGS="${GOARGS} -trimpath" BUILD_DIR="${BUILD_DIR}/release" build

.PHONY: build-debug
build-debug: ## Build all binaries with remote debugging capabilities
	@${MAKE} GOARGS="${GOARGS} -gcflags \"all=-N -l\"" BUILD_DIR="${BUILD_DIR}/debug" build

.PHONY: check
check: test-all lint ## Run tests and linters

bin/gotestsum: bin/gotestsum-${GOTESTSUM_VERSION}
	@ln -sf gotestsum-${GOTESTSUM_VERSION} bin/gotestsum
bin/gotestsum-${GOTESTSUM_VERSION}:
	@mkdir -p bin
	curl -L https://github.com/gotestyourself/gotestsum/releases/download/v${GOTESTSUM_VERSION}/gotestsum_${GOTESTSUM_VERSION}_${OS}_amd64.tar.gz | tar -zOxf - gotestsum > ./bin/gotestsum-${GOTESTSUM_VERSION} && chmod +x ./bin/gotestsum-${GOTESTSUM_VERSION}

TEST_PKGS ?= ./...
TEST_REPORT_NAME ?= results.xml
.PHONY: test
test: TEST_REPORT ?= main
test: TEST_FORMAT ?= short
test: SHELL = /bin/bash
test: bin/gotestsum ## Run tests
	bin/gotestsum --format ${TEST_FORMAT} -- $(filter-out -v,${GOARGS}) -coverprofile=coverage.out -race -parallel 1 $(if ${TEST_PKGS},${TEST_PKGS},./...)
	@go tool cover -func=coverage.out
	@rm coverage.out

.PHONY: test-purego
test-purego: TEST_REPORT ?= main
test-purego: TEST_FORMAT ?= standard-quiet
test-purego: SHELL = /bin/bash
test-purego: bin/gotestsum ## Run tests
	bin/gotestsum --format ${TEST_FORMAT} -- $(filter-out -v,${GOARGS}) -coverprofile=coverage.out -race -parallel 1 -tags purego $(if ${TEST_PKGS},${TEST_PKGS},./...)
	@go tool cover -func=coverage.out
	@rm coverage.out

CVPKG = $(shell go list ./... | grep -v 'chgogen\|generator' | tr '\n' ',')
.PHONY: test-cover
test-cover: TEST_REPORT ?= main
test-cover: TEST_FORMAT ?= standard-quiet
test-cover: SHELL = /bin/bash
test-cover: bin/gotestsum ## Run tests
	bin/gotestsum --format ${TEST_FORMAT} -- $(filter-out -v,${GOARGS}) -coverpkg=${CVPKG} -coverprofile=coverage.out -covermode=atomic -parallel 1 $(if ${TEST_PKGS},${TEST_PKGS},./...)
	@go tool cover -func=coverage.out

.PHONY: test-all
test-all: ## Run all tests
	@${MAKE} GOARGS="${GOARGS} -run .\* " TEST_REPORT=all test

.PHONY: test-integration
test-integration: ## Run integration tests
	@${MAKE} GOARGS="${GOARGS} -run ^TestIntegration\$$\$$" TEST_REPORT=integration test

.PHONY: test-functional
test-functional: ## Run functional tests
	@${MAKE} GOARGS="${GOARGS} -run ^TestFunctional\$$\$$" TEST_REPORT=functional test

bin/golangci-lint: bin/golangci-lint-${GOLANGCI_VERSION}
	@ln -sf golangci-lint-${GOLANGCI_VERSION} bin/golangci-lint
bin/golangci-lint-${GOLANGCI_VERSION}:
	@mkdir -p bin
	curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | BINARY=golangci-lint bash -s -- v${GOLANGCI_VERSION}
	@mv bin/golangci-lint $@

.PHONY: lint
lint: bin/golangci-lint ## Run linter
	bin/golangci-lint run --deadline=20m --concurrency 1
lint-fix: bin/golangci-lint ## Run linter
	bin/golangci-lint run --deadline=20m --concurrency 1 --fix

release-%: TAG_PREFIX = v
release-%:
ifneq (${DRY}, 1)
	@sed -e "s/^## \[Unreleased\]$$/## [Unreleased]\\"$$'\n'"\\"$$'\n'"\\"$$'\n'"## [$*] - $$(date +%Y-%m-%d)/g; s|^\[Unreleased\]: \(.*\/compare\/\)\(.*\)...HEAD$$|[Unreleased]: \1${TAG_PREFIX}$*...HEAD\\"$$'\n'"[$*]: \1\2...${TAG_PREFIX}$*|g" CHANGELOG.md > CHANGELOG.md.new
	@mv CHANGELOG.md.new CHANGELOG.md
ifeq (${TAG}, 1)
	git add CHANGELOG.md
	git commit -m 'Prepare release $*'
	git tag -m 'Release $*' ${TAG_PREFIX}$*
ifeq (${PUSH}, 1)
	git push; git push origin ${TAG_PREFIX}$*
endif
endif
endif
	@echo "Version updated to $*!"
ifneq (${PUSH}, 1)
	@echo
	@echo "Review the changes made by this script then execute the following:"
ifneq (${TAG}, 1)
	@echo
	@echo "git add CHANGELOG.md && git commit -m 'Prepare release $*' && git tag -m 'Release $*' ${TAG_PREFIX}$*"
	@echo
	@echo "Finally, push the changes:"
endif
	@echo
	@echo "git push; git push origin ${TAG_PREFIX}$*"
endif

.PHONY: patch
patch: ## Release a new patch version
	@${MAKE} release-$(shell (git describe --abbrev=0 --tags 2> /dev/null || echo "0.0.0") | sed 's/^v//' | awk -F'[ .]' '{print $$1"."$$2"."$$3+1}')
.PHONY: minor
minor: ## Release a new minor version
	@${MAKE} release-$(shell (git describe --abbrev=0 --tags 2> /dev/null || echo "0.0.0") | sed 's/^v//' | awk -F'[ .]' '{print $$1"."$$2+1".0"}')
.PHONY: major
major: ## Release a new major version
	@${MAKE} release-$(shell (git describe --abbrev=0 --tags 2> /dev/null || echo "0.0.0") | sed 's/^v//' | awk -F'[ .]' '{print $$1+1".0.0"}')

.PHONY: list
list: ## List all make targets
	@${MAKE} -pRrn : -f $(MAKEFILE_LIST) 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | sort

.PHONY: help
.DEFAULT_GOAL := help
help:
	@grep -h -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

# Variable outputting/exporting rules
var-%: ; @echo $($*)
varexport-%: ; @echo $*=$($*)
================================================
FILE: README.md
================================================
[Go Reference](https://pkg.go.dev/github.com/vahid-sohrabloo/chconn/v2)
[Coverage](https://codecov.io/gh/vahid-sohrabloo/chconn)
[Go Report Card](https://goreportcard.com/report/github.com/vahid-sohrabloo/chconn/v2)
[CI](https://github.com/vahid-sohrabloo/chconn/actions)
[FOSSA Status](https://app.fossa.com/projects/git%2Bgithub.com%2Fvahid-sohrabloo%2Fchconn?ref=badge_shield)
# chconn - ClickHouse low-level Driver

chconn is a pure Go (1.18+) generic driver for [ClickHouse](https://clickhouse.com/) that uses the native protocol.
chconn aims to be low-level, fast, and performant.

For comparison with other libraries, please see https://github.com/vahid-sohrabloo/go-ch-benchmark and https://github.com/go-faster/ch-bench#benchmarks.

If you have any suggestions or comments, please feel free to open an issue.
## Example Usage
```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/vahid-sohrabloo/chconn/v2/chpool"
	"github.com/vahid-sohrabloo/chconn/v2/column"
)

func main() {
	conn, err := chpool.New(os.Getenv("DATABASE_URL"))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// check that the connection is alive
	err = conn.Ping(context.Background())
	if err != nil {
		panic(err)
	}

	err = conn.Exec(context.Background(), `DROP TABLE IF EXISTS example_table`)
	if err != nil {
		panic(err)
	}

	err = conn.Exec(context.Background(), `CREATE TABLE example_table (
		uint64 UInt64,
		uint64_nullable Nullable(UInt64)
	) Engine=Memory`)
	if err != nil {
		panic(err)
	}

	col1 := column.New[uint64]()
	col2 := column.New[uint64]().Nullable()
	rows := 1_000_000 // one million rows per insert - ten inserts, ten million rows in total
	numInsert := 10
	col1.SetWriteBufferSize(rows)
	col2.SetWriteBufferSize(rows)
	startInsert := time.Now()
	for i := 0; i < numInsert; i++ {
		for y := 0; y < rows; y++ {
			col1.Append(uint64(i))
			if i%2 == 0 {
				col2.Append(uint64(i))
			} else {
				col2.AppendNil()
			}
		}
		ctxInsert, cancelInsert := context.WithTimeout(context.Background(), time.Second*30)
		// insert data
		err = conn.Insert(ctxInsert, "INSERT INTO example_table (uint64,uint64_nullable) VALUES", col1, col2)
		if err != nil {
			cancelInsert()
			panic(err)
		}
		cancelInsert()
	}
	fmt.Println("inserted 10M rows in ", time.Since(startInsert))

	// select data
	col1Read := column.New[uint64]()
	col2Read := column.New[uint64]().Nullable()
	ctxSelect, cancelSelect := context.WithTimeout(context.Background(), time.Second*30)
	defer cancelSelect()
	startSelect := time.Now()
	selectStmt, err := conn.Select(ctxSelect, "SELECT uint64,uint64_nullable FROM example_table", col1Read, col2Read)
	if err != nil {
		panic(err)
	}
	// make sure the stmt is closed after the select (not strictly necessary)
	defer selectStmt.Close()

	var col1Data []uint64
	var col2DataNil []bool
	var col2Data []uint64
	// read data block by block
	// for more information about blocks, see: https://clickhouse.com/docs/en/development/architecture/#block
	for selectStmt.Next() {
		col1Data = col1Data[:0]
		col1Data = col1Read.Read(col1Data)
		col2DataNil = col2DataNil[:0]
		col2DataNil = col2Read.ReadNil(col2DataNil)
		col2Data = col2Data[:0]
		col2Data = col2Read.Read(col2Data)
	}
	// check for errors
	if selectStmt.Err() != nil {
		panic(selectStmt.Err())
	}
	fmt.Println("selected 10M rows in ", time.Since(startSelect))
}
```
```
inserted 10M rows in 1.206666188s
selected 10M rows in 880.505004ms
```
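To try the example, point `DATABASE_URL` at a running ClickHouse server before starting the program. A minimal sketch, assuming a local server with default credentials (the host, port, and credentials below are placeholders, not values taken from this repository):

```
export DATABASE_URL='clickhouse://default:@127.0.0.1:9000/default'
go run main.go
```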
**For more information**, please see the [documentation](https://github.com/vahid-sohrabloo/chconn/wiki)
## Features
* Generic (Go 1.18+) column types
* Connection pool with an after-connect hook for arbitrary connection setup, similar to pgx (thanks @jackc)
* Supports both DSN and URL query connection strings (thanks @jackc)
* Supports all ClickHouse data types
* Reads and writes data column-oriented, the way ClickHouse stores it
* Does not use `interface{}` or `reflect`
* Batch select and insert
* Full TLS connection control
* Reads raw binary data
* Supports profile and progress information
* Database URL connections very similar to pgx (thanks @jackc)
* Code generator for insert
* Supports the LZ4 and ZSTD compression protocols
* Supports streaming execution telemetry (profiles and progress)
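As with pgx, the same configuration can be written either as a URL or as a DSN key/value string. A sketch of the two forms, with illustrative placeholder values and parameter names patterned on pgx (see `config.go` for the exact parameter set chconn accepts):

```
# URL form
clickhouse://user:password@localhost:9000/mydb

# DSN (key/value) form
host=localhost port=9000 user=user password=password dbname=mydb
```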
## Supported types
* UInt8, UInt16, UInt32, UInt64, UInt128, UInt256
* Int8, Int16, Int32, Int64, Int128, Int256
* Date, Date32, DateTime, DateTime64
* Decimal32, Decimal64, Decimal128, Decimal256
* IPv4, IPv6
* String, FixedString(N)
* UUID
* Array(T)
* Enums
* LowCardinality(T)
* Map(K, V)
* Tuple(T1, T2, ..., Tn)
* Nullable(T)
* Point, Ring, Polygon, MultiPolygon
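The composite types above nest freely. For instance, a table combining several of them could be declared as follows (an illustrative ClickHouse DDL sketch, not taken from this repository):

```sql
CREATE TABLE example_types
(
    tags     Array(LowCardinality(String)),  -- array of low-cardinality strings
    attrs    Map(String, UInt64),            -- string-to-integer map
    location Tuple(Float64, Float64),        -- coordinate pair
    note     Nullable(String)                -- nullable scalar
) Engine = Memory;
```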
# Benchmarks
The source code of this benchmark is available at
https://github.com/vahid-sohrabloo/go-ch-benchmark
```
name \ time/op             chconn    chgo      go-clickhouse   uptrace
TestSelect100MUint64-16    150ms     154ms     8019ms          3045ms
TestSelect10MString-16     271ms     447ms     969ms           822ms
TestInsert10M-16           198ms     514ms     561ms           304ms

name \ alloc/op            chconn    chgo      go-clickhouse   uptrace
TestSelect100MUint64-16    111kB     262kB     3202443kB       800941kB
TestSelect10MString-16     1.63MB    1.79MB    1626.51MB       241.03MB
TestInsert10M-16           26.0MB    283.7MB   1680.4MB        240.2MB

name \ allocs/op           chconn    chgo      go-clickhouse   uptrace
TestSelect100MUint64-16    35.0      6683.0    200030937.0     100006069.0
TestSelect10MString-16     49.0      1748.0    30011991.0      20001120.0
TestInsert10M-16           26.0      80.0      224.0           50.0
```
## License
[FOSSA Status](https://app.fossa.com/projects/git%2Bgithub.com%2Fvahid-sohrabloo%2Fchconn?ref=badge_large)
================================================
FILE: block.go
================================================
package chconn
import (
"bytes"
"fmt"
"github.com/vahid-sohrabloo/chconn/v2/column"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
)
// chColumn contains the details of a ClickHouse column
type chColumn struct {
ChType []byte
Name []byte
}
type block struct {
Columns []chColumn
NumRows uint64
NumColumns uint64
info blockInfo
headerWriter *readerwriter.Writer
}
func newBlock() *block {
return &block{
headerWriter: readerwriter.NewWriter(),
}
}
func (block *block) reset() {
block.headerWriter.Reset()
block.Columns = block.Columns[:0]
block.NumRows = 0
block.NumColumns = 0
}
func (block *block) read(ch *conn) error {
if _, err := ch.reader.ByteString(); err != nil { // temporary table
return &readError{"block: temporary table", err}
}
ch.reader.SetCompress(ch.compress)
defer ch.reader.SetCompress(false)
var err error
err = block.info.read(ch.reader)
if err != nil {
return err
}
block.NumColumns, err = ch.reader.Uvarint()
if err != nil {
return &readError{"block: read NumColumns", err}
}
block.NumRows, err = ch.reader.Uvarint()
if err != nil {
return &readError{"block: read NumRows", err}
}
return nil
}
func (block *block) readColumns(ch *conn) error {
ch.reader.SetCompress(ch.compress)
defer ch.reader.SetCompress(false)
block.Columns = make([]chColumn, block.NumColumns)
for i := uint64(0); i < block.NumColumns; i++ {
col, err := block.nextColumn(ch)
if err != nil {
return err
}
block.Columns[i] = col
}
return nil
}
func (block *block) readColumnsData(ch *conn, needValidateData bool, columns ...column.ColumnBasic) error {
ch.reader.SetCompress(ch.compress)
defer ch.reader.SetCompress(false)
for _, col := range columns {
err := col.HeaderReader(ch.reader, true, ch.serverInfo.Revision)
if err != nil {
return fmt.Errorf("read column header: %w", err)
}
if needValidateData {
if errValidate := col.Validate(); errValidate != nil {
return fmt.Errorf("validate %q: %w", col.Name(), errValidate)
}
}
err = col.ReadRaw(int(block.NumRows), ch.reader)
if err != nil {
return fmt.Errorf("read data %q: %w", col.Name(), err)
}
}
return nil
}
func (block *block) reorderColumns(columns []column.ColumnBasic) ([]column.ColumnBasic, error) {
for i, c := range block.Columns {
// check if already sorted
if bytes.Equal(columns[i].Name(), block.Columns[i].Name) {
continue
}
index, col := findColumn(columns, c.Name)
if col == nil {
return nil, &ColumnNotFoundError{
Column: string(c.Name),
}
}
columns[index] = columns[i]
columns[i] = col
}
return columns, nil
}
func findColumn(columns []column.ColumnBasic, name []byte) (int, column.ColumnBasic) {
for i, col := range columns {
if bytes.Equal(col.Name(), name) {
return i, col
}
}
return 0, nil
}
func (block *block) nextColumn(ch *conn) (chColumn, error) {
col := chColumn{}
var err error
if col.Name, err = ch.reader.ByteString(); err != nil {
return col, &readError{"block: read column name", err}
}
if col.ChType, err = ch.reader.ByteString(); err != nil {
return col, &readError{"block: read column type", err}
}
if ch.serverInfo.Revision >= helper.DbmsMinProtocolWithCustomSerialization {
customSerialization, err := ch.reader.ReadByte()
if err != nil {
return col, &readError{"block: read custom serialization", err}
}
if customSerialization == 1 {
return col, &readError{"block: custom serialization not supported", nil}
}
}
return col, nil
}
func (block *block) writeHeader(ch *conn, numRows int) error {
block.info.write(ch.writer)
// NumColumns
ch.writer.Uvarint(block.NumColumns)
// NumRows
ch.writer.Uvarint(uint64(numRows))
_, err := ch.writer.WriteTo(ch.writerToCompress)
if err != nil {
return &writeError{"write block info", err}
}
err = ch.flushCompress()
if err != nil {
return &writeError{"flush block info", err}
}
return nil
}
func (block *block) writeColumnsBuffer(ch *conn, columns ...column.ColumnBasic) error {
numRows := columns[0].NumRow()
for i, column := range block.Columns {
if numRows != columns[i].NumRow() {
return &NumberWriteError{
FirstNumRow: numRows,
NumRow: columns[i].NumRow(),
Column: string(column.Name),
FirstColumn: string(block.Columns[0].Name),
}
}
block.headerWriter.Reset()
block.headerWriter.ByteString(column.Name)
block.headerWriter.ByteString(column.ChType)
if ch.serverInfo.Revision >= helper.DbmsMinProtocolWithCustomSerialization {
block.headerWriter.Uint8(0)
}
columns[i].HeaderWriter(block.headerWriter)
if _, err := block.headerWriter.WriteTo(ch.writerToCompress); err != nil {
return &writeError{"block: write header block data for column " + string(column.Name), err}
}
if _, err := columns[i].WriteTo(ch.writerToCompress); err != nil {
return &writeError{"block: write block data for column " + string(column.Name), err}
}
}
err := ch.flushCompress()
if err != nil {
return &writeError{"block: flush block data", err}
}
return nil
}
type blockInfo struct {
field1 uint64
isOverflows uint8
field2 uint64
bucketNum int32
num3 uint64
}
func (info *blockInfo) read(r *readerwriter.Reader) error {
var err error
if info.field1, err = r.Uvarint(); err != nil {
return &readError{"blockInfo: read field1", err}
}
if info.isOverflows, err = r.ReadByte(); err != nil {
return &readError{"blockInfo: read isOverflows", err}
}
if info.field2, err = r.Uvarint(); err != nil {
return &readError{"blockInfo: read field2", err}
}
if info.bucketNum, err = r.Int32(); err != nil {
return &readError{"blockInfo: read bucketNum", err}
}
if info.num3, err = r.Uvarint(); err != nil {
return &readError{"blockInfo: read num3", err}
}
return nil
}
func (info *blockInfo) write(w *readerwriter.Writer) {
w.Uvarint(1)
w.Uint8(info.isOverflows)
w.Uvarint(2)
if info.bucketNum == 0 {
info.bucketNum = -1
}
w.Int32(info.bucketNum)
w.Uvarint(0)
}
================================================
FILE: block_test.go
================================================
package chconn
import (
"context"
"errors"
"io"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBlockReadError(t *testing.T) {
startValidReader := 15
tests := []struct {
name string
wantErr string
numberValid int
}{
{
name: "blockInfo: temporary table",
wantErr: "block: temporary table",
numberValid: startValidReader - 1,
}, {
name: "blockInfo: read field1",
wantErr: "blockInfo: read field1",
numberValid: startValidReader,
}, {
name: "blockInfo: read isOverflows",
wantErr: "blockInfo: read isOverflows",
numberValid: startValidReader + 1,
}, {
name: "blockInfo: read field2",
wantErr: "blockInfo: read field2",
numberValid: startValidReader + 2,
}, {
name: "blockInfo: read bucketNum",
wantErr: "blockInfo: read bucketNum",
numberValid: startValidReader + 3,
}, {
name: "blockInfo: read num3",
wantErr: "blockInfo: read num3",
numberValid: startValidReader + 4,
}, {
name: "block: read NumColumns",
wantErr: "block: read NumColumns",
numberValid: startValidReader + 5,
}, {
name: "block: read NumRows",
wantErr: "block: read NumRows",
numberValid: startValidReader + 6,
}, {
name: "block: read column name",
wantErr: "block: read column name",
numberValid: startValidReader + 8,
}, {
name: "block: read column type",
wantErr: "block: read column type",
numberValid: startValidReader + 10,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.ReaderFunc = func(r io.Reader) io.Reader {
return &readErrorHelper{
err: errors.New("timeout"),
r: r,
numberValid: tt.numberValid,
}
}
c, err := ConnectConfig(context.Background(), config)
assert.NoError(t, err)
stmt, err := c.Select(context.Background(), "SELECT * FROM system.numbers LIMIT 5;")
require.Error(t, err)
require.Nil(t, stmt)
readErr, ok := err.(*readError)
require.True(t, ok)
require.Equal(t, readErr.msg, tt.wantErr)
require.EqualError(t, readErr.Unwrap(), "timeout")
assert.True(t, c.IsClosed())
})
}
}
================================================
FILE: chconn.go
================================================
package chconn
import (
"bufio"
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"net"
"strconv"
"time"
"github.com/vahid-sohrabloo/chconn/v2/column"
"github.com/vahid-sohrabloo/chconn/v2/internal/ctxwatch"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
)
const (
connStatusUninitialized = iota
connStatusConnecting
connStatusClosed
connStatusIdle
connStatusBusy
)
const (
// Name, version, revision, default DB
clientHello = 0
// whether the compression must be used,
// query text (without data for INSERTs).
clientQuery = 1
// A block of data (compressed or not).
clientData = 2
// Check that connection to the server is alive.
clientPing = 4
)
const (
// Name, version, revision.
serverHello = 0
// A block of data (compressed or not).
serverData = 1
// The exception during query execution.
serverException = 2
// Query execution progress: rows read, bytes read.
serverProgress = 3
// Ping response
serverPong = 4
// All packets were transmitted
serverEndOfStream = 5
// Packet with profiling info.
serverProfileInfo = 6
// A block with totals (compressed or not).
serverTotals = 7
// A block with minimums and maximums (compressed or not).
serverExtremes = 8
// Columns' description for default values calculation
serverTableColumns = 11
// list of unique parts ids.
//nolint:deadcode,unused,varcheck
serverPartUUIDs = 12
// String (UUID) describes a request for which next task is needed
//nolint:deadcode,unused,varcheck
serverReadTaskRequest = 13
// Packet with profile events from server
serverProfileEvents = 14
)
const (
dbmsVersionMajor = 1
dbmsVersionMinor = 0
dbmsVersionPatch = 0
dbmsVersionRevision = 54460
)
type queryProcessingStage uint64
const (
// queryProcessingStageComplete Completely.
queryProcessingStageComplete queryProcessingStage = 2
)
// DialFunc is a function that can be used to connect to a ClickHouse server.
type DialFunc func(ctx context.Context, network, addr string) (net.Conn, error)
// LookupFunc is a function that can be used to look up IP addresses for a host.
type LookupFunc func(ctx context.Context, host string) (addrs []string, err error)
// ReaderFunc is a function that can be used to get a reader for reading from the server.
type ReaderFunc func(io.Reader) io.Reader
// WriterFunc is a function that can be used to get a writer for writing to the server.
// Note: DO NOT use bufio.Writer; chconn doesn't support flushing it.
type WriterFunc func(io.Writer) io.Writer
// Conn is a low-level ClickHouse connection handle. It is not safe for concurrent usage.
type Conn interface {
// RawConn returns the raw underlying connection. Do not use it unless you know what you are doing.
RawConn() net.Conn
// Close closes the connection to the database.
Close() error
// IsClosed reports if the connection has been closed.
IsClosed() bool
// IsBusy reports if the connection is busy.
IsBusy() bool
// ServerInfo returns the server information.
ServerInfo() *ServerInfo
// Ping sends a ping to check that the connection to the server is alive.
Ping(ctx context.Context) error
// Exec executes a query without returning any rows.
// NOTE: don't use it for insert or select queries
Exec(ctx context.Context, query string) error
// ExecWithOption executes a query without returning any rows with Query options.
// NOTE: don't use it for insert or select queries
ExecWithOption(
ctx context.Context,
query string,
queryOptions *QueryOptions,
) error
// Insert executes an insert query and commits all columns' data.
//
// If the query is successful, the columns buffer will be reset.
//
// NOTE: only use for insert queries
Insert(ctx context.Context, query string, columns ...column.ColumnBasic) error
// InsertWithOption executes an insert query with the query options and commits all columns' data.
//
// If the query is successful, the columns buffer will be reset.
//
// NOTE: only use for insert queries
InsertWithOption(ctx context.Context, query string, queryOptions *QueryOptions, columns ...column.ColumnBasic) error
// InsertStream executes an insert query and returns an InsertStmt for streaming data.
//
// NOTE: only use for insert queries
InsertStream(ctx context.Context, query string) (InsertStmt, error)
// InsertStreamWithOption executes an insert query with the query options and returns an InsertStmt.
//
// If the query is successful, the columns buffer will be reset.
//
// NOTE: only use for insert queries
InsertStreamWithOption(
ctx context.Context,
query string,
queryOptions *QueryOptions) (InsertStmt, error)
// Select executes a select query and returns a SelectStmt.
//
// NOTE: only use for select queries
Select(ctx context.Context, query string, columns ...column.ColumnBasic) (SelectStmt, error)
// SelectWithOption executes a select query with the query options and returns a SelectStmt.
//
// NOTE: only use for select queries
SelectWithOption(
ctx context.Context,
query string,
queryOptions *QueryOptions,
columns ...column.ColumnBasic,
) (SelectStmt, error)
}
type writeFlusher interface {
io.Writer
Flush() error
}
type conn struct {
conn net.Conn // the underlying TCP connection
parameterStatuses map[string]string // parameters that have been reported by the server
serverInfo *ServerInfo
clientInfo *ClientInfo
config *Config
status byte // One of connStatus* constants
writer *readerwriter.Writer
writerTo io.Writer
writerToCompress io.Writer
reader *readerwriter.Reader
compress bool
contextWatcher *ctxwatch.ContextWatcher
block *block
profileEvent *ProfileEvent
}
// Connect establishes a connection to a ClickHouse server using the environment and connString (in URL or DSN format)
// to provide configuration. See documentation for ParseConfig for details. ctx can be used to cancel a connect attempt.
func Connect(ctx context.Context, connString string) (Conn, error) {
config, err := ParseConfig(connString)
if err != nil {
return nil, err
}
return ConnectConfig(ctx, config)
}
// ConnectConfig establishes a connection to a ClickHouse server using config. config must have been constructed with
// ParseConfig. ctx can be used to cancel a connect attempt.
//
// If config.Fallbacks are present they will sequentially be tried in case of error establishing network connection. An
// authentication error will terminate the chain of attempts (like libpq:
// https://www.postgresql.org/docs/12/libpq-connect.html#LIBPQ-MULTIPLE-HOSTS) and be returned as the error. Otherwise,
// if all attempts fail the last error is returned.
func ConnectConfig(octx context.Context, config *Config) (c Conn, err error) {
// Default values are set in ParseConfig. Enforce initial creation by ParseConfig rather than setting defaults from
// zero values.
if !config.createdByParseConfig {
panic("config must be created by ParseConfig")
}
// Simplify usage by treating primary config and fallbacks the same.
fallbackConfigs := []*FallbackConfig{
{
Host: config.Host,
Port: config.Port,
TLSConfig: config.TLSConfig,
},
}
fallbackConfigs = append(fallbackConfigs, config.Fallbacks...)
ctx := octx
fallbackConfigs, err = expandWithIPs(ctx, config.LookupFunc, fallbackConfigs)
if err != nil {
return nil, &connectError{config: config, msg: "hostname resolving error", err: err}
}
if len(fallbackConfigs) == 0 {
return nil, &connectError{config: config, msg: "hostname resolving error", err: ErrIPNotFound}
}
foundBestServer := false
var fallbackConfig *FallbackConfig
for _, fc := range fallbackConfigs {
// ConnectTimeout restricts the whole connection process.
if config.ConnectTimeout != 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(octx, config.ConnectTimeout)
//nolint:gocritic
defer cancel()
} else {
ctx = octx
}
c, err = connect(ctx, config, fc)
if err == nil {
foundBestServer = true
break
} else if chErr, ok := err.(*ChError); ok {
return nil, &connectError{config: config, msg: "server error", err: chErr}
}
}
if !foundBestServer && fallbackConfig != nil {
c, err = connect(ctx, config, fallbackConfig)
if cherr, ok := err.(*ChError); ok {
err = &connectError{config: config, msg: "server error", err: cherr}
}
}
if err != nil {
return nil, err // no need to wrap in connectError because it will already be wrapped in all cases except ChError
}
if config.AfterConnect != nil {
err := config.AfterConnect(ctx, c)
if err != nil {
c.RawConn().Close()
return nil, &connectError{config: config, msg: "AfterConnect error", err: err}
}
}
return c, nil
}
func expandWithIPs(ctx context.Context, lookupFn LookupFunc, fallbacks []*FallbackConfig) ([]*FallbackConfig, error) {
var configs []*FallbackConfig
for _, fb := range fallbacks {
ips, err := lookupFn(ctx, fb.Host)
if err != nil {
return nil, err
}
for _, ip := range ips {
splitIP, splitPort, err := net.SplitHostPort(ip)
if err == nil {
port, err := strconv.ParseUint(splitPort, 10, 16)
if err != nil {
return nil, fmt.Errorf("error parsing port (%s) from lookup: %w", splitPort, err)
}
configs = append(configs, &FallbackConfig{
Host: splitIP,
Port: uint16(port),
TLSConfig: fb.TLSConfig,
})
} else {
configs = append(configs, &FallbackConfig{
Host: ip,
Port: fb.Port,
TLSConfig: fb.TLSConfig,
})
}
}
}
return configs, nil
}
func connect(ctx context.Context, config *Config, fallbackConfig *FallbackConfig) (Conn, error) {
c := new(conn)
c.config = config
c.compress = config.Compress != CompressNone
var err error
network, address := NetworkAddress(fallbackConfig.Host, fallbackConfig.Port)
c.conn, err = config.DialFunc(ctx, network, address)
if err != nil {
var netErr net.Error
if errors.As(err, &netErr) && netErr.Timeout() {
err = &errTimeout{err: err}
}
return nil, &connectError{config: config, msg: "dial error", err: err}
}
c.parameterStatuses = make(map[string]string)
if fallbackConfig.TLSConfig != nil {
c.conn = tls.Client(c.conn, fallbackConfig.TLSConfig)
}
c.status = connStatusConnecting
c.contextWatcher = ctxwatch.NewContextWatcher(
func() {
c.conn.SetDeadline(time.Date(1, 1, 1, 1, 1, 1, 1, time.UTC)) //nolint:errcheck //no need
},
func() {
c.conn.SetDeadline(time.Time{}) //nolint:errcheck //no need
},
)
if ctx != context.Background() {
select {
case <-ctx.Done():
return nil, newContextAlreadyDoneError(ctx)
default:
}
c.contextWatcher.Watch(ctx)
defer c.contextWatcher.Unwatch()
}
c.writer = readerwriter.NewWriter()
if config.ReaderFunc != nil {
c.reader = readerwriter.NewReader(config.ReaderFunc(c.conn))
} else {
c.reader = readerwriter.NewReader(bufio.NewReaderSize(c.conn, c.config.MinReadBufferSize))
}
if config.WriterFunc != nil {
c.writerTo = config.WriterFunc(c.conn)
} else {
c.writerTo = c.conn
}
if c.compress {
c.writerToCompress = readerwriter.NewCompressWriter(c.writerTo, byte(config.Compress))
} else {
c.writerToCompress = c.writerTo
}
c.serverInfo = &ServerInfo{}
err = c.hello()
if err != nil {
return nil, preferContextOverNetTimeoutError(ctx, err)
}
c.sendAddendum()
c.block = newBlock()
c.profileEvent = newProfileEvent()
c.status = connStatusIdle
return c, nil
}
func (ch *conn) sendAddendum() {
if ch.serverInfo.Revision >= helper.DbmsMinProtocolWithQuotaKey {
ch.writer.String(ch.config.QuotaKey)
}
}
func (ch *conn) flushCompress() error {
if w, ok := ch.writerToCompress.(writeFlusher); ok {
return w.Flush()
}
return nil
}
func (ch *conn) RawConn() net.Conn {
return ch.conn
}
// hello sends the hello packet to the ClickHouse server and reads its reply.
func (ch *conn) hello() error {
ch.writer.Uvarint(clientHello)
ch.writer.String(ch.config.ClientName)
ch.writer.Uvarint(dbmsVersionMajor)
ch.writer.Uvarint(dbmsVersionMinor)
ch.writer.Uvarint(dbmsVersionRevision)
ch.writer.String(ch.config.Database)
ch.writer.String(ch.config.User)
ch.writer.String(ch.config.Password)
if _, err := ch.writer.WriteTo(ch.writerTo); err != nil {
return fmt.Errorf("write hello: %w", err)
}
res, err := ch.receiveAndProcessData(emptyOnProgress)
if err != nil {
return err
}
if ch.serverInfo.Revision == 0 {
return &unexpectedPacket{expected: "serverHello", actual: res}
}
return nil
}
// IsClosed reports if the connection has been closed.
func (ch *conn) IsClosed() bool {
return ch.status < connStatusIdle
}
// IsBusy reports if the connection is busy.
func (ch *conn) IsBusy() bool {
return ch.status == connStatusBusy
}
// lock locks the connection.
func (ch *conn) lock() error {
switch ch.status {
case connStatusBusy:
return &connLockError{status: "conn busy"} // This only should be possible in case of an application bug.
case connStatusClosed:
return &connLockError{status: "conn closed"}
case connStatusUninitialized:
return &connLockError{status: "conn uninitialized"}
}
ch.status = connStatusBusy
return nil
}
func (ch *conn) unlock() {
switch ch.status {
case connStatusBusy:
ch.status = connStatusIdle
case connStatusClosed:
default:
panic("BUG: cannot unlock unlocked connection") // This should only be possible if there is a bug in this package.
}
}
func (ch *conn) sendQueryWithOption(
query,
queryID string,
settings Settings,
parameters *Parameters,
) error {
ch.writer.Uvarint(clientQuery)
ch.writer.String(queryID)
if ch.serverInfo.Revision >= helper.DbmsMinRevisionWithClientInfo {
if ch.clientInfo == nil {
ch.clientInfo = &ClientInfo{}
}
ch.clientInfo.fillOSUserHostNameAndVersionInfo()
ch.clientInfo.ClientName = ch.config.Database + " " + ch.config.ClientName
ch.clientInfo.write(ch)
}
// settings
if settings != nil && ch.serverInfo.Revision >= helper.DbmsMinRevisionWithSettingsSerializedAsStrings {
settings.write(ch.writer)
}
ch.writer.String("")
if ch.serverInfo.Revision >= helper.DbmsMinRevisionWithInterServerSecret {
ch.writer.String("")
}
ch.writer.Uvarint(uint64(queryProcessingStageComplete))
// compression
if ch.compress {
ch.writer.Uint8(1)
} else {
ch.writer.Uint8(0)
}
ch.writer.String(query)
if ch.serverInfo.Revision >= helper.DbmsMinProtocolWithParameters {
parameters.write(ch.writer)
ch.writer.String("")
} else if parameters.hasParam() {
return errors.New("parameters are not supported by the server")
}
return ch.sendEmptyBlock()
}
func (ch *conn) sendData(block *block, numRows int) error {
ch.writer.Uvarint(clientData)
// name
ch.writer.String("")
// if compression is enabled, this part must be sent uncompressed
if ch.compress {
_, err := ch.writer.WriteTo(ch.writerTo)
if err != nil {
return &writeError{"write block info", err}
}
}
return block.writeHeader(ch, numRows)
}
func (ch *conn) sendEmptyBlock() error {
ch.block.reset()
return ch.sendData(ch.block, 0)
}
func (ch *conn) Close() error {
if ch.status == connStatusClosed {
return nil
}
ch.contextWatcher.Unwatch()
ch.status = connStatusClosed
return ch.conn.Close()
}
func (ch *conn) readTableColumn() {
// todo check errors
ch.reader.String() //nolint:errcheck //not needed
ch.reader.String() //nolint:errcheck //not needed
}
func (ch *conn) receiveAndProcessData(onProgress func(*Progress)) (interface{}, error) {
packet, err := ch.reader.Uvarint()
if err != nil {
return nil, &readError{"packet: read packet type", err}
}
switch packet {
case serverData, serverTotals, serverExtremes:
ch.block.reset()
err = ch.block.read(ch)
return ch.block, err
case serverProfileInfo:
profile := newProfile()
err = profile.read(ch)
return profile, err
case serverProgress:
progress := newProgress()
err = progress.read(ch)
if err == nil && onProgress != nil {
onProgress(progress)
return ch.receiveAndProcessData(onProgress)
}
return progress, err
case serverHello:
err = ch.serverInfo.read(ch.reader)
return nil, err
case serverPong:
return &pong{}, err
case serverException:
err := &ChError{}
defer ch.Close()
if errRead := err.read(ch.reader); errRead != nil {
return nil, errRead
}
return nil, err
case serverEndOfStream:
return nil, nil
case serverTableColumns:
ch.readTableColumn()
return ch.receiveAndProcessData(onProgress)
case serverProfileEvents:
ch.block.reset()
oldCompress := ch.compress
defer func() {
ch.compress = oldCompress
}()
ch.compress = false
err = ch.block.read(ch)
if err != nil {
return nil, err
}
err := ch.profileEvent.read(ch)
if err != nil {
return nil, err
}
return ch.profileEvent, nil
}
return nil, ¬ImplementedPacket{packet: packet}
}
var emptyOnProgress = func(*Progress) {
}
var emptyQueryOptions = &QueryOptions{
OnProgress: emptyOnProgress,
}
type QueryOptions struct {
QueryID string
Settings Settings
OnProgress func(*Progress)
OnProfile func(*Profile)
OnProfileEvent func(*ProfileEvent)
Parameters *Parameters
UseGoTime bool
}
func (ch *conn) Exec(ctx context.Context, query string) error {
return ch.ExecWithOption(ctx, query, nil)
}
func (ch *conn) ExecWithOption(
ctx context.Context,
query string,
queryOptions *QueryOptions,
) error {
err := ch.lock()
if err != nil {
return err
}
defer func() {
ch.unlock()
if err != nil {
ch.Close()
}
}()
if ctx != context.Background() {
select {
case <-ctx.Done():
return newContextAlreadyDoneError(ctx)
default:
}
ch.contextWatcher.Watch(ctx)
defer ch.contextWatcher.Unwatch()
}
if queryOptions == nil {
queryOptions = emptyQueryOptions
}
err = ch.sendQueryWithOption(query, queryOptions.QueryID, queryOptions.Settings, queryOptions.Parameters)
if err != nil {
return preferContextOverNetTimeoutError(ctx, err)
}
if queryOptions.OnProgress == nil {
queryOptions.OnProgress = emptyOnProgress
}
_, err = ch.receiveAndProcessData(queryOptions.OnProgress)
return preferContextOverNetTimeoutError(ctx, err)
}
================================================
FILE: chconn_test.go
================================================
package chconn
import (
"context"
"crypto/tls"
"errors"
"io"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConnect(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING") + " connect_timeout=10"
conn, err := Connect(context.Background(), connString)
require.NoError(t, err)
require.NoError(t, conn.Ping(context.Background()))
require.NotEmpty(t, conn.ServerInfo().String())
require.Nil(t, conn.Close())
require.True(t, conn.IsClosed())
// test protected two close
require.Nil(t, conn.Close())
}
func TestConnectError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := ParseConfig(connString)
require.NoError(t, err)
config.Password = "invalid password"
config.User = "invalid username"
conn, err := ConnectConfig(context.Background(), config)
assert.Contains(t,
err.Error(),
"server error ( DB::Exception (516)")
assert.Contains(t,
errors.Unwrap(err).Error(),
" DB::Exception (516):")
assert.Nil(t, conn)
conn, err = Connect(context.Background(), "host>0")
assert.EqualError(t,
err,
"cannot parse `host>0`: failed to parse as DSN (invalid dsn)")
assert.Nil(t, conn)
ctx, cancel := context.WithCancel(context.Background())
cancel()
conn, err = Connect(ctx, connString)
assert.Error(t,
errors.Unwrap(err),
context.Canceled)
assert.Nil(t, conn)
conn, err = Connect(context.Background(), "host=invalid_host")
assert.Contains(t,
err.Error(),
"hostname resolving error")
assert.Nil(t, conn)
config, err = ParseConfig(connString)
require.NoError(t, err)
config.Port = 63666
conn, err = ConnectConfig(context.Background(), config)
assert.Contains(t,
err.Error(),
"connect: connection refused")
assert.Nil(t, conn)
config, err = ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.AfterConnect = func(ctx context.Context, c Conn) error {
return errors.New("afterConnect err")
}
_, err = ConnectConfig(context.Background(), config)
assert.EqualError(t,
errors.Unwrap(err),
"afterConnect err")
config, err = ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.WriterFunc = func(w io.Writer) io.Writer {
return &writerErrorHelper{
err: errors.New("timeout"),
w: w,
numberValid: 0,
}
}
_, err = ConnectConfig(context.Background(), config)
assert.EqualError(t, err, "write hello: timeout")
}
func TestEndOfStream(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := Connect(context.Background(), connString)
require.NoError(t, err)
err = conn.Exec(context.Background(), `CREATE TABLE IF NOT EXISTS example (
country_code FixedString(2),
os_id UInt8,
browser_id UInt8,
categories Array(Int16),
action_day Date,
action_time DateTime
) engine=Memory`)
require.NoError(t, err)
}
func TestException(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := Connect(context.Background(), connString)
require.NoError(t, err)
require.NoError(t, conn.Ping(context.Background()))
err = conn.Exec(context.Background(), `invalid query`)
var chError *ChError
require.True(t, errors.As(err, &chError))
require.Equal(t, chError.Code, ChErrorSyntaxError)
require.Equal(t, chError.Name, "DB::Exception")
}
func TestTlsPreferConnect(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_TLS_CONN_STRING")
if connString == "" {
t.Skip("please set CHX_TEST_TCP_TLS_CONN_STRING env")
return
}
conn, err := Connect(context.Background(), connString)
require.NoError(t, err)
require.NoError(t, conn.Ping(context.Background()))
if _, ok := conn.RawConn().(*tls.Conn); !ok {
t.Error("not a TLS connection")
}
conn.RawConn().Close()
}
func TestConnectConfigRequiresConnConfigFromParseConfig(t *testing.T) {
t.Parallel()
config := &Config{}
require.PanicsWithValue(t, "config must be created by ParseConfig", func() {
ConnectConfig(context.Background(), config)
})
}
func TestLockError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
c, err := Connect(context.Background(), connString)
require.NoError(t, err)
c.(*conn).status = connStatusBusy
require.EqualError(t, c.(*conn).lock(), "conn busy")
c.(*conn).status = connStatusClosed
require.EqualError(t, c.(*conn).lock(), "conn closed")
c.(*conn).status = connStatusUninitialized
require.EqualError(t, c.(*conn).lock(), "conn uninitialized")
resSelect, err := c.Select(context.Background(), "SET enable_http_compression=1")
require.EqualError(t, err, "conn uninitialized")
require.Nil(t, resSelect)
require.EqualError(t, c.(*conn).lock(), "conn uninitialized")
}
func TestUnlockError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
c, err := Connect(context.Background(), connString)
require.NoError(t, err)
c.(*conn).status = connStatusUninitialized
require.PanicsWithValue(t, "BUG: cannot unlock unlocked connection", func() {
c.(*conn).unlock()
})
}
func TestExecError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := ParseConfig(connString)
require.NoError(t, err)
c, err := ConnectConfig(context.Background(), config)
require.NoError(t, err)
c.(*conn).status = connStatusUninitialized
err = c.Exec(context.Background(), "SET enable_http_compression=1")
require.EqualError(t, err, "conn uninitialized")
require.EqualError(t, c.(*conn).lock(), "conn uninitialized")
c.Close()
config.WriterFunc = func(w io.Writer) io.Writer {
return &writerErrorHelper{
err: errors.New("timeout"),
w: w,
numberValid: 1,
}
}
c, err = ConnectConfig(context.Background(), config)
require.NoError(t, err)
err = c.Exec(context.Background(), "SET enable_http_compression=1")
require.EqualError(t, err, "write block info (timeout)")
assert.True(t, c.IsClosed())
}
func TestExecCtxError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := ParseConfig(connString)
require.NoError(t, err)
c, err := ConnectConfig(context.Background(), config)
require.NoError(t, err)
ctx, cancel := context.WithCancel(context.Background())
cancel()
err = c.Exec(ctx, "select * from system.numbers limit 1")
require.EqualError(t, err, "timeout: context already done: context canceled")
assert.False(t, c.IsClosed())
config.WriterFunc = func(w io.Writer) io.Writer {
return &writerSlowHelper{
w: w,
sleep: time.Second,
}
}
c, err = ConnectConfig(context.Background(), config)
require.NoError(t, err)
ctx, cancel = context.WithTimeout(context.Background(), time.Millisecond*50)
defer cancel()
err = c.Exec(ctx, "select * from system.numbers")
require.EqualError(t, errors.Unwrap(err), "context deadline exceeded")
assert.True(t, c.IsClosed())
}
func TestReceivePackError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := ParseConfig(connString)
require.NoError(t, err)
config.ReaderFunc = func(r io.Reader) io.Reader {
return &readErrorHelper{
err: errors.New("timeout"),
r: r,
numberValid: 13,
}
}
c, err := ConnectConfig(context.Background(), config)
require.NoError(t, err)
err = c.Exec(context.Background(), `SELECT * FROM system.numbers limit 1`)
require.EqualError(t, err, "packet: read packet type (timeout)")
assert.True(t, c.IsClosed())
}
================================================
FILE: chpool/common_test.go
================================================
package chpool
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
)
// Conn.Release is an asynchronous process that returns immediately. There is no signal when the actual work is
// completed. To test something that relies on the actual work for Conn.Release being completed we must simply wait.
// This function wraps the sleep so there is more meaning for the callers.
func waitForReleaseToComplete() {
time.Sleep(500 * time.Millisecond)
}
type execer interface {
Exec(ctx context.Context, sql string) error
}
func testExec(t *testing.T, db execer) {
err := db.Exec(context.Background(), "SET enable_http_compression=1")
require.NoError(t, err)
}
type selecter interface {
Select(ctx context.Context, query string, columns ...column.ColumnBasic) (chconn.SelectStmt, error)
}
func testSelect(t *testing.T, db selecter) {
var (
num []uint64
)
col := column.New[uint64]()
stmt, err := db.Select(context.Background(), "SELECT * FROM system.numbers LIMIT 5;", col)
require.NoError(t, err)
for stmt.Next() {
assert.NoError(t, err)
num = col.Read(num)
assert.NoError(t, err)
}
assert.NoError(t, stmt.Err())
assert.Equal(t, 5, len(num))
stmt.Close()
assert.ElementsMatch(t, []uint64{0, 1, 2, 3, 4}, num)
}
func assertConfigsEqual(t *testing.T, expected, actual *Config, testName string) {
if !assert.NotNil(t, expected) {
return
}
if !assert.NotNil(t, actual) {
return
}
assert.Equalf(t, expected.ConnString(), actual.ConnString(), "%s - ConnString", testName)
// Can't test function equality, so just test that they are set or not.
assert.Equalf(t, expected.AfterConnect == nil, actual.AfterConnect == nil, "%s - AfterConnect", testName)
assert.Equalf(t, expected.BeforeAcquire == nil, actual.BeforeAcquire == nil, "%s - BeforeAcquire", testName)
assert.Equalf(t, expected.AfterRelease == nil, actual.AfterRelease == nil, "%s - AfterRelease", testName)
assert.Equalf(t, expected.MaxConnLifetime, actual.MaxConnLifetime, "%s - MaxConnLifetime", testName)
assert.Equalf(t, expected.MaxConnIdleTime, actual.MaxConnIdleTime, "%s - MaxConnIdleTime", testName)
assert.Equalf(t, expected.MaxConns, actual.MaxConns, "%s - MaxConns", testName)
assert.Equalf(t, expected.MinConns, actual.MinConns, "%s - MinConns", testName)
assert.Equalf(t, expected.HealthCheckPeriod, actual.HealthCheckPeriod, "%s - HealthCheckPeriod", testName)
assertConnConfigsEqual(t, expected.ConnConfig, actual.ConnConfig, testName)
}
func assertConnConfigsEqual(t *testing.T, expected, actual *chconn.Config, testName string) {
if !assert.NotNil(t, expected) {
return
}
if !assert.NotNil(t, actual) {
return
}
assert.Equalf(t, expected.ConnString(), actual.ConnString(), "%s - ConnString", testName)
assert.Equalf(t, expected.Host, actual.Host, "%s - Host", testName)
assert.Equalf(t, expected.Database, actual.Database, "%s - Database", testName)
assert.Equalf(t, expected.Port, actual.Port, "%s - Port", testName)
assert.Equalf(t, expected.User, actual.User, "%s - User", testName)
assert.Equalf(t, expected.Password, actual.Password, "%s - Password", testName)
assert.Equalf(t, expected.ConnectTimeout, actual.ConnectTimeout, "%s - ConnectTimeout", testName)
assert.Equalf(t, expected.RuntimeParams, actual.RuntimeParams, "%s - RuntimeParams", testName)
// Can't test function equality, so just test that they are set or not.
assert.Equalf(t, expected.ValidateConnect == nil, actual.ValidateConnect == nil, "%s - ValidateConnect", testName)
assert.Equalf(t, expected.AfterConnect == nil, actual.AfterConnect == nil, "%s - AfterConnect", testName)
if assert.Equalf(t, expected.TLSConfig == nil, actual.TLSConfig == nil, "%s - TLSConfig", testName) {
if expected.TLSConfig != nil {
assert.Equalf(t,
expected.TLSConfig.InsecureSkipVerify,
actual.TLSConfig.InsecureSkipVerify,
"%s - TLSConfig InsecureSkipVerify", testName)
assert.Equalf(t,
expected.TLSConfig.ServerName,
actual.TLSConfig.ServerName,
"%s - TLSConfig ServerName", testName)
}
}
if assert.Equalf(t, len(expected.Fallbacks), len(actual.Fallbacks), "%s - Fallbacks", testName) {
for i := range expected.Fallbacks {
assert.Equalf(t,
expected.Fallbacks[i].Host,
actual.Fallbacks[i].Host,
"%s - Fallback %d - Host", testName, i)
assert.Equalf(t,
expected.Fallbacks[i].Port,
actual.Fallbacks[i].Port,
"%s - Fallback %d - Port", testName, i)
if assert.Equalf(t,
expected.Fallbacks[i].TLSConfig == nil,
actual.Fallbacks[i].TLSConfig == nil,
"%s - Fallback %d - TLSConfig", testName, i) {
if expected.Fallbacks[i].TLSConfig != nil {
assert.Equalf(t,
expected.Fallbacks[i].TLSConfig.InsecureSkipVerify,
actual.Fallbacks[i].TLSConfig.InsecureSkipVerify,
"%s - Fallback %d - TLSConfig InsecureSkipVerify", testName)
assert.Equalf(t,
expected.Fallbacks[i].TLSConfig.ServerName,
actual.Fallbacks[i].TLSConfig.ServerName,
"%s - Fallback %d - TLSConfig ServerName", testName)
}
}
}
}
}
================================================
FILE: chpool/conn.go
================================================
package chpool
import (
"context"
"sync/atomic"
puddle "github.com/jackc/puddle/v2"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
)
// Conn is an acquired *chconn.Conn from a Pool.
type Conn interface {
Release()
// ExecWithOption executes a query without returning any rows with Query options.
// NOTE: don't use it for insert or select queries
ExecWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
) error
// SelectWithOption executes a select query with the query options and returns a SelectStmt.
// NOTE: only use for select queries
SelectWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
columns ...column.ColumnBasic,
) (chconn.SelectStmt, error)
// InsertWithOption executes an insert query with the query options and commits all columns' data.
// NOTE: only use for insert queries
InsertWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions, columns ...column.ColumnBasic) error
// InsertStreamWithOption executes an insert query with the query options and returns an InsertStmt.
// NOTE: only use for insert queries
InsertStreamWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions) (chconn.InsertStmt, error)
// Conn returns the underlying chconn.Conn
Conn() chconn.Conn
// Hijack assumes ownership of the connection from the pool. Caller is responsible for closing the connection. Hijack
// will panic if called on an already released or hijacked connection.
Hijack() chconn.Conn
Ping(ctx context.Context) error
}
type conn struct {
res *puddle.Resource[*connResource]
p *pool
}
// Release returns c to the pool it was acquired from. Once Release has been called, other methods must not be called.
// However, it is safe to call Release multiple times. Subsequent calls after the first will be ignored.
func (c *conn) Release() {
if c.res == nil {
return
}
conn := c.Conn()
res := c.res
c.res = nil
if conn.IsClosed() || conn.IsBusy() {
res.Destroy()
// Signal the health check to run since we just destroyed a connection
// and we might be below minConns now
c.p.triggerHealthCheck()
return
}
// If the pool is consistently being used, we might never get to check the
// lifetime of a connection since we only check idle connections in checkConnsHealth
// so we also check the lifetime here and force a health check
if c.p.isExpired(res) {
atomic.AddInt64(&c.p.lifetimeDestroyCount, 1)
res.Destroy()
// Signal the health check to run since we just destroyed a connection
// and we might be below minConns now
c.p.triggerHealthCheck()
return
}
if c.p.afterRelease == nil {
res.Release()
return
}
go func() {
if c.p.afterRelease(conn) {
res.Release()
} else {
res.Destroy()
// Signal the health check to run since we just destroyed a connection
// and we might be below minConns now
c.p.triggerHealthCheck()
}
}()
}
// Hijack assumes ownership of the connection from the pool. Caller is responsible for closing the connection. Hijack
// will panic if called on an already released or hijacked connection.
func (c *conn) Hijack() chconn.Conn {
if c.res == nil {
panic("cannot hijack already released or hijacked connection")
}
conn := c.Conn()
res := c.res
c.res = nil
res.Hijack()
return conn
}
func (c *conn) ExecWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
) error {
return c.Conn().ExecWithOption(ctx, query, queryOptions)
}
func (c *conn) Ping(ctx context.Context) error {
return c.Conn().Ping(ctx)
}
func (c *conn) SelectWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
columns ...column.ColumnBasic,
) (chconn.SelectStmt, error) {
s, err := c.Conn().SelectWithOption(ctx, query, queryOptions, columns...)
if err != nil {
return nil, err
}
return &selectStmt{
SelectStmt: s,
conn: c,
}, nil
}
func (c *conn) InsertWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions, columns ...column.ColumnBasic) error {
return c.Conn().InsertWithOption(ctx, query, queryOptions, columns...)
}
func (c *conn) InsertStreamWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions) (chconn.InsertStmt, error) {
s, err := c.Conn().InsertStreamWithOption(ctx, query, queryOptions)
if err != nil {
return nil, err
}
return &insertStmt{
InsertStmt: s,
conn: c,
}, nil
}
func (c *conn) Conn() chconn.Conn {
return c.connResource().conn
}
func (c *conn) connResource() *connResource {
return c.res.Value()
}
================================================
FILE: chpool/insert_stmt.go
================================================
package chpool
import (
"context"
"github.com/vahid-sohrabloo/chconn/v2"
)
type insertStmt struct {
chconn.InsertStmt
conn Conn
}
func (s *insertStmt) Flush(ctx context.Context) error {
if s.conn == nil {
return nil
}
defer s.conn.Release()
return s.InsertStmt.Flush(ctx)
}
func (s *insertStmt) Close() {
if s.conn == nil {
return
}
s.InsertStmt.Close()
s.conn.Release()
}
================================================
FILE: chpool/pool.go
================================================
package chpool
import (
"context"
"errors"
"fmt"
"math/rand"
"runtime"
"strconv"
"sync"
"sync/atomic"
"syscall"
"time"
puddle "github.com/jackc/puddle/v2"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
)
var defaultMaxConns = int32(4)
var defaultMinConns = int32(0)
var defaultCreateIdleTimeout = time.Second * 10
var defaultMaxConnLifetime = time.Hour
var defaultMaxConnIdleTime = time.Minute * 30
var defaultHealthCheckPeriod = time.Minute
type connResource struct {
conn chconn.Conn
conns []conn
}
func (cr *connResource) getConn(p *pool, res *puddle.Resource[*connResource]) Conn {
if len(cr.conns) == 0 {
cr.conns = make([]conn, 128)
}
c := &cr.conns[len(cr.conns)-1]
cr.conns = cr.conns[0 : len(cr.conns)-1]
c.res = res
c.p = p
return c
}
// Pool is a connection pool for chconn
type Pool interface {
// Close closes all connections in the pool and rejects future Acquire calls. Blocks until all connections are returned
// to pool and closed.
Close()
Acquire(ctx context.Context) (Conn, error)
	// AcquireFunc acquires a Conn and calls f with that Conn. ctx will only affect the Acquire. It has no effect on the
	// call of f. The return value is either an error acquiring the Conn or the return value of f. The Conn is
	// automatically released after the call of f.
AcquireFunc(ctx context.Context, f func(Conn) error) error
// AcquireAllIdle atomically acquires all currently idle connections. Its intended use is for health check and
// keep-alive functionality. It does not update pool statistics.
AcquireAllIdle(ctx context.Context) []Conn
// Exec executes a query without returning any rows.
	// NOTE: don't use it for insert or select queries
Exec(ctx context.Context, query string) error
// ExecWithOption executes a query without returning any rows with Query options.
	// NOTE: don't use it for insert or select queries
ExecWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
) error
	// Insert executes an insert query and commits all columns data.
//
// If the query is successful, the columns buffer will be reset.
//
	// NOTE: only use for insert queries
Insert(ctx context.Context, query string, columns ...column.ColumnBasic) error
	// InsertWithOption executes an insert query with the query options and commits all columns data.
//
// If the query is successful, the columns buffer will be reset.
//
	// NOTE: only use for insert queries
InsertWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions, columns ...column.ColumnBasic) error
	// InsertStream executes an insert query and returns an InsertStmt.
//
	// NOTE: only use for insert queries
InsertStream(ctx context.Context, query string) (chconn.InsertStmt, error)
	// InsertStreamWithOption executes an insert query with the query options and returns an InsertStmt.
	//
	// NOTE: only use for insert queries
InsertStreamWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions) (chconn.InsertStmt, error)
	// Select executes a query and returns a select stmt.
//
	// NOTE: only use for select queries
Select(ctx context.Context, query string, columns ...column.ColumnBasic) (chconn.SelectStmt, error)
	// SelectWithOption executes a query with the query options and returns a select stmt.
//
	// NOTE: only use for select queries
SelectWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
columns ...column.ColumnBasic,
) (chconn.SelectStmt, error)
// Ping sends a ping to check that the connection to the server is alive.
Ping(ctx context.Context) error
// Stat returns a chpool.Stat struct with a snapshot of Pool statistics.
Stat() *Stat
// Reset closes all connections, but leaves the pool open. It is intended for use when an error is detected that would
// disrupt all connections (such as a network interruption or a server state change).
//
// It is safe to reset a pool while connections are checked out. Those connections will be closed when they are returned
// to the pool.
Reset()
// Config returns a copy of config that was used to initialize this pool.
Config() *Config
}
type pool struct {
p *puddle.Pool[*connResource]
config *Config
beforeConnect func(context.Context, *chconn.Config) error
afterConnect func(context.Context, chconn.Conn) error
beforeAcquire func(context.Context, chconn.Conn) bool
afterRelease func(chconn.Conn) bool
minConns int32
maxConns int32
maxConnLifetime time.Duration
maxConnLifetimeJitter time.Duration
maxConnIdleTime time.Duration
healthCheckPeriod time.Duration
healthCheckChan chan struct{}
newConnsCount int64
lifetimeDestroyCount int64
idleDestroyCount int64
closeOnce sync.Once
closeChan chan struct{}
}
// Config is the configuration struct for creating a pool. It must be created by ParseConfig and then it can be
// modified. A manually initialized Config will cause NewWithConfig to panic.
type Config struct {
ConnConfig *chconn.Config
// BeforeConnect is called before a new connection is made. It is passed a copy of the underlying chconn.Config and
// will not impact any existing open connections.
BeforeConnect func(context.Context, *chconn.Config) error
// AfterConnect is called after a connection is established, but before it is added to the pool.
AfterConnect func(context.Context, chconn.Conn) error
// BeforeAcquire is called before a connection is acquired from the pool. It must return true to allow the
	// acquisition or false to indicate that the connection should be destroyed and a different connection should be
// acquired.
BeforeAcquire func(context.Context, chconn.Conn) bool
// AfterRelease is called after a connection is released, but before it is returned to the pool. It must return true to
// return the connection to the pool or false to destroy the connection.
AfterRelease func(chconn.Conn) bool
// MaxConnLifetime is the duration since creation after which a connection will be automatically closed.
MaxConnLifetime time.Duration
// MaxConnLifetimeJitter is the duration after MaxConnLifetime to randomly decide to close a connection.
// This helps prevent all connections from being closed at the exact same time, starving the pool.
MaxConnLifetimeJitter time.Duration
// MaxConnIdleTime is the duration after which an idle connection will be automatically closed by the health check.
MaxConnIdleTime time.Duration
// MaxConns is the maximum size of the pool. The default is the greater of 4 or runtime.NumCPU().
MaxConns int32
	// MinConns is the minimum size of the pool. After a connection closes, the pool might dip below MinConns. A low
// number of MinConns might mean the pool is empty after MaxConnLifetime until the health check has a chance
// to create new connections.
MinConns int32
// HealthCheckPeriod is the duration between checks of the health of idle connections.
HealthCheckPeriod time.Duration
	// CreateIdleTimeout is the timeout for creating idle connections
CreateIdleTimeout time.Duration
createdByParseConfig bool // Used to enforce created by ParseConfig rule.
}
// Copy returns a deep copy of the config that is safe to use and modify.
// The only exception is the tls.Config:
// according to the tls.Config docs it must not be modified after creation.
func (c *Config) Copy() *Config {
newConfig := new(Config)
*newConfig = *c
newConfig.ConnConfig = c.ConnConfig.Copy()
return newConfig
}
// ConnString returns the connection string as parsed by chpool.ParseConfig into chpool.Config.
func (c *Config) ConnString() string { return c.ConnConfig.ConnString() }
// New creates a new Pool. See ParseConfig for information on connString format.
func New(connString string) (Pool, error) {
config, err := ParseConfig(connString)
if err != nil {
return nil, err
}
return NewWithConfig(config)
}
// NewWithConfig creates a new Pool. config must have been created by ParseConfig.
func NewWithConfig(config *Config) (Pool, error) {
// Default values are set in ParseConfig. Enforce initial creation by ParseConfig rather than setting defaults from
// zero values.
if !config.createdByParseConfig {
panic("config must be created by ParseConfig")
}
p := &pool{
config: config,
beforeConnect: config.BeforeConnect,
afterConnect: config.AfterConnect,
beforeAcquire: config.BeforeAcquire,
afterRelease: config.AfterRelease,
minConns: config.MinConns,
maxConns: config.MaxConns,
maxConnLifetime: config.MaxConnLifetime,
maxConnLifetimeJitter: config.MaxConnLifetimeJitter,
maxConnIdleTime: config.MaxConnIdleTime,
healthCheckPeriod: config.HealthCheckPeriod,
healthCheckChan: make(chan struct{}, 1),
closeChan: make(chan struct{}),
}
var err error
p.p, err = puddle.NewPool(
&puddle.Config[*connResource]{
Constructor: func(ctx context.Context) (*connResource, error) {
connConfig := p.config.ConnConfig.Copy()
// Connection will continue in background even if Acquire is canceled. Ensure that a connect won't hang forever.
if connConfig.ConnectTimeout <= 0 {
connConfig.ConnectTimeout = 2 * time.Minute
}
if p.beforeConnect != nil {
if err := p.beforeConnect(ctx, connConfig); err != nil {
return nil, err
}
}
c, err := chconn.ConnectConfig(ctx, connConfig)
if err != nil {
return nil, err
}
if p.afterConnect != nil {
err := p.afterConnect(ctx, c)
if err != nil {
c.Close()
return nil, err
}
}
cr := &connResource{
conn: c,
conns: make([]conn, 64),
}
return cr, nil
},
Destructor: func(value *connResource) {
value.conn.Close()
},
MaxSize: config.MaxConns,
},
)
if err != nil {
return nil, err
}
go func() {
//nolint:errcheck // todo find a way to handle this error
p.createIdleResources(int(p.minConns))
p.backgroundHealthCheck()
}()
return p, nil
}
// ParseConfig builds a Config from connString. It parses connString with the same behavior as chconn.ParseConfig with the
// addition of the following variables:
//
// pool_max_conns: integer greater than 0
// pool_min_conns: integer 0 or greater
// pool_max_conn_lifetime: duration string
// pool_max_conn_idle_time: duration string
// pool_health_check_period: duration string
// pool_max_conn_lifetime_jitter: duration string
// pool_create_idle_timeout: duration string
//
// See Config for definitions of these arguments.
//
// # Example DSN
// user=vahid password=secret host=clickhouse.example.com port=9000 dbname=mydb sslmode=verify-ca pool_max_conns=10
//
// # Example URL
// clickhouse://vahid:secret@ch.example.com:9000/mydb?sslmode=verify-ca&pool_max_conns=10
func ParseConfig(connString string) (*Config, error) {
chConfig, err := chconn.ParseConfig(connString)
if err != nil {
return nil, err
}
config := &Config{
ConnConfig: chConfig,
createdByParseConfig: true,
}
if s, ok := config.ConnConfig.RuntimeParams["pool_max_conns"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_max_conns")
n, err := strconv.ParseInt(s, 10, 32)
if err != nil {
return nil, fmt.Errorf("cannot parse pool_max_conns: %w", err)
}
if n < 1 {
//nolint:goerr113
return nil, fmt.Errorf("pool_max_conns too small: %d", n)
}
config.MaxConns = int32(n)
} else {
config.MaxConns = defaultMaxConns
if numCPU := int32(runtime.NumCPU()); numCPU > config.MaxConns {
config.MaxConns = numCPU
}
}
if s, ok := config.ConnConfig.RuntimeParams["pool_min_conns"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_min_conns")
n, err := strconv.ParseInt(s, 10, 32)
if err != nil {
return nil, fmt.Errorf("cannot parse pool_min_conns: %w", err)
}
config.MinConns = int32(n)
} else {
config.MinConns = defaultMinConns
}
if s, ok := config.ConnConfig.RuntimeParams["pool_max_conn_lifetime"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_max_conn_lifetime")
d, err := time.ParseDuration(s)
if err != nil {
return nil, fmt.Errorf("invalid pool_max_conn_lifetime: %w", err)
}
config.MaxConnLifetime = d
} else {
config.MaxConnLifetime = defaultMaxConnLifetime
}
if s, ok := config.ConnConfig.RuntimeParams["pool_max_conn_idle_time"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_max_conn_idle_time")
d, err := time.ParseDuration(s)
if err != nil {
return nil, fmt.Errorf("invalid pool_max_conn_idle_time: %w", err)
}
config.MaxConnIdleTime = d
} else {
config.MaxConnIdleTime = defaultMaxConnIdleTime
}
if s, ok := config.ConnConfig.RuntimeParams["pool_health_check_period"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_health_check_period")
d, err := time.ParseDuration(s)
if err != nil {
return nil, fmt.Errorf("invalid pool_health_check_period: %w", err)
}
config.HealthCheckPeriod = d
} else {
config.HealthCheckPeriod = defaultHealthCheckPeriod
}
if s, ok := config.ConnConfig.RuntimeParams["pool_max_conn_lifetime_jitter"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_max_conn_lifetime_jitter")
d, err := time.ParseDuration(s)
if err != nil {
return nil, fmt.Errorf("invalid pool_max_conn_lifetime_jitter: %w", err)
}
config.MaxConnLifetimeJitter = d
}
if s, ok := config.ConnConfig.RuntimeParams["pool_create_idle_timeout"]; ok {
delete(config.ConnConfig.RuntimeParams, "pool_create_idle_timeout")
d, err := time.ParseDuration(s)
if err != nil {
return nil, fmt.Errorf("invalid pool_create_idle_timeout: %w", err)
}
config.CreateIdleTimeout = d
} else {
config.CreateIdleTimeout = defaultCreateIdleTimeout
}
return config, nil
}
// Close closes all connections in the pool and rejects future Acquire calls. Blocks until all connections are returned
// to pool and closed.
func (p *pool) Close() {
p.closeOnce.Do(func() {
close(p.closeChan)
p.p.Close()
})
}
func (p *pool) isExpired(res *puddle.Resource[*connResource]) bool {
now := time.Now()
// Small optimization to avoid rand. If it's over lifetime AND jitter, immediately
// return true.
if now.Sub(res.CreationTime()) > p.maxConnLifetime+p.maxConnLifetimeJitter {
return true
}
if p.maxConnLifetimeJitter == 0 {
return false
}
//nolint:gosec // rand is not used for security purposes
jitterSecs := rand.Float64() * p.maxConnLifetimeJitter.Seconds()
return now.Sub(res.CreationTime()) > p.maxConnLifetime+(time.Duration(jitterSecs)*time.Second)
}
func (p *pool) triggerHealthCheck() {
go func() {
// Destroy is asynchronous so we give it time to actually remove itself from
// the pool otherwise we might try to check the pool size too soon
time.Sleep(500 * time.Millisecond)
select {
case p.healthCheckChan <- struct{}{}:
default:
}
}()
}
func (p *pool) backgroundHealthCheck() {
ticker := time.NewTicker(p.healthCheckPeriod)
defer ticker.Stop()
for {
select {
case <-p.closeChan:
return
case <-p.healthCheckChan:
p.checkHealth()
case <-ticker.C:
p.checkHealth()
}
}
}
func (p *pool) checkHealth() {
for {
// If checkMinConns failed we don't destroy any connections since we couldn't
// even get to minConns
if err := p.checkMinConns(); err != nil {
// Should we log this error somewhere?
break
}
if !p.checkConnsHealth() {
// Since we didn't destroy any connections we can stop looping
break
}
// Technically Destroy is asynchronous but 500ms should be enough for it to
// remove it from the underlying pool
select {
case <-p.closeChan:
return
case <-time.After(500 * time.Millisecond):
}
}
}
// checkConnsHealth checks all idle connections, destroys any that have exceeded their lifetime or
// idle time, and reports whether any were destroyed
func (p *pool) checkConnsHealth() bool {
var destroyed bool
totalConns := p.Stat().TotalConns()
resources := p.p.AcquireAllIdle()
for _, res := range resources {
// We're okay going under minConns if the lifetime is up
if p.isExpired(res) && totalConns >= p.minConns {
atomic.AddInt64(&p.lifetimeDestroyCount, 1)
res.Destroy()
destroyed = true
// Since Destroy is async we manually decrement totalConns.
totalConns--
} else if res.IdleDuration() > p.maxConnIdleTime && totalConns > p.minConns {
atomic.AddInt64(&p.idleDestroyCount, 1)
res.Destroy()
destroyed = true
// Since Destroy is async we manually decrement totalConns.
totalConns--
} else {
res.ReleaseUnused()
}
}
return destroyed
}
func (p *pool) checkMinConns() error {
// TotalConns can include ones that are being destroyed but we should have
// sleep(500ms) around all of the destroys to help prevent that from throwing
// off this check
toCreate := p.minConns - p.Stat().TotalConns()
if toCreate > 0 {
return p.createIdleResources(int(toCreate))
}
return nil
}
func (p *pool) createIdleResources(targetResources int) error {
ctx, cancel := context.WithTimeout(context.Background(), p.config.CreateIdleTimeout)
defer cancel()
errs := make(chan error, targetResources)
for i := 0; i < targetResources; i++ {
go func() {
atomic.AddInt64(&p.newConnsCount, 1)
err := p.p.CreateResource(ctx)
errs <- err
}()
}
var firstError error
for i := 0; i < targetResources; i++ {
err := <-errs
if err != nil && firstError == nil {
cancel()
firstError = err
}
}
return firstError
}
// Acquire returns a connection (Conn) from the Pool
func (p *pool) Acquire(ctx context.Context) (Conn, error) {
for {
res, err := p.p.Acquire(ctx)
if err != nil {
return nil, fmt.Errorf("acquire: %w", err)
}
cr := res.Value()
if res.IdleDuration() > time.Second {
err := cr.conn.Ping(ctx)
if err != nil {
res.Destroy()
continue
}
}
if p.beforeAcquire == nil || p.beforeAcquire(ctx, cr.conn) {
return cr.getConn(p, res), nil
}
res.Destroy()
}
}
// AcquireFunc acquires a Conn and calls f with that Conn. ctx will only affect the Acquire. It has no effect on the
// call of f. The return value is either an error acquiring the Conn or the return value of f. The Conn is
// automatically released after the call of f.
func (p *pool) AcquireFunc(ctx context.Context, f func(Conn) error) error {
conn, err := p.Acquire(ctx)
if err != nil {
return err
}
defer conn.Release()
return f(conn)
}
// AcquireAllIdle atomically acquires all currently idle connections. Its intended use is for health check and
// keep-alive functionality. It does not update pool statistics.
func (p *pool) AcquireAllIdle(ctx context.Context) []Conn {
resources := p.p.AcquireAllIdle()
conns := make([]Conn, 0, len(resources))
for _, res := range resources {
cr := res.Value()
if p.beforeAcquire == nil || p.beforeAcquire(ctx, cr.conn) {
conns = append(conns, cr.getConn(p, res))
} else {
res.Destroy()
}
}
return conns
}
// Reset closes all connections, but leaves the pool open. It is intended for use when an error is detected that would
// disrupt all connections (such as a network interruption or a server state change).
//
// It is safe to reset a pool while connections are checked out. Those connections will be closed when they are returned
// to the pool.
func (p *pool) Reset() {
p.p.Reset()
}
// Config returns a copy of config that was used to initialize this pool.
func (p *pool) Config() *Config { return p.config.Copy() }
// Stat returns a chpool.Stat struct with a snapshot of Pool statistics.
func (p *pool) Stat() *Stat {
return &Stat{
s: p.p.Stat(),
newConnsCount: atomic.LoadInt64(&p.newConnsCount),
lifetimeDestroyCount: atomic.LoadInt64(&p.lifetimeDestroyCount),
idleDestroyCount: atomic.LoadInt64(&p.idleDestroyCount),
}
}
func (p *pool) Exec(ctx context.Context, query string) error {
return p.ExecWithOption(ctx, query, nil)
}
func (p *pool) ExecWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
) error {
for {
c, err := p.Acquire(ctx)
if err != nil {
return err
}
err = c.ExecWithOption(ctx, query, queryOptions)
c.Release()
if errors.Is(err, syscall.EPIPE) {
continue
}
return err
}
}
func (p *pool) Select(ctx context.Context, query string, columns ...column.ColumnBasic) (chconn.SelectStmt, error) {
return p.SelectWithOption(ctx, query, nil, columns...)
}
func (p *pool) SelectWithOption(
ctx context.Context,
query string,
queryOptions *chconn.QueryOptions,
columns ...column.ColumnBasic,
) (chconn.SelectStmt, error) {
for {
c, err := p.Acquire(ctx)
if err != nil {
return nil, err
}
s, err := c.SelectWithOption(ctx, query, queryOptions, columns...)
if err != nil {
c.Release()
if errors.Is(err, syscall.EPIPE) {
continue
}
return nil, err
}
return s, nil
}
}
func (p *pool) Insert(ctx context.Context, query string, columns ...column.ColumnBasic) error {
return p.InsertWithOption(ctx, query, nil, columns...)
}
func (p *pool) InsertWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions, columns ...column.ColumnBasic) error {
for {
c, err := p.Acquire(ctx)
if err != nil {
return err
}
err = c.InsertWithOption(ctx, query, queryOptions, columns...)
c.Release()
if err != nil && errors.Is(err, syscall.EPIPE) {
continue
}
return err
}
}
func (p *pool) InsertStream(ctx context.Context, query string) (chconn.InsertStmt, error) {
return p.InsertStreamWithOption(ctx, query, nil)
}
func (p *pool) InsertStreamWithOption(ctx context.Context, query string, queryOptions *chconn.QueryOptions) (chconn.InsertStmt, error) {
for {
c, err := p.Acquire(ctx)
if err != nil {
return nil, err
}
s, err := c.InsertStreamWithOption(ctx, query, queryOptions)
if err != nil {
c.Release()
if errors.Is(err, syscall.EPIPE) {
continue
}
return nil, err
}
return s, nil
}
}
// Ping acquires a connection from the Pool and sends a ping.
// If it returns without error, the connection to the server is considered alive; otherwise, the error is returned.
func (p *pool) Ping(ctx context.Context) error {
for {
c, err := p.Acquire(ctx)
if err != nil {
return err
}
err = c.Ping(ctx)
c.Release()
if errors.Is(err, syscall.EPIPE) {
continue
}
return err
}
}
================================================
FILE: chpool/pool_test.go
================================================
package chpool
import (
"context"
"errors"
"fmt"
"os"
"runtime"
"sync/atomic"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
)
func TestNew(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
pool, err := New(connString)
require.NoError(t, err)
assert.Equal(t, connString, pool.Config().ConnString())
pool.Close()
}
func TestNewWithConfig(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := ParseConfig(connString)
require.NoError(t, err)
pool, err := NewWithConfig(config)
require.NoError(t, err)
assertConfigsEqual(t, config, pool.Config(), "Pool.Config() returns original config")
pool.Close()
}
func TestParseConfigExtractsPoolArguments(t *testing.T) {
t.Parallel()
config, err := ParseConfig(`pool_max_conns=42
pool_min_conns=1
pool_max_conn_lifetime=30s
pool_max_conn_idle_time=31s
pool_health_check_period=32s`)
assert.NoError(t, err)
assert.EqualValues(t, 42, config.MaxConns)
assert.EqualValues(t, 1, config.MinConns)
assert.EqualValues(t, time.Second*30, config.MaxConnLifetime)
assert.EqualValues(t, time.Second*31, config.MaxConnIdleTime)
assert.EqualValues(t, time.Second*32, config.HealthCheckPeriod)
assert.NotContains(t, config.ConnConfig.RuntimeParams, "pool_max_conns")
assert.NotContains(t, config.ConnConfig.RuntimeParams, "pool_min_conns")
assert.NotContains(t, config.ConnConfig.RuntimeParams, "pool_max_conn_lifetime")
assert.NotContains(t, config.ConnConfig.RuntimeParams, "pool_max_conn_idle_time")
assert.NotContains(t, config.ConnConfig.RuntimeParams, "pool_health_check_period")
}
func TestConnectConfigRequiresConnConfigFromParseConfig(t *testing.T) {
t.Parallel()
config := &Config{}
require.PanicsWithValue(t, "config must be created by ParseConfig", func() {
NewWithConfig(config)
})
}
func TestConfigCopyReturnsEqualConfig(t *testing.T) {
connString := "clickhouse://vahid:secret@localhost:9000/mydb?client_name=chxtest&connect_timeout=5"
original, err := ParseConfig(connString)
require.NoError(t, err)
copied := original.Copy()
assertConfigsEqual(t, original, copied, t.Name())
}
func TestConfigCopyCanBeUsedToNew(t *testing.T) {
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
original, err := ParseConfig(connString)
require.NoError(t, err)
copied := original.Copy()
assert.NotPanics(t, func() {
_, err = NewWithConfig(copied)
})
assert.NoError(t, err)
}
func TestPoolAcquireAndConnRelease(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
c, err := pool.Acquire(context.Background())
require.NoError(t, err)
c.Release()
}
func TestPoolAcquireAndConnHijack(t *testing.T) {
t.Parallel()
ctx := context.Background()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
c, err := pool.Acquire(ctx)
require.NoError(t, err)
connsBeforeHijack := pool.Stat().TotalConns()
conn := c.Hijack()
defer conn.Close()
connsAfterHijack := pool.Stat().TotalConns()
require.Equal(t, connsBeforeHijack-1, connsAfterHijack)
col := column.New[uint64]()
stmt, err := conn.Select(context.Background(), "SELECT * FROM system.numbers LIMIT 5;", col)
require.NoError(t, err)
for stmt.Next() {
}
require.NoError(t, stmt.Err())
}
func TestPoolAcquireFunc(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
err = pool.AcquireFunc(context.Background(), func(c Conn) error {
return c.Ping(context.Background())
})
require.NoError(t, err)
}
func TestPoolAcquireFuncReturnsFnError(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
err = pool.AcquireFunc(context.Background(), func(c Conn) error {
return fmt.Errorf("some error")
})
require.EqualError(t, err, "some error")
}
func TestPoolBeforeConnect(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.BeforeConnect = func(_ context.Context, cfg *chconn.Config) error {
cfg.ClientName = "chx2"
return nil
}
db, err := NewWithConfig(config)
require.NoError(t, err)
db.Close()
// todo find a way to check it
}
func TestPoolAfterConnect(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
var trigger bool
config.AfterConnect = func(_ context.Context, _ chconn.Conn) error {
trigger = true
return nil
}
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
err = db.Ping(context.Background())
require.NoError(t, err)
assert.True(t, trigger)
}
func TestPoolBeforeAcquire(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
acquireAttempts := 0
config.BeforeAcquire = func(ctx context.Context, c chconn.Conn) bool {
acquireAttempts++
return acquireAttempts%2 == 0
}
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
conns := make([]Conn, 4)
for i := range conns {
conns[i], err = db.Acquire(context.Background())
assert.NoError(t, err)
}
for _, c := range conns {
c.Release()
}
waitForReleaseToComplete()
assert.EqualValues(t, 8, acquireAttempts)
conns = db.AcquireAllIdle(context.Background())
assert.Len(t, conns, 2)
for _, c := range conns {
c.Release()
}
waitForReleaseToComplete()
assert.EqualValues(t, 12, acquireAttempts)
}
func TestPoolAfterRelease(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
afterReleaseCount := 0
config.AfterRelease = func(c chconn.Conn) bool {
afterReleaseCount++
return afterReleaseCount%2 == 1
}
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
conns := map[string]struct{}{}
for i := 0; i < 10; i++ {
conn, err := db.Acquire(context.Background())
assert.NoError(t, err)
conns[conn.Conn().RawConn().LocalAddr().String()] = struct{}{}
conn.Release()
waitForReleaseToComplete()
}
assert.EqualValues(t, 5, len(conns))
}
func TestPoolAcquireAllIdle(t *testing.T) {
t.Parallel()
db, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer db.Close()
conns := make([]Conn, 3)
for i := range conns {
conns[i], err = db.Acquire(context.Background())
assert.NoError(t, err)
}
for _, c := range conns {
if c != nil {
c.Release()
}
}
waitForReleaseToComplete()
conns = db.AcquireAllIdle(context.Background())
assert.Len(t, conns, 3)
for _, c := range conns {
c.Release()
}
}
func TestPoolReset(t *testing.T) {
t.Parallel()
db, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer db.Close()
conns := make([]Conn, 3)
for i := range conns {
conns[i], err = db.Acquire(context.Background())
assert.NoError(t, err)
}
db.Reset()
for _, c := range conns {
if c != nil {
c.Release()
}
}
waitForReleaseToComplete()
require.EqualValues(t, 0, db.Stat().TotalConns())
}
func TestConnReleaseChecksMaxConnLifetime(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.MaxConnLifetime = 250 * time.Millisecond
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
c, err := db.Acquire(context.Background())
require.NoError(t, err)
time.Sleep(config.MaxConnLifetime)
c.Release()
waitForReleaseToComplete()
stats := db.Stat()
assert.EqualValues(t, 0, stats.TotalConns())
}
func TestConnReleaseClosesBusyConn(t *testing.T) {
t.Parallel()
db, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer db.Close()
c, err := db.Acquire(context.Background())
require.NoError(t, err)
col := column.New[uint64]()
_, err = c.Conn().Select(context.Background(), "SELECT * FROM system.numbers LIMIT 10;", col)
require.NoError(t, err)
c.Release()
waitForReleaseToComplete()
// wait for the connection to actually be destroyed
for i := 0; i < 1000; i++ {
if db.Stat().TotalConns() == 0 {
break
}
time.Sleep(time.Millisecond)
}
stats := db.Stat()
assert.EqualValues(t, 0, stats.TotalConns())
}
func TestPoolBackgroundChecksMaxConnLifetime(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.MaxConnLifetime = 100 * time.Millisecond
config.HealthCheckPeriod = 100 * time.Millisecond
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
c, err := db.Acquire(context.Background())
require.NoError(t, err)
c.Release()
time.Sleep(config.MaxConnLifetime + 100*time.Millisecond)
stats := db.Stat()
assert.EqualValues(t, 0, stats.TotalConns())
assert.EqualValues(t, 0, stats.MaxIdleDestroyCount())
assert.EqualValues(t, 1, stats.MaxLifetimeDestroyCount())
}
func TestPoolBackgroundChecksMaxConnIdleTime(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.MaxConnLifetime = 1 * time.Minute
config.MaxConnIdleTime = 100 * time.Millisecond
config.HealthCheckPeriod = 150 * time.Millisecond
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
c, err := db.Acquire(context.Background())
require.NoError(t, err)
c.Release()
time.Sleep(config.HealthCheckPeriod)
for i := 0; i < 1000; i++ {
if db.Stat().TotalConns() == 0 {
break
}
time.Sleep(time.Millisecond)
}
stats := db.Stat()
assert.EqualValues(t, 0, stats.TotalConns())
assert.EqualValues(t, 1, stats.MaxIdleDestroyCount())
assert.EqualValues(t, 0, stats.MaxLifetimeDestroyCount())
}
func TestPoolBackgroundChecksMinConns(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.HealthCheckPeriod = 100 * time.Millisecond
config.MinConns = 2
db, err := NewWithConfig(config)
require.NoError(t, err)
defer db.Close()
time.Sleep(config.HealthCheckPeriod + 500*time.Millisecond)
stats := db.Stat()
assert.EqualValues(t, 2, stats.TotalConns())
assert.EqualValues(t, 0, stats.MaxLifetimeDestroyCount())
assert.EqualValues(t, 2, stats.NewConnsCount())
c, err := db.Acquire(context.Background())
require.NoError(t, err)
err = c.Conn().Close()
require.NoError(t, err)
c.Release()
time.Sleep(config.HealthCheckPeriod + 500*time.Millisecond)
stats = db.Stat()
assert.EqualValues(t, 2, stats.TotalConns())
assert.EqualValues(t, 0, stats.MaxIdleDestroyCount())
assert.EqualValues(t, 3, stats.NewConnsCount())
}
func TestPoolExec(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
testExec(t, pool)
}
func TestPoolExecError(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
testExec(t, pool)
pool.Close()
err = pool.Exec(context.Background(), "SET enable_http_compression=1")
if assert.Error(t, err) {
assert.Equal(t, "acquire: closed pool", err.Error())
}
}
func TestPoolSelect(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
// Test common usage
testSelect(t, pool)
waitForReleaseToComplete()
// Test expected pool behavior
col := column.New[uint64]()
stmt, err := pool.Select(context.Background(), "SELECT * FROM system.numbers LIMIT 5;", col)
require.NoError(t, err)
stats := pool.Stat()
assert.EqualValues(t, 1, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
for stmt.Next() {
}
require.NoError(t, stmt.Err())
waitForReleaseToComplete()
stats = pool.Stat()
assert.EqualValues(t, 0, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
// more coverage
assert.EqualValues(t, 2, stats.AcquireCount())
assert.GreaterOrEqual(t, int64(time.Second), int64(stats.AcquireDuration()))
assert.EqualValues(t, 0, stats.AcquiredConns())
assert.EqualValues(t, 0, stats.CanceledAcquireCount())
assert.EqualValues(t, 0, stats.ConstructingConns())
assert.EqualValues(t, 1, stats.EmptyAcquireCount())
assert.EqualValues(t, 1, stats.IdleConns())
maxConns := defaultMaxConns
if numCPU := int32(runtime.NumCPU()); numCPU > maxConns {
maxConns = numCPU
}
assert.EqualValues(t, maxConns, stats.MaxConns())
}
func TestPoolSelectError(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
// Test common usage
testSelect(t, pool)
waitForReleaseToComplete()
// Test expected pool behavior
stmt, err := pool.Select(context.Background(), "SELECT * FROM not_found_table LIMIT 10;")
assert.Error(t, err)
assert.Nil(t, stmt)
pool.Close()
stmt, err = pool.Select(context.Background(), "SELECT * FROM not_found_table LIMIT 10;")
if assert.Error(t, err) {
assert.Equal(t, "acquire: closed pool", err.Error())
}
require.Nil(t, stmt)
}
func TestPoolAcquireSelectError(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
// Test common usage
testSelect(t, pool)
waitForReleaseToComplete()
// Test expected pool behavior
conn, err := pool.Acquire(context.Background())
require.NoError(t, err)
conn.Conn().RawConn().Close()
_, err = conn.Conn().Select(context.Background(), "SELECT * FROM system.numbers LIMIT 5;")
conn.Release()
require.Error(t, err)
}
func TestPoolInsert(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
require.NoError(t, pool.Ping(context.Background()))
err = pool.Exec(context.Background(), `DROP TABLE IF EXISTS clickhouse_test_insert_pool`)
require.NoError(t, err)
err = pool.Exec(context.Background(), `CREATE TABLE clickhouse_test_insert_pool (
int8 Int8
) Engine=Memory`)
require.NoError(t, err)
col := column.New[int8]()
for i := 1; i <= 10; i++ {
col.Append(int8(-1 * i))
}
stmt, err := pool.InsertStream(context.Background(), `INSERT INTO clickhouse_test_insert_pool (
int8
) VALUES`)
require.NoError(t, err)
stats := pool.Stat()
assert.EqualValues(t, 1, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
require.NoError(t, stmt.Write(context.Background(), col))
require.NoError(t, stmt.Write(context.Background(), col))
require.NoError(t, stmt.Flush(context.Background()))
waitForReleaseToComplete()
stats = pool.Stat()
assert.EqualValues(t, 0, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
}
func TestPoolInsertError(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
err = pool.Insert(context.Background(), `INSERT INTO not_found_table (
int8
) VALUES`)
if assert.Error(t, err) {
assert.Equal(t, " DB::Exception (60): Table default.not_found_table doesn't exist", err.Error())
}
pool.Close()
err = pool.Insert(context.Background(), `INSERT INTO not_found_table (
int8
) VALUES`)
if assert.Error(t, err) {
assert.Equal(t, "acquire: closed pool", err.Error())
}
}
func TestPoolInsertStream(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
require.NoError(t, pool.Ping(context.Background()))
err = pool.Exec(context.Background(), `DROP TABLE IF EXISTS clickhouse_test_insert_pool_stream`)
require.NoError(t, err)
err = pool.Exec(context.Background(), `CREATE TABLE clickhouse_test_insert_pool_stream (
int8 Int8
) Engine=Memory`)
require.NoError(t, err)
col := column.New[int8]()
for i := 1; i <= 10; i++ {
col.Append(int8(-1 * i))
}
err = pool.Insert(context.Background(), `INSERT INTO clickhouse_test_insert_pool_stream (
int8
) VALUES`, col)
require.NoError(t, err)
colInt8 := column.New[int8]()
selectStmt, err := pool.Select(context.Background(), `SELECT
int8
FROM clickhouse_test_insert_pool_stream`, colInt8)
require.NoError(t, err)
stats := pool.Stat()
assert.EqualValues(t, 1, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
for selectStmt.Next() {
}
require.NoError(t, selectStmt.Err())
selectStmt.Close()
waitForReleaseToComplete()
stats = pool.Stat()
assert.EqualValues(t, 0, stats.AcquiredConns())
assert.EqualValues(t, 1, stats.TotalConns())
}
func TestConnReleaseClosesConnInFailedTransaction(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
c, err := pool.Acquire(ctx)
require.NoError(t, err)
pid := c.Conn().RawConn().LocalAddr().String()
stmt, err := c.Conn().Select(ctx, "SELECT * FROM system.numbers2 LIMIT 5;")
assert.Error(t, err)
assert.Nil(t, stmt)
c.Release()
waitForReleaseToComplete()
c, err = pool.Acquire(ctx)
require.NoError(t, err)
assert.NotEqual(t, pid, c.Conn().RawConn().LocalAddr().String())
c.Release()
}
func TestConnReleaseDestroysClosedConn(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
c, err := pool.Acquire(ctx)
require.NoError(t, err)
c.Conn().Close()
err = c.Conn().Close()
require.NoError(t, err)
assert.EqualValues(t, 1, pool.Stat().TotalConns())
c.Release()
waitForReleaseToComplete()
// wait for the connection to actually be destroyed
for i := 0; i < 1000; i++ {
if pool.Stat().TotalConns() == 0 {
break
}
time.Sleep(time.Millisecond)
}
assert.EqualValues(t, 0, pool.Stat().TotalConns())
}
func TestConnPoolQueryConcurrentLoad(t *testing.T) {
t.Parallel()
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
defer pool.Close()
n := 100
done := make(chan bool)
for i := 0; i < n; i++ {
go func() {
defer func() { done <- true }()
testSelect(t, pool)
}()
}
for i := 0; i < n; i++ {
<-done
}
}
func TestParseConfigError(t *testing.T) {
t.Parallel()
parseConfigErrorTests := []struct {
name string
connString string
err string
}{
{
name: "invalid host",
connString: "host>0",
err: "cannot parse `host>0`: failed to parse as DSN (invalid dsn)",
}, {
name: "invalid pool_max_conns",
connString: "pool_max_conns=invalid",
err: "cannot parse pool_max_conns: strconv.ParseInt: parsing \"invalid\": invalid syntax",
}, {
name: "low pool_max_conns",
connString: "pool_max_conns=0",
err: "pool_max_conns too small: 0",
}, {
name: "invalid pool_min_conns",
connString: "pool_min_conns=invalid",
err: "cannot parse pool_min_conns: strconv.ParseInt: parsing \"invalid\": invalid syntax",
}, {
name: "invalid pool_max_conn_lifetime",
connString: "pool_max_conn_lifetime=invalid",
err: "invalid pool_max_conn_lifetime: time: invalid duration \"invalid\"",
}, {
name: "invalid pool_max_conn_idle_time",
connString: "pool_max_conn_idle_time=invalid",
err: "invalid pool_max_conn_idle_time: time: invalid duration \"invalid\"",
}, {
name: "invalid pool_health_check_period",
connString: "pool_health_check_period=invalid",
err: "invalid pool_health_check_period: time: invalid duration \"invalid\"",
}, {
name: "invalid pool_max_conn_lifetime_jitter",
connString: "pool_max_conn_lifetime_jitter=invalid",
err: "invalid pool_max_conn_lifetime_jitter: time: invalid duration \"invalid\"",
}, {
name: "invalid pool_create_idle_timeout",
connString: "pool_create_idle_timeout=invalid",
err: "invalid pool_create_idle_timeout: time: invalid duration \"invalid\"",
},
}
for i, tt := range parseConfigErrorTests {
_, err := ParseConfig(tt.connString)
if !assert.Errorf(t, err, "Test %d (%s)", i, tt.name) {
continue
}
if !assert.Equalf(t, tt.err, err.Error(), "Test %d (%s)", i, tt.name) {
continue
}
}
}
func TestNewParseError(t *testing.T) {
t.Parallel()
pool, err := New("host>0")
assert.Nil(t, pool)
assert.Equal(t, "cannot parse `host>0`: failed to parse as DSN (invalid dsn)", err.Error())
}
func TestNewError(t *testing.T) {
t.Parallel()
pool, err := New("host=invalidhost")
assert.NotNil(t, pool)
assert.NoError(t, err)
err = pool.Ping(context.Background())
assert.Error(t, err)
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.AfterConnect = func(ctx context.Context, c chconn.Conn) error {
return errors.New("afterConnect err")
}
pool, err = NewWithConfig(config)
require.NoError(t, err)
err = pool.Ping(context.Background())
assert.Error(t, err)
assert.EqualError(t, err, "acquire: afterConnect err")
}
func TestIdempotentPoolClose(t *testing.T) {
pool, err := New(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
// Close the open pool.
require.NotPanics(t, func() { pool.Close() })
// Close the already closed pool.
require.NotPanics(t, func() { pool.Close() })
}
func TestConnectEagerlyReachesMinPoolSize(t *testing.T) {
t.Parallel()
config, err := ParseConfig(os.Getenv("CHX_TEST_TCP_CONN_STRING"))
require.NoError(t, err)
config.MinConns = int32(12)
config.MaxConns = int32(15)
acquireAttempts := int64(0)
connectAttempts := int64(0)
config.BeforeAcquire = func(ctx context.Context, conn chconn.Conn) bool {
atomic.AddInt64(&acquireAttempts, 1)
return true
}
config.BeforeConnect = func(ctx context.Context, cfg *chconn.Config) error {
atomic.AddInt64(&connectAttempts, 1)
return nil
}
pool, err := NewWithConfig(config)
require.NoError(t, err)
defer pool.Close()
for i := 0; i < 500; i++ {
time.Sleep(10 * time.Millisecond)
stat := pool.Stat()
if stat.IdleConns() == 12 &&
stat.AcquireCount() == 0 &&
stat.TotalConns() == 12 &&
atomic.LoadInt64(&acquireAttempts) == 0 &&
atomic.LoadInt64(&connectAttempts) == 12 {
return
}
}
t.Fatal("did not reach min pool size")
}
================================================
FILE: chpool/select_stmt.go
================================================
package chpool
import (
"github.com/vahid-sohrabloo/chconn/v2"
)
type selectStmt struct {
chconn.SelectStmt
conn Conn
}
func (s *selectStmt) Next() bool {
if s.conn == nil {
return false
}
next := s.SelectStmt.Next()
if s.SelectStmt.Err() != nil && s.conn != nil {
s.conn.Release()
s.conn = nil
}
if !next && s.conn != nil {
s.conn.Release()
s.conn = nil
}
return next
}
func (s *selectStmt) Close() {
if s.conn == nil {
return
}
s.SelectStmt.Close()
s.conn.Release()
s.conn = nil
}
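The wrapper above releases the pooled connection exactly once: either when `Next` exhausts the result set (or hits an error), or on `Close`. A standalone sketch of that release-on-exhaustion pattern — plain Go, not chconn itself; `fakeConn` and `pooledStmt` are hypothetical stand-ins:

```go
package main

import "fmt"

// Standalone sketch (not chconn itself) of the release-on-exhaustion
// pattern used by chpool's selectStmt: the pooled connection is
// released exactly once, when Next returns false.

type fakeConn struct{ released int }

func (c *fakeConn) Release() { c.released++ }

type pooledStmt struct {
	rows int
	pos  int
	conn *fakeConn
}

// Next mirrors selectStmt.Next: on exhaustion it releases the
// connection and clears it, so later calls are safe no-ops.
func (s *pooledStmt) Next() bool {
	if s.conn == nil {
		return false
	}
	if s.pos < s.rows {
		s.pos++
		return true
	}
	s.conn.Release()
	s.conn = nil
	return false
}

func main() {
	conn := &fakeConn{}
	s := &pooledStmt{rows: 2, conn: conn}
	for s.Next() {
	}
	s.Next() // safe after exhaustion
	fmt.Println(conn.released)
}
```

Clearing the connection field after releasing is what makes repeated `Next` (and `Close`) calls idempotent.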
================================================
FILE: chpool/stat.go
================================================
package chpool
import (
"time"
"github.com/jackc/puddle/v2"
)
// Stat is a snapshot of Pool statistics.
type Stat struct {
s *puddle.Stat
newConnsCount int64
lifetimeDestroyCount int64
idleDestroyCount int64
}
// AcquireCount returns the cumulative count of successful acquires from the pool.
func (s *Stat) AcquireCount() int64 {
return s.s.AcquireCount()
}
// AcquireDuration returns the total duration of all successful acquires from
// the pool.
func (s *Stat) AcquireDuration() time.Duration {
return s.s.AcquireDuration()
}
// AcquiredConns returns the number of currently acquired connections in the pool.
func (s *Stat) AcquiredConns() int32 {
return s.s.AcquiredResources()
}
// CanceledAcquireCount returns the cumulative count of acquires from the pool
// that were canceled by a context.
func (s *Stat) CanceledAcquireCount() int64 {
return s.s.CanceledAcquireCount()
}
// ConstructingConns returns the number of conns with construction in progress in
// the pool.
func (s *Stat) ConstructingConns() int32 {
return s.s.ConstructingResources()
}
// EmptyAcquireCount returns the cumulative count of successful acquires from the pool
// that waited for a resource to be released or constructed because the pool was
// empty.
func (s *Stat) EmptyAcquireCount() int64 {
return s.s.EmptyAcquireCount()
}
// IdleConns returns the number of currently idle conns in the pool.
func (s *Stat) IdleConns() int32 {
return s.s.IdleResources()
}
// MaxConns returns the maximum size of the pool.
func (s *Stat) MaxConns() int32 {
return s.s.MaxResources()
}
// TotalConns returns the total number of resources currently in the pool.
// The value is the sum of ConstructingConns, AcquiredConns, and
// IdleConns.
func (s *Stat) TotalConns() int32 {
return s.s.TotalResources()
}
// NewConnsCount returns the cumulative count of new connections opened.
func (s *Stat) NewConnsCount() int64 {
return s.newConnsCount
}
// MaxLifetimeDestroyCount returns the cumulative count of connections destroyed
// because they exceeded MaxConnLifetime.
func (s *Stat) MaxLifetimeDestroyCount() int64 {
return s.lifetimeDestroyCount
}
// MaxIdleDestroyCount returns the cumulative count of connections destroyed because
// they exceeded MaxConnIdleTime.
func (s *Stat) MaxIdleDestroyCount() int64 {
return s.idleDestroyCount
}
================================================
FILE: client_info.go
================================================
package chconn
import (
"os/user"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
)
// ClientInfo holds information about the client for a query.
// Some fields are passed explicitly from the client and some are calculated automatically.
// It contains info about the initial query source, for tracing distributed queries
// where one query initiates many other queries.
type ClientInfo struct {
InitialUser string
InitialQueryID string
OSUser string
ClientHostname string
ClientName string
ClientVersionMajor uint64
ClientVersionMinor uint64
ClientVersionPatch uint64
ClientRevision uint64
DistributedDepth uint64
QuotaKey string
}
// write serializes only the values that are not calculated automatically or passed separately.
// The server revision is checked so that a format the server understands is used.
func (c *ClientInfo) write(ch *conn) {
// InitialQuery
ch.writer.Uint8(1)
ch.writer.String(c.InitialUser)
ch.writer.String(c.InitialQueryID)
ch.writer.String("[::ffff:127.0.0.1]:0")
if ch.serverInfo.Revision >= helper.DbmsMinProtocolVersionWithInitialQueryStartTime {
ch.writer.Uint64(0)
}
// iface type
ch.writer.Uint8(1) // tcp
ch.writer.String(c.OSUser)
ch.writer.String(c.ClientHostname)
ch.writer.String(c.ClientName)
ch.writer.Uvarint(c.ClientVersionMajor)
ch.writer.Uvarint(c.ClientVersionMinor)
ch.writer.Uvarint(c.ClientRevision)
if ch.serverInfo.Revision >= helper.DbmsMinRevisionWithQuotaKeyInClientInfo {
ch.writer.String(c.QuotaKey)
}
if ch.serverInfo.Revision >= helper.DbmsMinProtocolVersionWithDistributedDepth {
ch.writer.Uvarint(c.DistributedDepth)
}
if ch.serverInfo.Revision >= helper.DbmsMinRevisionWithVersionPatch {
ch.writer.Uvarint(c.ClientVersionPatch)
}
if ch.serverInfo.Revision >= helper.DbmsMinRevisionWithOpenTelemetry {
ch.writer.Uint8(0)
}
if ch.serverInfo.Revision >= helper.DbmsMinProtocolVersionWithParallelReplicas {
ch.writer.Uvarint(0) // collaborate_with_initiator
ch.writer.Uvarint(0) // count_participating_replicas
ch.writer.Uvarint(0) // number_of_current_replica
}
}
func (c *ClientInfo) fillOSUserHostNameAndVersionInfo() {
u, err := user.Current()
if err == nil {
c.OSUser = u.Username
}
c.ClientVersionMajor = dbmsVersionMajor
c.ClientVersionMinor = dbmsVersionMinor
c.ClientVersionPatch = dbmsVersionPatch
c.ClientRevision = dbmsVersionRevision
}
================================================
FILE: column/array.go
================================================
package column
// Array is a column of Array(T) ClickHouse data type
type Array[T any] struct {
ArrayBase
columnData []T
}
// NewArray creates a new array column of the Array(T) ClickHouse data type
func NewArray[T any](dataColumn Column[T]) *Array[T] {
a := &Array[T]{
ArrayBase: ArrayBase{
dataColumn: dataColumn,
offsetColumn: New[uint64](),
},
}
a.resetHook = func() {
a.columnData = a.columnData[:0]
}
return a
}
// Data gets all the data in the current block as a slice.
func (c *Array[T]) Data() [][]T {
values := make([][]T, c.offsetColumn.numRow)
offsets := c.Offsets()
var lastOffset uint64
columnData := c.getColumnData()
for i, offset := range offsets {
val := make([]T, offset-lastOffset)
copy(val, columnData[lastOffset:offset])
values[i] = val
lastOffset = offset
}
return values
}
// Read reads all the data in the current block and appends it to the input.
func (c *Array[T]) Read(value [][]T) [][]T {
offsets := c.Offsets()
var lastOffset uint64
columnData := c.getColumnData()
for _, offset := range offsets {
val := make([]T, offset-lastOffset)
copy(val, columnData[lastOffset:offset])
value = append(value, val)
lastOffset = offset
}
return value
}
// Row returns the value of the given row.
// NOTE: Row numbers start from zero
func (c *Array[T]) Row(row int) []T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val []T
val = append(val, c.getColumnData()[lastOffset:c.offsetColumn.Row(row)]...)
return val
}
// Append appends values for insert
func (c *Array[T]) Append(v ...[]T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.(Column[T]).Append(v...)
}
}
// AppendItem appends a single item value for insert
//
// It should be used together with AppendLen
//
// Example:
//
// c.AppendLen(2) // insert 2 items
// c.AppendItem(1, 2)
func (c *Array[T]) AppendItem(v ...T) {
c.dataColumn.(Column[T]).Append(v...)
}
// Array returns an Array2 column wrapping this column
func (c *Array[T]) Array() *Array2[T] {
return NewArray2(c)
}
func (c *Array[T]) getColumnData() []T {
if len(c.columnData) == 0 {
c.columnData = c.dataColumn.(Column[T]).Data()
}
return c.columnData
}
func (c *Array[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/array2.go
================================================
package column
// Array2 is a column of Array(Array(T)) ClickHouse data type
type Array2[T any] struct {
ArrayBase
}
// NewArray2 creates a new array column of the Array(Array(T)) ClickHouse data type
func NewArray2[T any](array *Array[T]) *Array2[T] {
a := &Array2[T]{
ArrayBase: ArrayBase{
dataColumn: array,
offsetColumn: New[uint64](),
},
}
return a
}
// Data gets all the data in the current block as a slice.
func (c *Array2[T]) Data() [][][]T {
values := make([][][]T, c.offsetColumn.numRow)
for i := range values {
values[i] = c.Row(i)
}
return values
}
// Read reads all the data in the current block and appends it to the input.
func (c *Array2[T]) Read(value [][][]T) [][][]T {
if cap(value)-len(value) >= c.NumRow() {
value = (value)[:len(value)+c.NumRow()]
} else {
value = append(value, make([][][]T, c.NumRow())...)
}
val := (value)[len(value)-c.NumRow():]
for i := 0; i < c.NumRow(); i++ {
val[i] = c.Row(i)
}
return value
}
// Row returns the value of the given row.
// NOTE: Row numbers start from zero
func (c *Array2[T]) Row(row int) [][]T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val [][]T
lastRow := c.offsetColumn.Row(row)
for ; lastOffset < lastRow; lastOffset++ {
val = append(val, c.dataColumn.(*Array[T]).Row(int(lastOffset)))
}
return val
}
// Append appends values for insert
func (c *Array2[T]) Append(v ...[][]T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.(*Array[T]).Append(v...)
}
}
func (c *Array2[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/array2_nullable.go
================================================
package column
import "github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
// Array2Nullable is a column of the Array(Array(Nullable(T))) ClickHouse data type
type Array2Nullable[T comparable] struct {
Array2[T]
dataColumn *ArrayNullable[T]
columnData [][]*T
}
// NewArray2Nullable creates a new array column of the Array(Array(Nullable(T))) ClickHouse data type
func NewArray2Nullable[T comparable](dataColumn *ArrayNullable[T]) *Array2Nullable[T] {
a := &Array2Nullable[T]{
dataColumn: dataColumn,
Array2: Array2[T]{
ArrayBase: ArrayBase{
dataColumn: dataColumn,
offsetColumn: New[uint64](),
},
},
}
a.resetHook = func() {
a.columnData = a.columnData[:0]
}
return a
}
// DataP gets all the nullable data in the current block as a slice of pointers.
func (c *Array2Nullable[T]) DataP() [][][]*T {
values := make([][][]*T, c.offsetColumn.numRow)
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
values[i] = columnData[lastOffset:c.offsetColumn.Row(i)]
lastOffset = c.offsetColumn.Row(i)
}
return values
}
// ReadP reads all the nullable data in the current block as a slice of pointers and appends it to the input.
func (c *Array2Nullable[T]) ReadP(value [][][]*T) [][][]*T {
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
value = append(value, columnData[lastOffset:c.offsetColumn.Row(i)])
lastOffset = c.offsetColumn.Row(i)
}
return value
}
// RowP returns the nullable value of the given row as pointers
// NOTE: Row numbers start from zero
func (c *Array2Nullable[T]) RowP(row int) [][]*T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val [][]*T
val = append(val, c.getColumnData()[lastOffset:c.offsetColumn.Row(row)]...)
return val
}
// AppendP appends nullable values for insert
func (c *Array2Nullable[T]) AppendP(v ...[][]*T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.AppendP(v...)
}
}
// ReadRaw reads raw data from the reader. It runs automatically.
func (c *Array2Nullable[T]) ReadRaw(num int, r *readerwriter.Reader) error {
err := c.Array2.ReadRaw(num, r)
if err != nil {
return err
}
c.columnData = c.dataColumn.DataP()
return nil
}
// Array returns an Array3Nullable column wrapping this column
func (c *Array2Nullable[T]) Array() *Array3Nullable[T] {
return NewArray3Nullable(c)
}
func (c *Array2Nullable[T]) getColumnData() [][]*T {
if len(c.columnData) == 0 {
c.columnData = c.dataColumn.DataP()
}
return c.columnData
}
func (c *Array2Nullable[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/array3.go
================================================
package column
// Array3 is a column of Array(Array(Array(T))) ClickHouse data type
type Array3[T any] struct {
ArrayBase
}
// NewArray3 creates a new array column of the Array(Array(Array(T))) ClickHouse data type
func NewArray3[T any](array *Array2[T]) *Array3[T] {
a := &Array3[T]{
ArrayBase: ArrayBase{
dataColumn: array,
offsetColumn: New[uint64](),
},
}
return a
}
// Data gets all the data in the current block as a slice.
func (c *Array3[T]) Data() [][][][]T {
values := make([][][][]T, c.offsetColumn.numRow)
for i := range values {
values[i] = c.Row(i)
}
return values
}
// Read reads all the data in the current block and appends it to the input.
func (c *Array3[T]) Read(value [][][][]T) [][][][]T {
if cap(value)-len(value) >= c.NumRow() {
value = (value)[:len(value)+c.NumRow()]
} else {
value = append(value, make([][][][]T, c.NumRow())...)
}
val := (value)[len(value)-c.NumRow():]
for i := 0; i < c.NumRow(); i++ {
val[i] = c.Row(i)
}
return value
}
// Row returns the value of the given row.
// NOTE: Row numbers start from zero
func (c *Array3[T]) Row(row int) [][][]T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val [][][]T
lastRow := c.offsetColumn.Row(row)
for ; lastOffset < lastRow; lastOffset++ {
val = append(val, c.dataColumn.(*Array2[T]).Row(int(lastOffset)))
}
return val
}
// Append appends values for insert
func (c *Array3[T]) Append(v ...[][][]T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.(*Array2[T]).Append(v...)
}
}
// Array returns an Array3 column wrapping this column
func (c *Array2[T]) Array() *Array3[T] {
return NewArray3(c)
}
func (c *Array3[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
panic("array level is too deep")
}
return c
}
================================================
FILE: column/array3_nullable.go
================================================
package column
import "github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
// Array3Nullable is a column of the Array(Array(Array(Nullable(T)))) ClickHouse data type
type Array3Nullable[T comparable] struct {
Array3[T]
dataColumn *Array2Nullable[T]
columnData [][][]*T
}
// NewArray3Nullable creates a new array column of the Array(Array(Array(Nullable(T)))) ClickHouse data type
func NewArray3Nullable[T comparable](dataColumn *Array2Nullable[T]) *Array3Nullable[T] {
a := &Array3Nullable[T]{
dataColumn: dataColumn,
Array3: Array3[T]{
ArrayBase: ArrayBase{
dataColumn: dataColumn,
offsetColumn: New[uint64](),
},
},
}
a.resetHook = func() {
a.columnData = a.columnData[:0]
}
return a
}
// DataP gets all the nullable data in the current block as a slice of pointers.
func (c *Array3Nullable[T]) DataP() [][][][]*T {
values := make([][][][]*T, c.offsetColumn.numRow)
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
values[i] = columnData[lastOffset:c.offsetColumn.Row(i)]
lastOffset = c.offsetColumn.Row(i)
}
return values
}
// ReadP reads all the nullable data in the current block as a slice of pointers and appends it to the input.
func (c *Array3Nullable[T]) ReadP(value [][][][]*T) [][][][]*T {
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
value = append(value, columnData[lastOffset:c.offsetColumn.Row(i)])
lastOffset = c.offsetColumn.Row(i)
}
return value
}
// RowP returns the nullable value of the given row as pointers
// NOTE: Row numbers start from zero
func (c *Array3Nullable[T]) RowP(row int) [][][]*T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val [][][]*T
val = append(val, c.getColumnData()[lastOffset:c.offsetColumn.Row(row)]...)
return val
}
// AppendP appends nullable values for insert
func (c *Array3Nullable[T]) AppendP(v ...[][][]*T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.AppendP(v...)
}
}
// ReadRaw reads raw data from the reader. It runs automatically.
func (c *Array3Nullable[T]) ReadRaw(num int, r *readerwriter.Reader) error {
err := c.Array3.ReadRaw(num, r)
if err != nil {
return err
}
c.columnData = c.dataColumn.DataP()
return nil
}
func (c *Array3Nullable[T]) getColumnData() [][][]*T {
if len(c.columnData) == 0 {
c.columnData = c.dataColumn.DataP()
}
return c.columnData
}
func (c *Array3Nullable[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
panic("array level is too deep")
}
return c
}
================================================
FILE: column/array_base.go
================================================
package column
import (
"encoding/binary"
"fmt"
"io"
"strings"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
)
// ArrayBase is a column of the Array(T) ClickHouse data type
//
// ArrayBase is the base for the other array columns and can also be used directly for non-generic use cases
type ArrayBase struct {
column
offsetColumn *Base[uint64]
dataColumn ColumnBasic
offset uint64
resetHook func()
}
// NewArrayBase creates a new array column of the Array(T) ClickHouse data type
func NewArrayBase(dataColumn ColumnBasic) *ArrayBase {
a := &ArrayBase{
dataColumn: dataColumn,
offsetColumn: New[uint64](),
}
return a
}
// AppendLen appends the length of an array for insert
func (c *ArrayBase) AppendLen(v int) {
c.offset += uint64(v)
c.offsetColumn.Append(c.offset)
}
// NumRow returns the number of rows in this block
func (c *ArrayBase) NumRow() int {
return c.offsetColumn.NumRow()
}
// Array returns an ArrayBase column wrapping this column
func (c *ArrayBase) Array() *ArrayBase {
return NewArrayBase(c)
}
// Reset resets all state and buffered data.
//
// Data read from the server does not need to be reset; it is reset automatically after each read.
//
// When inserting, buffers are reset only after the operation succeeds.
// If an error occurs, you can safely call insert again.
func (c *ArrayBase) Reset() {
c.offsetColumn.Reset()
c.dataColumn.Reset()
c.offset = 0
}
// Offsets returns all the offsets in the current block.
// Note: only valid for the current block
func (c *ArrayBase) Offsets() []uint64 {
return c.offsetColumn.Data()
}
// TotalRows returns the total number of data rows in this block of array data
func (c *ArrayBase) TotalRows() int {
if c.offsetColumn.totalByte == 0 {
return 0
}
return int(binary.LittleEndian.Uint64(c.offsetColumn.b[c.offsetColumn.totalByte-8 : c.offsetColumn.totalByte]))
}
// SetWriteBufferSize sets the write buffer size (in rows).
// This buffer is only used for writing.
// Setting it avoids allocating memory several times.
func (c *ArrayBase) SetWriteBufferSize(row int) {
c.offsetColumn.SetWriteBufferSize(row)
c.dataColumn.SetWriteBufferSize(row)
}
// ReadRaw reads raw data from the reader. It runs automatically.
func (c *ArrayBase) ReadRaw(num int, r *readerwriter.Reader) error {
c.offsetColumn.Reset()
err := c.offsetColumn.ReadRaw(num, r)
if err != nil {
return fmt.Errorf("array: read offset column: %w", err)
}
err = c.dataColumn.ReadRaw(c.TotalRows(), r)
if err != nil {
return fmt.Errorf("array: read data column: %w", err)
}
if c.resetHook != nil {
c.resetHook()
}
return nil
}
// HeaderReader reads header data from the reader.
// It is used internally.
func (c *ArrayBase) HeaderReader(r *readerwriter.Reader, readColumn bool, revision uint64) error {
c.r = r
err := c.readColumn(readColumn, revision)
if err != nil {
return err
}
// never returns an error
//nolint:errcheck
c.offsetColumn.HeaderReader(r, false, revision)
return c.dataColumn.HeaderReader(r, false, revision)
}
// Column returns the sub column
func (c *ArrayBase) Column() ColumnBasic {
return c.dataColumn
}
func (c *ArrayBase) Validate() error {
chType := helper.FilterSimpleAggregate(c.chType)
switch {
case helper.IsRing(chType):
chType = helper.RingMainTypeStr
case helper.IsPolygon(chType):
chType = helper.PolygonMainTypeStr
case helper.IsMultiPolygon(chType):
chType = helper.MultiPolygonMainTypeStr
}
chType = helper.NestedToArrayType(chType)
if !helper.IsArray(chType) {
return ErrInvalidType{
column: c,
}
}
c.dataColumn.SetType(chType[helper.LenArrayStr : len(chType)-1])
if c.dataColumn.Validate() != nil {
return ErrInvalidType{
column: c,
}
}
return nil
}
func (c *ArrayBase) ColumnType() string {
return strings.ReplaceAll(helper.ArrayTypeStr, "<type>", c.dataColumn.ColumnType())
}
// WriteTo writes data to ClickHouse.
// It is used internally.
func (c *ArrayBase) WriteTo(w io.Writer) (int64, error) {
nw, err := c.offsetColumn.WriteTo(w)
if err != nil {
return 0, fmt.Errorf("write len data: %w", err)
}
n, errDataColumn := c.dataColumn.WriteTo(w)
return nw + n, errDataColumn
}
// HeaderWriter writes header data to the writer.
// It is used internally.
func (c *ArrayBase) HeaderWriter(w *readerwriter.Writer) {
c.dataColumn.HeaderWriter(w)
}
func (c *ArrayBase) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/array_nullable.go
================================================
package column
import "github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
// ArrayNullable is a column of Array(Nullable(T)) ClickHouse data type
type ArrayNullable[T comparable] struct {
Array[T]
dataColumn NullableColumn[T]
columnData []*T
}
// NewArrayNullable creates a new column of Array(Nullable(T)) ClickHouse data type
func NewArrayNullable[T comparable](dataColumn NullableColumn[T]) *ArrayNullable[T] {
a := &ArrayNullable[T]{
dataColumn: dataColumn,
Array: Array[T]{
ArrayBase: ArrayBase{
dataColumn: dataColumn,
offsetColumn: New[uint64](),
},
},
}
a.resetHook = func() {
a.columnData = a.columnData[:0]
}
return a
}
// DataP returns all the nullable data in the current block as a slice of pointers.
func (c *ArrayNullable[T]) DataP() [][]*T {
values := make([][]*T, c.offsetColumn.numRow)
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
values[i] = columnData[lastOffset:c.offsetColumn.Row(i)]
lastOffset = c.offsetColumn.Row(i)
}
return values
}
// ReadP reads all the nullable data in the current block as a slice of pointers and appends it to the input.
func (c *ArrayNullable[T]) ReadP(value [][]*T) [][]*T {
var lastOffset uint64
columnData := c.getColumnData()
for i := 0; i < c.offsetColumn.numRow; i++ {
value = append(value, columnData[lastOffset:c.offsetColumn.Row(i)])
lastOffset = c.offsetColumn.Row(i)
}
return value
}
// RowP returns the nullable values of the given row as a slice of pointers.
// NOTE: Row numbers start from zero
func (c *ArrayNullable[T]) RowP(row int) []*T {
var lastOffset uint64
if row != 0 {
lastOffset = c.offsetColumn.Row(row - 1)
}
var val []*T
val = append(val, c.getColumnData()[lastOffset:c.offsetColumn.Row(row)]...)
return val
}
// AppendP appends nullable values for insert
func (c *ArrayNullable[T]) AppendP(v ...[]*T) {
for _, v := range v {
c.AppendLen(len(v))
c.dataColumn.AppendP(v...)
}
}
// AppendItemP appends nullable item values for insert
//
// It should be used together with AppendLen.
//
// Example:
//
// c.AppendLen(2) // declare 2 items
// c.AppendItemP(val1, val2) // insert the 2 items
func (c *ArrayNullable[T]) AppendItemP(v ...*T) {
c.dataColumn.AppendP(v...)
}
// ArrayOf returns an Array column based on this column (Array(Array(Nullable(T))))
func (c *ArrayNullable[T]) ArrayOf() *Array2Nullable[T] {
return NewArray2Nullable(c)
}
// ReadRaw reads raw data from the reader. It runs automatically.
func (c *ArrayNullable[T]) ReadRaw(num int, r *readerwriter.Reader) error {
err := c.Array.ReadRaw(num, r)
if err != nil {
return err
}
c.columnData = c.dataColumn.DataP()
return nil
}
func (c *ArrayNullable[T]) getColumnData() []*T {
if len(c.columnData) == 0 {
c.columnData = c.dataColumn.DataP()
}
return c.columnData
}
func (c *ArrayNullable[T]) elem(arrayLevel int) ColumnBasic {
if arrayLevel > 0 {
return c.ArrayOf().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/base.go
================================================
package column
import (
"fmt"
"unsafe"
"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
)
// Base is a column for most (fixed-size) ClickHouse column types
type Base[T comparable] struct {
column
size int
numRow int
values []T
params []interface{}
}
// New creates a new column
func New[T comparable]() *Base[T] {
var tmpValue T
size := int(unsafe.Sizeof(tmpValue))
return &Base[T]{
size: size,
}
}
// Data returns all the data in the current block as a slice.
//
// NOTE: the returned slice is only valid for the current block; if you want to use it afterward, copy it or use Read.
func (c *Base[T]) Data() []T {
value := *(*[]T)(unsafe.Pointer(&c.b))
return value[:c.numRow]
}
// Read reads all the data in the current block and appends it to the input.
func (c *Base[T]) Read(value []T) []T {
return append(value, c.Data()...)
}
// Row returns the value of the given row.
// NOTE: Row numbers start from zero
func (c *Base[T]) Row(row int) T {
i := row * c.size
return *(*T)(unsafe.Pointer(&c.b[i]))
}
// Append appends values for insert
func (c *Base[T]) Append(v ...T) {
c.values = append(c.values, v...)
c.numRow += len(v)
}
// NumRow returns the number of rows in this block
func (c *Base[T]) NumRow() int {
return c.numRow
}
// Array returns an Array column based on this column
func (c *Base[T]) Array() *Array[T] {
return NewArray[T](c)
}
// Nullable returns a Nullable column based on this column
func (c *Base[T]) Nullable() *Nullable[T] {
return NewNullable[T](c)
}
// LC returns a LowCardinality column based on this column
func (c *Base[T]) LC() *LowCardinality[T] {
return NewLC[T](c)
}
// LowCardinality returns a LowCardinality column based on this column
func (c *Base[T]) LowCardinality() *LowCardinality[T] {
return NewLowCardinality[T](c)
}
// appendEmpty appends an empty value for insert.
// It is used internally for Nullable and LowCardinality(Nullable) columns.
func (c *Base[T]) appendEmpty() {
var emptyValue T
c.Append(emptyValue)
}
// Reset resets all statuses and buffered data.
//
// Read data does not need to be reset manually; it is reset automatically after each read.
//
// When inserting, buffers are reset only after the operation is successful.
// If an error occurs, you can safely call insert again.
func (c *Base[T]) Reset() {
c.numRow = 0
c.values = c.values[:0]
}
// SetWriteBufferSize sets the write buffer size (in number of rows).
// This buffer is only used for writing.
// Setting it avoids reallocating memory several times.
func (c *Base[T]) SetWriteBufferSize(row int) {
if cap(c.values) < row {
c.values = make([]T, 0, row)
}
}
// ReadRaw reads raw data from the reader. It runs automatically.
func (c *Base[T]) ReadRaw(num int, r *readerwriter.Reader) error {
c.Reset()
c.r = r
c.numRow = num
c.totalByte = num * c.size
err := c.readBuffer()
if err != nil {
err = fmt.Errorf("read data: %w", err)
}
c.readyBufferHook()
return err
}
func (c *Base[T]) readBuffer() error {
if cap(c.b) < c.totalByte {
c.b = make([]byte, c.totalByte)
} else {
c.b = c.b[:c.totalByte]
}
_, err := c.r.Read(c.b)
return err
}
// HeaderReader reads header data from the reader.
// It is used internally.
func (c *Base[T]) HeaderReader(r *readerwriter.Reader, readColumn bool, revision uint64) error {
c.r = r
return c.readColumn(readColumn, revision)
}
// HeaderWriter writes header data to the writer.
// It is used internally.
func (c *Base[T]) HeaderWriter(w *readerwriter.Writer) {
}
func (c *Base[T]) Elem(arrayLevel int, nullable, lc bool) ColumnBasic {
if nullable {
return c.Nullable().elem(arrayLevel, lc)
}
if lc {
return c.LowCardinality().elem(arrayLevel)
}
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
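`Base[T].Data` reinterprets the raw block buffer `c.b` as `[]T` by casting the slice header, avoiding a copy. A standalone sketch of the same zero-copy trick for `uint32`, using the newer stdlib helper `unsafe.Slice` (the package itself casts a slice header directly); the result is only meaningful on little-endian hosts, which is why the big-endian build reverses bytes first:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"unsafe"
)

// bytesToUint32 reinterprets a little-endian byte buffer as []uint32
// without copying, analogous to what Base[T].Data does with its
// block buffer.
func bytesToUint32(b []byte) []uint32 {
	if len(b) == 0 {
		return nil
	}
	return unsafe.Slice((*uint32)(unsafe.Pointer(&b[0])), len(b)/4)
}

func main() {
	b := make([]byte, 8)
	binary.LittleEndian.PutUint32(b[0:], 7)
	binary.LittleEndian.PutUint32(b[4:], 42)
	fmt.Println(bytesToUint32(b)) // [7 42] on little-endian hosts
}
```

Because the returned slice aliases the block buffer, it is invalidated when the next block is read into the same buffer — the reason for the "copy it or use Read" note on `Data`.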
================================================
FILE: column/base_big_cpu.go
================================================
//go:build !(386 || amd64 || amd64p32 || arm || arm64 || mipsle || mips64le || mips64p32le || ppc64le || riscv || riscv64)
// +build !386,!amd64,!amd64p32,!arm,!arm64,!mipsle,!mips64le,!mips64p32le,!ppc64le,!riscv,!riscv64
package column
import (
"io"
"unsafe"
)
// readyBufferHook reverses each value's bytes so the little-endian wire format is decoded correctly on big-endian CPUs
func (c *Base[T]) readyBufferHook() {
for i := 0; i < c.totalByte; i += c.size {
reverseBuffer(c.b[i : i+c.size])
}
}
func reverseBuffer(s []byte) {
for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
s[i], s[j] = s[j], s[i]
}
}
// slice is the runtime representation of a slice.
// It cannot be used safely or portably and its representation may
// change in a later release.
// Moreover, the Data field is not sufficient to guarantee the data
// it references will not be garbage collected, so programs must keep
// a separate, correctly typed pointer to the underlying data.
type slice struct {
Data uintptr
Len int
Cap int
}
func (c *Base[T]) WriteTo(w io.Writer) (int64, error) {
s := *(*slice)(unsafe.Pointer(&c.values))
s.Len *= c.size
s.Cap *= c.size
b := *(*[]byte)(unsafe.Pointer(&s))
// reverse each value into little-endian wire order before writing
for i := 0; i < len(b); i += c.size {
reverseBuffer(b[i : i+c.size])
}
nw, err := w.Write(b)
return int64(nw), err
}
================================================
FILE: column/base_little_cpu.go
================================================
//go:build 386 || amd64 || amd64p32 || arm || arm64 || mipsle || mips64le || mips64p32le || ppc64le || riscv || riscv64
// +build 386 amd64 amd64p32 arm arm64 mipsle mips64le mips64p32le ppc64le riscv riscv64
package column
import (
"io"
"unsafe"
)
// readyBufferHook is a no-op on little-endian CPUs: the in-memory layout already matches the little-endian wire format
func (c *Base[T]) readyBufferHook() {
}
// slice is the runtime representation of a slice.
// It cannot be used safely or portably and its representation may
// change in a later release.
// Moreover, the Data field is not sufficient to guarantee the data
// it references will not be garbage collected, so programs must keep
// a separate, correctly typed pointer to the underlying data.
type slice struct {
Data uintptr
Len int
Cap int
}
func (c *Base[T]) WriteTo(w io.Writer) (int64, error) {
s := *(*slice)(unsafe.Pointer(&c.values))
s.Len *= c.size
s.Cap *= c.size
src := *(*[]byte)(unsafe.Pointer(&s))
nw, err := w.Write(src)
return int64(nw), err
}
================================================
FILE: column/base_test.go
================================================
package column_test
import (
"context"
"fmt"
"math"
"math/big"
"net/netip"
"os"
"testing"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
"github.com/vahid-sohrabloo/chconn/v2/types"
)
func TestBool(t *testing.T) {
testColumn(t, true, "Bool", "bool", func(i int) bool {
return true
}, func(i int) bool {
return false
})
}
func TestBoolUint8(t *testing.T) {
testColumn(t, true, "UInt8", "bool", func(i int) bool {
return true
}, func(i int) bool {
return false
})
}
func TestUint8(t *testing.T) {
testColumn(t, true, "UInt8", "uint8", func(i int) uint8 {
return uint8(i)
}, func(i int) uint8 {
return uint8(i + 1)
})
}
func TestUint16(t *testing.T) {
testColumn(t, true, "UInt16", "uint16", func(i int) uint16 {
return uint16(i)
}, func(i int) uint16 {
return uint16(i + 1)
})
}
func TestUint32(t *testing.T) {
testColumn(t, true, "UInt32", "uint32", func(i int) uint32 {
return uint32(i)
}, func(i int) uint32 {
return uint32(i + 1)
})
}
func TestUint64(t *testing.T) {
testColumn(t, true, "UInt64", "uint64", func(i int) uint64 {
return uint64(i)
}, func(i int) uint64 {
return uint64(i + 1)
})
}
func TestUint128(t *testing.T) {
testColumn(t, true, "UInt128", "uint128", func(i int) types.Uint128 {
return types.Uint128FromBig(big.NewInt(int64(i)))
}, func(i int) types.Uint128 {
x := big.NewInt(int64(i))
x = x.Mul(x, big.NewInt(math.MaxInt64))
return types.Uint128FromBig(x)
})
}
func TestUint256(t *testing.T) {
testColumn(t, true, "UInt256", "uint256", func(i int) types.Uint256 {
return types.Uint256FromBig(big.NewInt(int64(i)))
}, func(i int) types.Uint256 {
x := big.NewInt(int64(i))
x = x.Mul(x, big.NewInt(math.MaxInt64))
x = x.Mul(x, big.NewInt(math.MaxInt64))
return types.Uint256FromBig(x)
})
}
func TestInt8(t *testing.T) {
testColumn(t, true, "Int8", "int8", func(i int) int8 {
return int8(i)
}, func(i int) int8 {
return int8(i + 1)
})
}
func TestInt16(t *testing.T) {
testColumn(t, true, "Int16", "int16", func(i int) int16 {
return int16(i)
}, func(i int) int16 {
return int16(i + 1)
})
}
func TestInt32(t *testing.T) {
testColumn(t, true, "Int32", "int32", func(i int) int32 {
return int32(i)
}, func(i int) int32 {
return int32(i + 1)
})
}
func TestInt64(t *testing.T) {
testColumn(t, true, "Int64", "int64", func(i int) int64 {
return int64(i)
}, func(i int) int64 {
return int64(i + 1)
})
}
func TestInt128(t *testing.T) {
testColumn(t, true, "Int128", "int128", func(i int) types.Int128 {
return types.Int128FromBig(big.NewInt(int64(i * -1)))
}, func(i int) types.Int128 {
x := big.NewInt(int64(i) * -1)
x = x.Mul(x, big.NewInt(math.MaxInt64))
return types.Int128FromBig(x)
})
}
func TestInt256(t *testing.T) {
testColumn(t, true, "Int256", "int256", func(i int) types.Int256 {
return types.Int256FromBig(big.NewInt(int64(i)))
}, func(i int) types.Int256 {
x := big.NewInt(int64(i) * -1)
x = x.Mul(x, big.NewInt(math.MaxInt64))
x = x.Mul(x, big.NewInt(math.MaxInt64))
return types.Int256FromBig(x)
})
}
func TestFixedString(t *testing.T) {
testColumn(t, true, "FixedString(2)", "fixedString", func(i int) [2]byte {
return [2]byte{byte(i), byte(i + 1)}
}, func(i int) [2]byte {
return [2]byte{byte(i + 1), byte(i + 2)}
})
}
func TestFloat32(t *testing.T) {
testColumn(t, true, "Float32", "float32", func(i int) float32 {
return float32(i)
}, func(i int) float32 {
return float32(i + 1)
})
}
func TestFloat64(t *testing.T) {
testColumn(t, true, "Float64", "float64", func(i int) float64 {
return float64(i)
}, func(i int) float64 {
return float64(i + 1)
})
}
func TestDecimal32(t *testing.T) {
testColumn(t, false, "Decimal32(3)", "decimal32", func(i int) types.Decimal32 {
return types.Decimal32(i)
}, func(i int) types.Decimal32 {
return types.Decimal32(i + 1)
})
}
func TestDecimal64(t *testing.T) {
testColumn(t, false, "Decimal64(3)", "decimal64", func(i int) types.Decimal64 {
return types.Decimal64(i)
}, func(i int) types.Decimal64 {
return types.Decimal64(i + 1)
})
}
func TestDecimal128(t *testing.T) {
testColumn(t, false, "Decimal128(3)", "decimal128", func(i int) types.Decimal128 {
return types.Decimal128(types.Int128FromBig(big.NewInt(int64(i))))
}, func(i int) types.Decimal128 {
return types.Decimal128(types.Int128FromBig(big.NewInt(int64(i + 1))))
})
}
func TestDecimal256(t *testing.T) {
testColumn(t, false, "Decimal256(3)", "decimal256", func(i int) types.Decimal256 {
return types.Decimal256(types.Int256FromBig(big.NewInt(int64(i))))
}, func(i int) types.Decimal256 {
return types.Decimal256(types.Int256FromBig(big.NewInt(int64(i + 1))))
})
}
func TestIPv4(t *testing.T) {
testColumn(t, true, "IPv4", "ipv4", func(i int) types.IPv4 {
// or directly return types.IPv4
return types.IPv4FromAddr(netip.AddrFrom4([4]byte{0, 0, 0, byte(i)}))
}, func(i int) types.IPv4 {
// or directly return types.IPv4
return types.IPv4FromAddr(netip.AddrFrom4([4]byte{0, 0, byte(i), 0}))
})
}
func TestIPv6(t *testing.T) {
testColumn(t, true, "IPv6", "ipv6", func(i int) types.IPv6 {
// or directly return types.IPv6
return types.IPv6FromAddr(netip.MustParseAddr("2001:0db8:85a3:0000:0000:8a2e:0370:7334"))
}, func(i int) types.IPv6 {
// or directly return types.IPv6
return types.IPv6FromAddr(netip.AddrFrom16([16]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, byte(i + 1)}))
})
}
func TestUUID(t *testing.T) {
testColumn(t, true, "UUID", "uuid", func(i int) types.UUID {
return types.UUIDFromBigEndian(uuid.New())
}, func(i int) types.UUID {
return types.UUIDFromBigEndian(uuid.New())
})
}
func testColumn[T comparable](
t *testing.T,
isLC bool,
chType, tableName string,
firstVal func(i int) T,
secondVal func(i int) T,
) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := chconn.Connect(context.Background(), connString)
require.NoError(t, err)
err = conn.Exec(context.Background(),
fmt.Sprintf(`DROP TABLE IF EXISTS test_%s`, tableName),
)
require.NoError(t, err)
set := chconn.Settings{
{
Name: "allow_suspicious_low_cardinality_types",
Value: "true",
Important: true,
},
}
var sqlCreate string
if isLC {
sqlCreate = fmt.Sprintf(`CREATE TABLE test_%[1]s (
block_id UInt8,
%[1]s %[2]s,
%[1]s_nullable Nullable(%[2]s),
%[1]s_array Array(%[2]s),
%[1]s_array_nullable Array(Nullable(%[2]s)),
%[1]s_lc LowCardinality(%[2]s),
%[1]s_nullable_lc LowCardinality(Nullable(%[2]s)),
%[1]s_array_lc Array(LowCardinality(%[2]s)),
%[1]s_array_lc_nullable Array(LowCardinality(Nullable(%[2]s)))
) Engine=Memory`, tableName, chType)
} else {
sqlCreate = fmt.Sprintf(`CREATE TABLE test_%[1]s (
block_id UInt8,
%[1]s %[2]s,
%[1]s_nullable Nullable(%[2]s),
%[1]s_array Array(%[2]s),
%[1]s_array_nullable Array(Nullable(%[2]s))
) Engine=Memory`, tableName, chType)
}
err = conn.ExecWithOption(context.Background(), sqlCreate, &chconn.QueryOptions{
Settings: set,
})
require.NoError(t, err)
blockID := column.New[uint8]()
col := column.New[T]()
colNullable := column.New[T]().Nullable()
colArray := column.New[T]().Array()
colNullableArray := column.New[T]().Nullable().Array()
colLC := column.New[T]().LC()
colLCNullable := column.New[T]().Nullable().LC()
colArrayLC := column.New[T]().LC().Array()
colArrayLCNullable := column.New[T]().Nullable().LC().Array()
var colInsert []T
var colNullableInsert []*T
var colArrayInsert [][]T
var colArrayNullableInsert [][]*T
var colLCInsert []T
var colLCNullableInsert []*T
var colLCArrayInsert [][]T
var colLCNullableArrayInsert [][]*T
// SetWriteBufferSize is not required; it is only shown here to demonstrate setting the write buffer
col.SetWriteBufferSize(10)
colNullable.SetWriteBufferSize(10)
colArray.SetWriteBufferSize(10)
colNullableArray.SetWriteBufferSize(10)
colLC.SetWriteBufferSize(10)
colLCNullable.SetWriteBufferSize(10)
colArrayLC.SetWriteBufferSize(10)
colArrayLCNullable.SetWriteBufferSize(10)
for insertN := 0; insertN < 2; insertN++ {
rows := 10
for i := 0; i < rows; i++ {
blockID.Append(uint8(insertN))
val := firstVal(i * (insertN + 1))
val2 := secondVal(i * (insertN + 1))
valArray := []T{val, val2}
valArrayNil := []*T{&val, nil}
col.Append(val)
colInsert = append(colInsert, val)
// example of appending nullable values
if i%2 == 0 {
colNullableInsert = append(colNullableInsert, &val)
colNullable.Append(val)
colLCNullableInsert = append(colLCNullableInsert, &val)
colLCNullable.Append(val)
} else {
colNullableInsert = append(colNullableInsert, nil)
colNullable.AppendNil()
colLCNullableInsert = append(colLCNullableInsert, nil)
colLCNullable.AppendNil()
}
colArray.Append(valArray)
colArrayInsert = append(colArrayInsert, valArray)
colNullableArray.AppendP(valArrayNil)
colArrayNullableInsert = append(colArrayNullableInsert, valArrayNil)
colLCInsert = append(colLCInsert, val)
colLC.Append(val)
colLCArrayInsert = append(colLCArrayInsert, valArray)
colArrayLC.Append(valArray)
colLCNullableArrayInsert = append(colLCNullableArrayInsert, valArrayNil)
colArrayLCNullable.AppendP(valArrayNil)
}
if isLC {
err = conn.Insert(context.Background(), fmt.Sprintf(`INSERT INTO
test_%[1]s (
block_id,
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
)
VALUES`, tableName),
blockID,
col,
colNullable,
colArray,
colNullableArray,
colLC,
colLCNullable,
colArrayLC,
colArrayLCNullable,
)
} else {
err = conn.Insert(context.Background(), fmt.Sprintf(`INSERT INTO
test_%[1]s (
block_id,
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
)
VALUES`, tableName),
blockID,
col,
colNullable,
colArray,
colNullableArray,
)
}
require.NoError(t, err)
}
// test read all
colRead := column.New[T]()
colNullableRead := column.New[T]().Nullable()
colArrayRead := column.New[T]().Array()
colNullableArrayRead := column.New[T]().Nullable().Array()
colLCRead := column.New[T]().LC()
colLCNullableRead := column.New[T]().Nullable().LC()
colArrayLCRead := column.New[T]().LC().Array()
colArrayLCNullableRead := column.New[T]().Nullable().LC().Array()
var selectStmt chconn.SelectStmt
if isLC {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s order by block_id`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
colLCRead,
colLCNullableRead,
colArrayLCRead,
colArrayLCNullableRead,
)
} else {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s order by block_id`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
)
}
require.NoError(t, err)
require.True(t, conn.IsBusy())
var colData []T
var colNullableData []*T
var colArrayData [][]T
var colArrayNullableData [][]*T
var colLCData []T
var colLCDataWithKeys []T
var dictData []T
var dictKey []int
var colLCNullableData []*T
var colLCArrayData [][]T
var colLCNullableArrayData [][]*T
for selectStmt.Next() {
colData = colRead.Read(colData)
colNullableData = colNullableRead.ReadP(colNullableData)
colArrayData = colArrayRead.Read(colArrayData)
colArrayNullableData = colNullableArrayRead.ReadP(colArrayNullableData)
if isLC {
colLCData = colLCRead.Read(colLCData)
colLCNullableData = colLCNullableRead.ReadP(colLCNullableData)
colLCArrayData = colArrayLCRead.Read(colLCArrayData)
colLCNullableArrayData = colArrayLCNullableRead.ReadP(colLCNullableArrayData)
dictData = colLCRead.Dicts()
dictKey = colLCRead.Keys()
// get data from dict and keys
for _, val := range dictKey {
colLCDataWithKeys = append(colLCDataWithKeys, dictData[val])
}
}
}
require.NoError(t, selectStmt.Err())
assert.Equal(t, colInsert, colData)
assert.Equal(t, colNullableInsert, colNullableData)
assert.Equal(t, colArrayInsert, colArrayData)
assert.Equal(t, colArrayNullableInsert, colArrayNullableData)
if isLC {
assert.Equal(t, colLCInsert, colLCData)
assert.Equal(t, colLCInsert, colLCDataWithKeys)
assert.Equal(t, colLCNullableInsert, colLCNullableData)
assert.Equal(t, colLCArrayInsert, colLCArrayData)
assert.Equal(t, colLCNullableArrayInsert, colLCNullableArrayData)
}
// test row
colRead = column.New[T]()
colNullableRead = column.New[T]().Nullable()
colArrayRead = column.New[T]().Array()
colNullableArrayRead = column.New[T]().Nullable().Array()
colLCRead = column.New[T]().LowCardinality()
colLCNullableRead = column.New[T]().Nullable().LowCardinality()
colArrayLCRead = column.New[T]().LowCardinality().Array()
colArrayLCNullableRead = column.New[T]().Nullable().LowCardinality().Array()
if isLC {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s order by block_id`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
colLCRead,
colLCNullableRead,
colArrayLCRead,
colArrayLCNullableRead,
)
} else {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s order by block_id`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
)
}
require.NoError(t, err)
require.True(t, conn.IsBusy())
colData = colData[:0]
colNullableData = colNullableData[:0]
colArrayData = colArrayData[:0]
colArrayNullableData = colArrayNullableData[:0]
colLCData = colLCData[:0]
colLCNullableData = colLCNullableData[:0]
colLCArrayData = colLCArrayData[:0]
colLCNullableArrayData = colLCNullableArrayData[:0]
for selectStmt.Next() {
for i := 0; i < selectStmt.RowsInBlock(); i++ {
colData = append(colData, colRead.Row(i))
colNullableData = append(colNullableData, colNullableRead.RowP(i))
colArrayData = append(colArrayData, colArrayRead.Row(i))
colArrayNullableData = append(colArrayNullableData, colNullableArrayRead.RowP(i))
if isLC {
colLCData = append(colLCData, colLCRead.Row(i))
colLCNullableData = append(colLCNullableData, colLCNullableRead.RowP(i))
colLCArrayData = append(colLCArrayData, colArrayLCRead.Row(i))
colLCNullableArrayData = append(colLCNullableArrayData, colArrayLCNullableRead.RowP(i))
}
}
}
require.NoError(t, selectStmt.Err())
assert.Equal(t, colInsert, colData)
assert.Equal(t, colNullableInsert, colNullableData)
assert.Equal(t, colArrayInsert, colArrayData)
assert.Equal(t, colArrayNullableInsert, colArrayNullableData)
if isLC {
assert.Equal(t, colLCInsert, colLCData)
assert.Equal(t, colLCNullableInsert, colLCNullableData)
assert.Equal(t, colLCArrayInsert, colLCArrayData)
assert.Equal(t, colLCNullableArrayInsert, colLCNullableArrayData)
}
// check dynamic column
if isLC {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s order by block_id`, tableName),
)
} else {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s order by block_id`, tableName),
)
}
require.NoError(t, err)
autoColumns := selectStmt.Columns()
if isLC {
assert.Len(t, autoColumns, 8)
if tableName == "bool" {
assert.Equal(t, column.New[uint8]().ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, column.New[uint8]().Nullable().ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, column.New[uint8]().Array().ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, column.New[uint8]().Nullable().Array().ColumnType(), autoColumns[3].ColumnType())
assert.Equal(t, column.New[uint8]().LowCardinality().ColumnType(), autoColumns[4].ColumnType())
assert.Equal(t, column.New[uint8]().Nullable().LowCardinality().ColumnType(), autoColumns[5].ColumnType())
assert.Equal(t, column.New[uint8]().LowCardinality().Array().ColumnType(), autoColumns[6].ColumnType())
assert.Equal(t, column.New[uint8]().Nullable().LowCardinality().Array().ColumnType(), autoColumns[7].ColumnType())
} else {
assert.Equal(t, colRead.ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, colNullableRead.ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, colArrayRead.ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, colNullableArrayRead.ColumnType(), autoColumns[3].ColumnType())
assert.Equal(t, colLCRead.ColumnType(), autoColumns[4].ColumnType())
assert.Equal(t, colLCNullableRead.ColumnType(), autoColumns[5].ColumnType())
assert.Equal(t, colArrayLCRead.ColumnType(), autoColumns[6].ColumnType())
assert.Equal(t, colArrayLCNullableRead.ColumnType(), autoColumns[7].ColumnType())
}
} else {
assert.Len(t, autoColumns, 4)
assert.Equal(t, colRead.ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, colNullableRead.ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, colArrayRead.ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, colNullableArrayRead.ColumnType(), autoColumns[3].ColumnType())
}
for selectStmt.Next() {
}
require.NoError(t, selectStmt.Err())
selectStmt.Close()
}
func TestEmptyCollection(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := chconn.Connect(context.Background(), connString)
require.NoError(t, err)
tableName := "empty_collection"
err = conn.Exec(context.Background(),
fmt.Sprintf(`DROP TABLE IF EXISTS test_%s`, tableName),
)
require.NoError(t, err)
set := chconn.Settings{
{
Name: "allow_suspicious_low_cardinality_types",
Value: "true",
},
}
sqlCreate := fmt.Sprintf(`CREATE TABLE test_%[1]s (
%[1]s_array Array(%[2]s),
%[1]s_array_nullable Array(Nullable(%[2]s)),
%[1]s_array_lc Array(LowCardinality(%[2]s)),
%[1]s_array_lc_nullable Array(LowCardinality(Nullable(%[2]s)))
) Engine=Memory`, tableName, "UInt16")
err = conn.ExecWithOption(context.Background(), sqlCreate, &chconn.QueryOptions{
Settings: set,
})
require.NoError(t, err)
colArray := column.New[uint16]().Array()
colNullableArray := column.New[uint16]().Nullable().Array()
colArrayLC := column.New[uint16]().LC().Array()
colArrayLCNullable := column.New[uint16]().Nullable().LC().Array()
colArray.Append()
colArray.Append([]uint16{})
colNullableArray.AppendP()
colNullableArray.AppendP([]*uint16{})
colArrayLC.Append()
colArrayLC.Append([]uint16{})
colArrayLCNullable.AppendP()
colArrayLCNullable.AppendP([]*uint16{})
err = conn.Insert(context.Background(), fmt.Sprintf(`INSERT INTO
test_%[1]s (
%[1]s_array,
%[1]s_array_nullable,
%[1]s_array_lc,
%[1]s_array_lc_nullable
)
VALUES`, tableName),
colArray,
colNullableArray,
colArrayLC,
colArrayLCNullable,
)
require.NoError(t, err)
// test read all
colArrayRead := column.New[uint16]().Array()
colNullableArrayRead := column.New[uint16]().Nullable().Array()
colArrayLCRead := column.New[uint16]().LC().Array()
colArrayLCNullableRead := column.New[uint16]().Nullable().LC().Array()
var selectStmt chconn.SelectStmt
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s_array,
%[1]s_array_nullable,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s `, tableName),
colArrayRead,
colNullableArrayRead,
colArrayLCRead,
colArrayLCNullableRead,
)
require.NoError(t, err)
require.True(t, conn.IsBusy())
var colArrayData [][]uint16
var colArrayNullableData [][]*uint16
var colLCArrayData [][]uint16
var colLCNullableArrayData [][]*uint16
for selectStmt.Next() {
colArrayData = colArrayRead.Read(colArrayData)
colArrayNullableData = colNullableArrayRead.ReadP(colArrayNullableData)
colLCArrayData = colArrayLCRead.Read(colLCArrayData)
colLCNullableArrayData = colArrayLCNullableRead.ReadP(colLCNullableArrayData)
}
require.NoError(t, selectStmt.Err())
assert.Equal(t, [][]uint16{{}}, colArrayData)
assert.Equal(t, [][]*uint16{{}}, colArrayNullableData)
assert.Equal(t, [][]uint16{{}}, colLCArrayData)
assert.Equal(t, [][]*uint16{{}}, colLCNullableArrayData)
}
================================================
FILE: column/base_validate.go
================================================
package column
import (
"bytes"
"fmt"
"strconv"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
)
var chColumnByteSize = map[string]int{
"Bool": 1,
"Int8": 1,
"Int16": 2,
"Int32": 4,
"Int64": 8,
"Int128": 16,
"Int256": 32,
"UInt8": 1,
"UInt16": 2,
"UInt32": 4,
"UInt64": 8,
"UInt128": 16,
"UInt256": 32,
"Float32": 4,
"Float64": 8,
"Date": 2,
"Date32": 4,
"DateTime": 4,
"DateTime64": 8,
"UUID": 16,
"IPv4": 4,
"IPv6": 16,
}
var byteChColumnType = map[int]string{
1: "Int8|UInt8|Enum8",
2: "Int16|UInt16|Enum16|Date",
4: "Int32|UInt32|Float32|Decimal32|Date32|DateTime|IPv4",
8: "Int64|UInt64|Float64|Decimal64|DateTime64",
16: "Int128|UInt128|Decimal128|IPv6|UUID",
32: "Int256|UInt256|Decimal256",
}
func (c *Base[T]) Validate() error {
chType := helper.FilterSimpleAggregate(c.chType)
if byteSize, ok := chColumnByteSize[string(chType)]; ok {
if byteSize != c.size {
return &ErrInvalidType{
column: c,
}
}
return nil
}
if ok, err := c.checkEnum8(chType); ok {
return err
}
if ok, err := c.checkEnum16(chType); ok {
return err
}
if ok, err := c.checkDateTime(chType); ok {
return err
}
if ok, err := c.checkDateTime64(chType); ok {
return err
}
if ok, err := c.checkFixedString(chType); ok {
return err
}
if ok, err := c.checkDecimal(chType); ok {
return err
}
return &ErrInvalidType{
column: c,
}
}
func (c *Base[T]) checkEnum8(chType []byte) (bool, error) {
if helper.IsEnum8(chType) {
if c.size != Uint8Size {
return true, &ErrInvalidType{
column: c,
}
}
return true, nil
}
return false, nil
}
func (c *Base[T]) checkEnum16(chType []byte) (bool, error) {
if helper.IsEnum16(chType) {
if c.size != Uint16Size {
return true, &ErrInvalidType{
column: c,
}
}
return true, nil
}
return false, nil
}
func (c *Base[T]) checkDateTime(chType []byte) (bool, error) {
if helper.IsDateTimeWithParam(chType) {
if c.size != 4 {
return true, &ErrInvalidType{
column: c,
}
}
c.params = []interface{}{
// precision
0,
// timezone
chType[helper.DateTimeStrLen : len(chType)-1],
}
return true, nil
}
return false, nil
}
func (c *Base[T]) checkDateTime64(chType []byte) (bool, error) {
if helper.IsDateTime64(chType) {
if c.size != 8 {
return true, &ErrInvalidType{
column: c,
}
}
parts := bytes.Split(chType[helper.DecimalStrLen:len(chType)-1], []byte(", "))
c.params = []interface{}{
parts[0],
[]byte{},
}
if len(parts) > 1 {
c.params[1] = parts[1]
}
return true, nil
}
return false, nil
}
func (c *Base[T]) checkFixedString(chType []byte) (bool, error) {
if helper.IsFixedString(chType) {
size, err := strconv.Atoi(string(chType[helper.FixedStringStrLen : len(chType)-1]))
if err != nil {
return true, fmt.Errorf("invalid size: %s", err)
}
if c.size != size {
return true, &ErrInvalidType{
column: c,
}
}
return true, nil
}
return false, nil
}
func (c *Base[T]) checkDecimal(chType []byte) (bool, error) {
if helper.IsDecimal(chType) {
parts := bytes.Split(chType[helper.DecimalStrLen:len(chType)-1], []byte(", "))
if len(parts) != 2 {
return true, fmt.Errorf("invalid decimal type (should have precision and scale): %s", c.chType)
}
precision, err := strconv.Atoi(string(parts[0]))
if err != nil {
return true, fmt.Errorf("invalid precision: %s", err)
}
scale, err := strconv.Atoi(string(parts[1]))
if err != nil {
return true, fmt.Errorf("invalid scale: %s", err)
}
c.params = []interface{}{precision, scale}
var size int
switch {
case precision >= 1 && precision <= 9:
size = 4
case precision >= 10 && precision <= 18:
size = 8
case precision >= 19 && precision <= 38:
size = 16
case precision >= 39 && precision <= 76:
size = 32
default:
return true, fmt.Errorf("invalid precision: %d. it should be between 1 and 76", precision)
}
if c.size != size {
return true, &ErrInvalidType{
column: c,
}
}
return true, nil
}
return false, nil
}
func (c *Base[T]) ColumnType() string {
if ok, _ := c.checkFixedString(c.chType); !ok {
if str, ok := byteChColumnType[c.size]; ok {
return str
}
}
return fmt.Sprintf("T(%d bytes size)", c.size)
}
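The precision ranges in checkDecimal map directly onto ClickHouse's Decimal32/64/128/256 storage widths. That rule can be sketched standalone (decimalByteSize is an illustrative helper invented here, not part of chconn):

```go
package main

import "fmt"

// decimalByteSize mirrors the precision-to-width rule used in
// Base.checkDecimal: Decimal32 holds up to 9 digits in 4 bytes,
// Decimal64 up to 18 digits in 8 bytes, Decimal128 up to 38 in 16,
// and Decimal256 up to 76 in 32. Anything outside 1..76 is invalid.
func decimalByteSize(precision int) (int, error) {
	switch {
	case precision >= 1 && precision <= 9:
		return 4, nil
	case precision >= 10 && precision <= 18:
		return 8, nil
	case precision >= 19 && precision <= 38:
		return 16, nil
	case precision >= 39 && precision <= 76:
		return 32, nil
	}
	return 0, fmt.Errorf("invalid precision: %d. it should be between 1 and 76", precision)
}

func main() {
	for _, p := range []int{9, 10, 38, 76, 0} {
		size, err := decimalByteSize(p)
		fmt.Println(p, size, err)
	}
}
```

Validate compares the result of this rule against the column's element size, which is why a `Decimal(20, 4)` column must be constructed with a 16-byte element type.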
================================================
FILE: column/bench_test.go
================================================
package column_test
import (
"context"
"testing"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
)
func BenchmarkTestChconnSelect100MUint64(b *testing.B) {
ctx := context.Background()
c, err := chconn.Connect(ctx, "password=salam")
if err != nil {
b.Fatal(err)
}
colRead := column.New[uint64]()
for n := 0; n < b.N; n++ {
s, err := c.Select(ctx, "SELECT number FROM system.numbers_mt LIMIT 100000000", colRead)
if err != nil {
b.Fatal(err)
}
for s.Next() {
colRead.Data()
}
if err := s.Err(); err != nil {
b.Fatal(err)
}
s.Close()
}
}
func BenchmarkTestChconnSelect1MString(b *testing.B) {
ctx := context.Background()
c, err := chconn.Connect(ctx, "password=salam")
if err != nil {
b.Fatal(err)
}
colRead := column.NewString()
var data [][]byte
for n := 0; n < b.N; n++ {
s, err := c.Select(ctx, "SELECT randomString(20) FROM system.numbers_mt LIMIT 1000000", colRead)
if err != nil {
b.Fatal(err)
}
for s.Next() {
data = data[:0]
colRead.DataBytes()
}
if err := s.Err(); err != nil {
b.Fatal(err)
}
s.Close()
}
}
func BenchmarkTestChconnInsert10M(b *testing.B) {
ctx := context.Background()
c, err := chconn.Connect(ctx, "password=salam")
if err != nil {
b.Fatal(err)
}
err = c.Exec(ctx, "DROP TABLE IF EXISTS test_insert_chconn")
if err != nil {
b.Fatal(err)
}
err = c.Exec(ctx, "CREATE TABLE test_insert_chconn (id UInt64) ENGINE = Null")
if err != nil {
b.Fatal(err)
}
const (
rowsInBlock = 10_000_000
)
idColumns := column.New[uint64]()
idColumns.SetWriteBufferSize(rowsInBlock)
for n := 0; n < b.N; n++ {
for y := 0; y < rowsInBlock; y++ {
idColumns.Append(1)
}
err := c.Insert(ctx, "INSERT INTO test_insert_chconn VALUES", idColumns)
if err != nil {
b.Fatal(err)
}
}
}
================================================
FILE: column/column_helper.go
================================================
package column
import (
"fmt"
"io"
"github.com/vahid-sohrabloo/chconn/v2/internal/helper"
"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter"
)
type ColumnBasic interface {
ReadRaw(num int, r *readerwriter.Reader) error
HeaderReader(r *readerwriter.Reader, readColumn bool, revision uint64) error
HeaderWriter(*readerwriter.Writer)
WriteTo(io.Writer) (int64, error)
NumRow() int
Reset()
SetType(v []byte)
Type() []byte
SetName(v []byte)
Name() []byte
Validate() error
ColumnType() string
SetWriteBufferSize(int)
}
type Column[T any] interface {
ColumnBasic
Data() []T
Read([]T) []T
Row(int) T
Append(...T)
}
type NullableColumn[T any] interface {
Column[T]
DataP() []*T
ReadP([]*T) []*T
RowP(int) *T
AppendP(...*T)
}
type column struct {
r *readerwriter.Reader
b []byte
totalByte int
name []byte
chType []byte
parent ColumnBasic
}
func (c *column) readColumn(readColumn bool, revision uint64) error {
if c.parent != nil || !readColumn {
return nil
}
strLen, err := c.r.Uvarint()
if err != nil {
return fmt.Errorf("read column name length: %w", err)
}
if cap(c.name) < int(strLen) {
c.name = make([]byte, strLen)
} else {
c.name = c.name[:strLen]
}
_, err = c.r.Read(c.name)
if err != nil {
return fmt.Errorf("read column name: %w", err)
}
strLen, err = c.r.Uvarint()
if err != nil {
return fmt.Errorf("read column type length: %w", err)
}
if cap(c.chType) < int(strLen) {
c.chType = make([]byte, strLen)
} else {
c.chType = c.chType[:strLen]
}
_, err = c.r.Read(c.chType)
if err != nil {
return fmt.Errorf("read column type: %w", err)
}
if revision >= helper.DbmsMinProtocolWithCustomSerialization {
hasCustomSerialization, err := c.r.ReadByte()
if err != nil {
return fmt.Errorf("read custom serialization: %w", err)
}
// TODO: check with JSON object
if hasCustomSerialization == 1 {
return fmt.Errorf("custom serialization not supported")
}
}
return nil
}
// Name returns the name of the column
func (c *column) Name() []byte {
return c.name
}
// Type returns the ClickHouse type of the column
func (c *column) Type() []byte {
return c.chType
}
// SetName sets the name of the column
func (c *column) SetName(v []byte) {
c.name = v
}
// SetType sets the ClickHouse type of the column
func (c *column) SetType(v []byte) {
c.chType = v
}
================================================
FILE: column/date.go
================================================
package column
import (
"strings"
"time"
"unsafe"
)
// DateType is an interface that handles conversion between time.Time and T.
type DateType[T any] interface {
comparable
FromTime(val time.Time, precision int) T
ToTime(val *time.Location, precision int) time.Time
}
// Date is a date column for the ClickHouse date types (Date, Date32, DateTime, DateTime64).
// It is a wrapper around time.Time. If you want to work with the raw data (for example a unix timestamp),
// you can use `Column` (`New[T]()`) directly.
//
// Use `uint16`, `types.Date`, or any 16-bit data type for `Date`.
//
// Use `uint32`, `types.Date32`, or any 32-bit data type for `Date32`.
//
// Use `uint32`, `types.DateTime`, or any 32-bit data type for `DateTime`.
//
// Use `uint64`, `types.DateTime64`, or any 64-bit data type for `DateTime64`.
type Date[T DateType[T]] struct {
Base[T]
loc *time.Location
precision int
}
// NewDate creates a new date column for the ClickHouse date types (Date, Date32, DateTime, DateTime64).
// It is a wrapper around time.Time. If you want to work with the raw data (for example a unix timestamp),
// you can use `Column` (`New[T]()`) directly.
//
// Use `uint16`, `types.Date`, or any 16-bit data type for `Date`.
//
// Use `uint32`, `types.Date32`, or any 32-bit data type for `Date32`.
//
// Use `uint32`, `types.DateTime`, or any 32-bit data type for `DateTime`.
//
// Use `uint64`, `types.DateTime64`, or any 64-bit data type for `DateTime64`.
//
// ONLY ON SELECT: the timezone is set automatically for `DateTime` and `DateTime64` if it is not set and is present in the ClickHouse data type.
func NewDate[T DateType[T]]() *Date[T] {
var tmpValue T
size := int(unsafe.Sizeof(tmpValue))
return &Date[T]{
Base: Base[T]{
size: size,
},
}
}
// SetLocation sets the location of the time.Time. Only used for `DateTime` and `DateTime64`.
func (c *Date[T]) SetLocation(loc *time.Location) *Date[T] {
c.loc = loc
return c
}
// Location returns the location.
//
// ONLY ON SELECT: set automatically for `DateTime` and `DateTime64` if it is not set and is present in the ClickHouse data type.
func (c *Date[T]) Location() *time.Location {
if c.loc == nil && len(c.params) >= 2 && len(c.params[1].([]byte)) > 0 {
loc, err := time.LoadLocation(strings.Trim(string(c.params[1].([]byte)), "'"))
if err == nil {
c.SetLocation(loc)
} else {
c.SetLocation(time.Local)
}
}
if c.loc == nil {
c.SetLocation(time.Local)
}
return c.loc
}
// SetPrecision sets the precision of the time.Time. Only used for `DateTime64`.
func (c *Date[T]) SetPrecision(precision int) *Date[T] {
c.precision = precision
return c
}
// Data returns all the data in the current block as a slice.
func (c *Date[T]) Data() []time.Time {
values := make([]time.Time, c.numRow)
for i := 0; i < c.numRow; i++ {
values[i] = c.Row(i)
}
return values
}
// Read reads all the data in the current block and appends it to the input slice.
func (c *Date[T]) Read(value []time.Time) []time.Time {
if cap(value)-len(value) >= c.NumRow() {
value = (value)[:len(value)+c.NumRow()]
} else {
value = append(value, make([]time.Time, c.NumRow())...)
}
val := (value)[len(value)-c.NumRow():]
for i := 0; i < c.NumRow(); i++ {
val[i] = c.Row(i)
}
return value
}
// Row returns the value of the given row.
// NOTE: Row numbers start from zero.
func (c *Date[T]) Row(row int) time.Time {
i := row * c.size
return (*(*T)(unsafe.Pointer(&c.b[i]))).ToTime(c.Location(), c.precision)
}
// Append appends values for insert.
func (c *Date[T]) Append(v ...time.Time) {
var val T
for _, v := range v {
c.values = append(c.values, val.FromTime(v, c.precision))
}
c.numRow += len(v)
}
// Array returns an Array column type for this column.
func (c *Date[T]) Array() *Array[time.Time] {
return NewArray[time.Time](c)
}
// Nullable returns a Nullable column type for this column.
func (c *Date[T]) Nullable() *Nullable[time.Time] {
return NewNullable[time.Time](c)
}
// LC returns a LowCardinality column type for this column.
func (c *Date[T]) LC() *LowCardinality[time.Time] {
return NewLC[time.Time](c)
}
// LowCardinality returns a LowCardinality column type for this column.
func (c *Date[T]) LowCardinality() *LowCardinality[time.Time] {
return NewLC[time.Time](c)
}
func (c *Date[T]) Elem(arrayLevel int, nullable, lc bool) ColumnBasic {
if nullable {
return c.Nullable().elem(arrayLevel, lc)
}
if lc {
return c.LowCardinality().elem(arrayLevel)
}
if arrayLevel > 0 {
return c.Array().elem(arrayLevel - 1)
}
return c
}
================================================
FILE: column/date_test.go
================================================
package column_test
import (
"context"
"fmt"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
"github.com/vahid-sohrabloo/chconn/v2/types"
)
func TestDate(t *testing.T) {
testDateColumn(t, true, "Date", "date", func(i int) time.Time {
return time.Date(2020, 1, i, 0, 0, 0, 0, time.UTC)
}, func(i int) time.Time {
return time.Date(2020, 1, i+1, 0, 0, 0, 0, time.UTC)
}, func() *column.Date[types.Date] {
return column.NewDate[types.Date]()
})
}
func TestDate32(t *testing.T) {
testDateColumn(t, true, "Date32", "date32", func(i int) time.Time {
return time.Date(2020, 1, i, 0, 0, 0, 0, time.UTC)
}, func(i int) time.Time {
return time.Date(2020, 1, i+1, 0, 0, 0, 0, time.UTC)
}, func() *column.Date[types.Date32] {
return column.NewDate[types.Date32]()
})
}
func TestDateTime(t *testing.T) {
testDateColumn(t, true, "DateTime", "dateTime", func(i int) time.Time {
return time.Date(2020, 1, i, 0, 0, i+1, 0, time.Local)
}, func(i int) time.Time {
return time.Date(2020, 1, i, 0, 0, i+2, 0, time.Local)
}, func() *column.Date[types.DateTime] {
return column.NewDate[types.DateTime]()
})
}
func TestDateTimeTimezone(t *testing.T) {
testDateColumn(t, true, "DateTime('America/New_York')", "dateTime_timezone", func(i int) time.Time {
loc, err := time.LoadLocation("America/New_York")
require.NoError(t, err)
return time.Date(2020, 1, i, 0, 0, i+1, 0, loc)
}, func(i int) time.Time {
loc, err := time.LoadLocation("America/New_York")
require.NoError(t, err)
return time.Date(2020, 1, i, 0, 0, i+2, 0, loc)
}, func() *column.Date[types.DateTime] {
return column.NewDate[types.DateTime]()
})
}
func TestDateTime64(t *testing.T) {
testDateColumn(t, false, "DateTime64(9, 'America/New_York')", "dateTime64", func(i int) time.Time {
loc, err := time.LoadLocation("America/New_York")
require.NoError(t, err)
return time.Date(2020, 1, i, 0, 0, i+1, i+110, loc)
}, func(i int) time.Time {
loc, err := time.LoadLocation("America/New_York")
require.NoError(t, err)
return time.Date(2020, 1, i, 0, 0, i+1, i+1101, loc)
}, func() *column.Date[types.DateTime64] {
return column.NewDate[types.DateTime64]().SetPrecision(9)
})
}
func testDateColumn[T column.DateType[T]](
t *testing.T,
isLC bool,
chType, tableName string,
firstVal func(i int) time.Time,
secondVal func(i int) time.Time,
getBaseColumn func() *column.Date[T],
) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := chconn.Connect(context.Background(), connString)
require.NoError(t, err)
err = conn.Exec(context.Background(),
fmt.Sprintf(`DROP TABLE IF EXISTS test_%s`, tableName),
)
require.NoError(t, err)
set := chconn.Settings{
{
Name: "allow_suspicious_low_cardinality_types",
Value: "true",
},
}
var sqlCreate string
if isLC {
sqlCreate = fmt.Sprintf(`CREATE TABLE test_%[1]s (
%[1]s %[2]s,
%[1]s_nullable Nullable(%[2]s),
%[1]s_array Array(%[2]s),
%[1]s_array_nullable Array(Nullable(%[2]s)),
%[1]s_lc LowCardinality(%[2]s),
%[1]s_nullable_lc LowCardinality(Nullable(%[2]s)),
%[1]s_array_lc Array(LowCardinality(%[2]s)),
%[1]s_array_lc_nullable Array(LowCardinality(Nullable(%[2]s)))
) Engine=Memory`, tableName, chType)
} else {
sqlCreate = fmt.Sprintf(`CREATE TABLE test_%[1]s (
%[1]s %[2]s,
%[1]s_nullable Nullable(%[2]s),
%[1]s_array Array(%[2]s),
%[1]s_array_nullable Array(Nullable(%[2]s))
) Engine=Memory`, tableName, chType)
}
err = conn.ExecWithOption(context.Background(), sqlCreate, &chconn.QueryOptions{
Settings: set,
})
require.NoError(t, err)
col := getBaseColumn()
colNullable := getBaseColumn().Nullable()
colArray := getBaseColumn().Array()
colNullableArray := getBaseColumn().Nullable().Array()
colLC := getBaseColumn().LC()
colLCNullable := getBaseColumn().Nullable().LC()
colArrayLC := getBaseColumn().LC().Array()
colArrayLCNullable := getBaseColumn().Nullable().LC().Array()
var colInsert []time.Time
var colNullableInsert []*time.Time
var colArrayInsert [][]time.Time
var colArrayNullableInsert [][]*time.Time
var colLCInsert []time.Time
var colLCNullableInsert []*time.Time
var colLCArrayInsert [][]time.Time
var colLCNullableArrayInsert [][]*time.Time
// SetWriteBufferSize is not necessary; this just shows how to set the write buffer size.
col.SetWriteBufferSize(10)
colNullable.SetWriteBufferSize(10)
colArray.SetWriteBufferSize(10)
colNullableArray.SetWriteBufferSize(10)
colLC.SetWriteBufferSize(10)
colLCNullable.SetWriteBufferSize(10)
colArrayLC.SetWriteBufferSize(10)
colArrayLCNullable.SetWriteBufferSize(10)
for insertN := 0; insertN < 2; insertN++ {
rows := 10
for i := 0; i < rows; i++ {
val := firstVal(i)
val2 := secondVal(i)
valArray := []time.Time{val, val2}
valArrayNil := []*time.Time{&val, nil}
col.Append(val)
colInsert = append(colInsert, val)
// example of appending nullable values
if i%2 == 0 {
colNullableInsert = append(colNullableInsert, &val)
colNullable.Append(val)
colLCNullableInsert = append(colLCNullableInsert, &val)
colLCNullable.Append(val)
} else {
colNullableInsert = append(colNullableInsert, nil)
colNullable.AppendNil()
colLCNullableInsert = append(colLCNullableInsert, nil)
colLCNullable.AppendNil()
}
colArray.Append(valArray)
colArrayInsert = append(colArrayInsert, valArray)
colNullableArray.AppendP(valArrayNil)
colArrayNullableInsert = append(colArrayNullableInsert, valArrayNil)
colLCInsert = append(colLCInsert, val)
colLC.Append(val)
colLCArrayInsert = append(colLCArrayInsert, valArray)
colArrayLC.Append(valArray)
colLCNullableArrayInsert = append(colLCNullableArrayInsert, valArrayNil)
colArrayLCNullable.AppendP(valArrayNil)
}
if isLC {
err = conn.Insert(context.Background(), fmt.Sprintf(`INSERT INTO
test_%[1]s (
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
)
VALUES`, tableName),
col,
colNullable,
colArray,
colNullableArray,
colLC,
colLCNullable,
colArrayLC,
colArrayLCNullable,
)
} else {
err = conn.Insert(context.Background(), fmt.Sprintf(`INSERT INTO
test_%[1]s (
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
)
VALUES`, tableName),
col,
colNullable,
colArray,
colNullableArray,
)
}
require.NoError(t, err)
}
// test read all
colRead := getBaseColumn()
colNullableRead := getBaseColumn().Nullable()
colArrayRead := getBaseColumn().Array()
colNullableArrayRead := getBaseColumn().Nullable().Array()
colLCRead := getBaseColumn().LC()
colLCNullableRead := getBaseColumn().Nullable().LC()
colArrayLCRead := getBaseColumn().LC().Array()
colArrayLCNullableRead := getBaseColumn().Nullable().LC().Array()
var selectStmt chconn.SelectStmt
if isLC {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
colLCRead,
colLCNullableRead,
colArrayLCRead,
colArrayLCNullableRead,
)
} else {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
)
}
require.NoError(t, err)
require.True(t, conn.IsBusy())
var colData []time.Time
var colNullableData []*time.Time
var colArrayData [][]time.Time
var colArrayNullableData [][]*time.Time
var colLCData []time.Time
var colLCNullableData []*time.Time
var colLCArrayData [][]time.Time
var colLCNullableArrayData [][]*time.Time
for selectStmt.Next() {
colData = colRead.Read(colData)
colNullableData = colNullableRead.ReadP(colNullableData)
colArrayData = colArrayRead.Read(colArrayData)
colArrayNullableData = colNullableArrayRead.ReadP(colArrayNullableData)
if isLC {
colLCData = colLCRead.Read(colLCData)
colLCNullableData = colLCNullableRead.ReadP(colLCNullableData)
colLCArrayData = colArrayLCRead.Read(colLCArrayData)
colLCNullableArrayData = colArrayLCNullableRead.ReadP(colLCNullableArrayData)
}
}
require.NoError(t, selectStmt.Err())
assert.Equal(t, colInsert, colData)
assert.Equal(t, colNullableInsert, colNullableData)
assert.Equal(t, colArrayInsert, colArrayData)
assert.Equal(t, colArrayNullableInsert, colArrayNullableData)
if isLC {
assert.Equal(t, colLCInsert, colLCData)
assert.Equal(t, colLCNullableInsert, colLCNullableData)
assert.Equal(t, colLCArrayInsert, colLCArrayData)
assert.Equal(t, colLCNullableArrayInsert, colLCNullableArrayData)
}
// test row
colRead = getBaseColumn()
colNullableRead = getBaseColumn().Nullable()
colArrayRead = getBaseColumn().Array()
colNullableArrayRead = getBaseColumn().Nullable().Array()
colLCRead = getBaseColumn().LowCardinality()
colLCNullableRead = getBaseColumn().Nullable().LowCardinality()
colArrayLCRead = getBaseColumn().LowCardinality().Array()
colArrayLCNullableRead = getBaseColumn().Nullable().LowCardinality().Array()
if isLC {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
colLCRead,
colLCNullableRead,
colArrayLCRead,
colArrayLCNullableRead,
)
} else {
selectStmt, err = conn.Select(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s`, tableName),
colRead,
colNullableRead,
colArrayRead,
colNullableArrayRead,
)
}
require.NoError(t, err)
require.True(t, conn.IsBusy())
colData = colData[:0]
colNullableData = colNullableData[:0]
colArrayData = colArrayData[:0]
colArrayNullableData = colArrayNullableData[:0]
colLCData = colLCData[:0]
colLCNullableData = colLCNullableData[:0]
colLCArrayData = colLCArrayData[:0]
colLCNullableArrayData = colLCNullableArrayData[:0]
for selectStmt.Next() {
for i := 0; i < selectStmt.RowsInBlock(); i++ {
colData = append(colData, colRead.Row(i))
colNullableData = append(colNullableData, colNullableRead.RowP(i))
colArrayData = append(colArrayData, colArrayRead.Row(i))
colArrayNullableData = append(colArrayNullableData, colNullableArrayRead.RowP(i))
if isLC {
colLCData = append(colLCData, colLCRead.Row(i))
colLCNullableData = append(colLCNullableData, colLCNullableRead.RowP(i))
colLCArrayData = append(colLCArrayData, colArrayLCRead.Row(i))
colLCNullableArrayData = append(colLCNullableArrayData, colArrayLCNullableRead.RowP(i))
}
}
}
require.NoError(t, selectStmt.Err())
assert.Equal(t, colInsert, colData)
assert.Equal(t, colNullableInsert, colNullableData)
assert.Equal(t, colArrayInsert, colArrayData)
assert.Equal(t, colArrayNullableInsert, colArrayNullableData)
if isLC {
assert.Equal(t, colLCInsert, colLCData)
assert.Equal(t, colLCNullableInsert, colLCNullableData)
assert.Equal(t, colLCArrayInsert, colLCArrayData)
assert.Equal(t, colLCNullableArrayInsert, colLCNullableArrayData)
}
// check dynamic column
if isLC {
selectStmt, err = conn.SelectWithOption(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s`,
tableName),
&chconn.QueryOptions{
UseGoTime: false,
},
)
} else {
selectStmt, err = conn.SelectWithOption(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s`, tableName,
),
&chconn.QueryOptions{
UseGoTime: false,
},
)
}
require.NoError(t, err)
autoColumns := selectStmt.Columns()
if isLC {
assert.Len(t, autoColumns, 8)
assert.Equal(t, column.New[T]().ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, column.New[T]().Nullable().ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, column.New[T]().Array().ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, column.New[T]().Nullable().Array().ColumnType(), autoColumns[3].ColumnType())
assert.Equal(t, column.New[T]().LowCardinality().ColumnType(), autoColumns[4].ColumnType())
assert.Equal(t, column.New[T]().Nullable().LowCardinality().ColumnType(), autoColumns[5].ColumnType())
assert.Equal(t, column.New[T]().LowCardinality().Array().ColumnType(), autoColumns[6].ColumnType())
assert.Equal(t, column.New[T]().Nullable().LowCardinality().Array().ColumnType(), autoColumns[7].ColumnType())
} else {
assert.Len(t, autoColumns, 4)
assert.Equal(t, column.New[T]().ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, column.New[T]().Nullable().ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, column.New[T]().Array().ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, column.New[T]().Nullable().Array().ColumnType(), autoColumns[3].ColumnType())
}
for selectStmt.Next() {
}
require.NoError(t, selectStmt.Err())
selectStmt.Close()
// check dynamic column
if isLC {
selectStmt, err = conn.SelectWithOption(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable,
%[1]s_lc,
%[1]s_nullable_lc,
%[1]s_array_lc,
%[1]s_array_lc_nullable
FROM test_%[1]s`,
tableName),
&chconn.QueryOptions{
UseGoTime: true,
},
)
} else {
selectStmt, err = conn.SelectWithOption(context.Background(), fmt.Sprintf(`SELECT
%[1]s,
%[1]s_nullable,
%[1]s_array,
%[1]s_array_nullable
FROM test_%[1]s`, tableName,
),
&chconn.QueryOptions{
UseGoTime: true,
},
)
}
require.NoError(t, err)
autoColumns = selectStmt.Columns()
if isLC {
assert.Len(t, autoColumns, 8)
assert.Equal(t, colRead.ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, colNullableRead.ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, colArrayRead.ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, colNullableArrayRead.ColumnType(), autoColumns[3].ColumnType())
assert.Equal(t, colLCRead.ColumnType(), autoColumns[4].ColumnType())
assert.Equal(t, colLCNullableRead.ColumnType(), autoColumns[5].ColumnType())
assert.Equal(t, colArrayLCRead.ColumnType(), autoColumns[6].ColumnType())
assert.Equal(t, colArrayLCNullableRead.ColumnType(), autoColumns[7].ColumnType())
} else {
assert.Len(t, autoColumns, 4)
assert.Equal(t, colRead.ColumnType(), autoColumns[0].ColumnType())
assert.Equal(t, colNullableRead.ColumnType(), autoColumns[1].ColumnType())
assert.Equal(t, colArrayRead.ColumnType(), autoColumns[2].ColumnType())
assert.Equal(t, colNullableArrayRead.ColumnType(), autoColumns[3].ColumnType())
}
for selectStmt.Next() {
}
require.NoError(t, selectStmt.Err())
selectStmt.Close()
}
func TestInvalidNegativeTimes(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
conn, err := chconn.Connect(context.Background(), connString)
require.NoError(t, err)
err = conn.Exec(context.Background(),
`DROP TABLE IF EXISTS test_invalid_dates`,
)
require.NoError(t, err)
set := chconn.Settings{
{
Name: "allow_suspicious_low_cardinality_types",
Value: "true",
},
}
sqlCreate := `CREATE TABLE test_invalid_dates (
date Date,
date32 Date32,
dateTime DateTime,
dateTime64 DateTime64(3)
) Engine=Memory`
err = conn.ExecWithOption(context.Background(), sqlCreate, &chconn.QueryOptions{
Settings: set,
})
require.NoError(t, err)
colDate := column.NewDate[types.Date]()
colDate32 := column.NewDate[types.Date32]()
colDateTime := column.NewDate[types.DateTime]()
colDateTime64 := column.NewDate[types.DateTime64]()
invalidTime := time.Unix(-3208988700, 0) // 1868
colDate.Append(invalidTime)
colDate32.Append(invalidTime)
colDateTime.Append(invalidTime)
colDateTime64.Append(invalidTime)
err = conn.Insert(context.Background(), `INSERT INTO
test_invalid_dates (
date,
date32,
dateTime,
dateTime64
)
VALUES`,
colDate,
colDate32,
colDateTime,
colDateTime64,
)
require.NoError(t, err)
// test read all
colDateRead := column.NewDate[types.Date]()
colDate32Read := column.NewDate[types.Date32]()
colDateTimeRead := column.NewDate[types.DateTime]()
colDateTime64Read := column.NewDate[types.DateTime64]()
var selectStmt chconn.SelectStmt
selectStmt, err = conn.Select(context.Background(), `SELECT
date,
date32,
dateTime,
dateTime64
FROM test_invalid_dates`,
colDateRead,
colDate32Read,
colDateTimeRead,
colDateTime64Read,
)
require.NoError(t, err)
require.True(t, conn.IsBusy())
for selectStmt.Next() {
}
assert.Equal(t, colDateRead.Row(0).In(time.UTC).Format(time.RFC3339), "1970-01-01T00:00:00Z")
assert.Equal(t, colDate32Read.Row(0).In(time.UTC).Format(time.RFC3339), "1900-01-01T00:00:00Z")
assert.Equal(t, colDateTimeRead.Row(0).In(time.UTC).Format(time.RFC3339), "1970-01-01T00:00:00Z")
assert.Equal(t, colDateTime64Read.Row(0).In(time.UTC).Format(time.RFC3339), "1900-01-01T00:00:00Z")
require.NoError(t, selectStmt.Err())
}
================================================
FILE: column/error_test.go
================================================
package column_test
import (
"context"
"errors"
"fmt"
"io"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/vahid-sohrabloo/chconn/v2"
"github.com/vahid-sohrabloo/chconn/v2/column"
"github.com/vahid-sohrabloo/chconn/v2/types"
)
func TestInsertColumnLowCardinalityError(t *testing.T) {
t.Parallel()
connString := os.Getenv("CHX_TEST_TCP_CONN_STRING")
config, err := chconn.ParseConfig(connString)
require.NoError(t, err)
c, err := chconn.ConnectConfig(context.Background(), config)
require.NoError(t, err)
err = c.Exec(context.Background(), `DROP TABLE IF EXISTS clickhouse_test_insert_column_error_lc`)
require.NoError(t, err)
err = c.Exec(context.Background(), `CREATE TABLE clickhouse_test_insert_column_error_lc (
col LowCardinality(String)
) Engine=Memory`)
require.NoError(t, err)
startValidReader := 3
tests := []struct {
name string
wantErr string
numberValid int
}{
{
name: "write header",
wantErr: "block: write header block data for column col (timeout)",
numberValid: startValidReader,
},
{
name: "write stype",
wantErr: "block: write block data for column col (error writing stype: timeout)",
numberValid: startValidReader + 1,
},
{
name: "write dictionarySize",
wantErr: "block: write block data for column col (error writing dictionarySize: timeout)",
numberValid: startValidReader + 2,
},
{
name: "write dictionary",
wantErr: "block: write block data for column col (error writing dictionary: timeout)",
numberValid: startValidReader + 3,
},
{
name: "write keys len",
wantErr: "block: write block data for column col (error writing keys len: timeout)",
numberValid: startValidReader + 4,
},
{
name: "write indices",
wantErr: "block: write block data for column col (error writing indices: timeout)",
numberValid: startValidReader + 5,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
config.WriterFunc = func(w io.Writer) io.Writer {
return &writerErrorHelper{
err: errors.New("timeout"),
w: w,
numberValid: tt.numberValid,
}
}
c, err = chconn.ConnectConfig(context.Background(), config)
require.NoError(t, err)
col := column.NewString().LowCardinality()
col.Append("test")
err = c.Insert(context.Background(),
"insert into clickhouse_test_insert_column_error_lc (col) VALUES",
col,
)
require.EqualError(t, errors.Unwrap(err), tt.wantErr)
assert.True(t, c.IsClosed())
})
}
}
func TestSelectReadLCError(t *testing.T) {
startValidReader := 36
tests := []struct {
name string
wantErr string
numberValid int
}{
{
name: "read column name length",
wantErr: "read column header: read column name length: timeout",
numberValid: startValidReader,
},
{
name: "read column name",
wantErr: "read column header: read column name: timeout",
numberValid: startValidReader + 1,
},
{
name: "read column type length",
wantErr: "read column header: read column type length: timeout",
numberValid: startValidReader + 2,
},
{
name: "read column type error",
wantErr: "read column header: read column type: timeout",
numberValid: startValidReader + 3,
},
{
name: "read custom serialization",
wantErr: "read column header: read custom serialization: timeout",
numberValid: startValidReader + 4,
},
{
name: "error reading keys serialization version",
wantErr: "read column header: error reading keys serialization version: timeout",
numberValid: startValidReader + 5,
},
{
name: "error reading serialization type",
wantErr: "read data \"toLowCardinality(toString(number))\": error reading serialization type: timeout",
numberValid: startValidReader + 6,
},
{
name: "error reading dictionary size",
wantErr: "read data \"toLowCardinality(toString(number))\": error reading dictionary size: timeout",
numberValid: startValidReader + 7,
},
{
name: "error reading dictionary",
wantErr: "read data \"toLowCardinality(toString(number))\": error reading dictionary: error read string len: timeout",
numberValid: startValidReader + 8,
},
{
name: "error reading string len",
wantErr: "read dat
SYMBOL INDEX (1703 symbols across 106 files)
FILE: block.go
type chColumn (line 13) | type chColumn struct
type block (line 18) | type block struct
method reset (line 32) | func (block *block) reset() {
method read (line 39) | func (block *block) read(ch *conn) error {
method readColumns (line 64) | func (block *block) readColumns(ch *conn) error {
method readColumnsData (line 79) | func (block *block) readColumnsData(ch *conn, needValidateData bool, c...
method reorderColumns (line 100) | func (block *block) reorderColumns(columns []column.ColumnBasic) ([]co...
method nextColumn (line 127) | func (block *block) nextColumn(ch *conn) (chColumn, error) {
method writeHeader (line 148) | func (block *block) writeHeader(ch *conn, numRows int) error {
method writeColumnsBuffer (line 166) | func (block *block) writeColumnsBuffer(ch *conn, columns ...column.Col...
function newBlock (line 26) | func newBlock() *block {
function findColumn (line 118) | func findColumn(columns []column.ColumnBasic, name []byte) (int, column....
type blockInfo (line 200) | type blockInfo struct
method read (line 208) | func (info *blockInfo) read(r *readerwriter.Reader) error {
method write (line 228) | func (info *blockInfo) write(w *readerwriter.Writer) {
FILE: block_test.go
function TestBlockReadError (line 14) | func TestBlockReadError(t *testing.T) {
FILE: chconn.go
constant connStatusUninitialized (line 21) | connStatusUninitialized = iota
constant connStatusConnecting (line 22) | connStatusConnecting
constant connStatusClosed (line 23) | connStatusClosed
constant connStatusIdle (line 24) | connStatusIdle
constant connStatusBusy (line 25) | connStatusBusy
constant clientHello (line 30) | clientHello = 0
constant clientQuery (line 33) | clientQuery = 1
constant clientData (line 35) | clientData = 2
constant clientPing (line 37) | clientPing = 4
constant serverHello (line 42) | serverHello = 0
constant serverData (line 44) | serverData = 1
constant serverException (line 46) | serverException = 2
constant serverProgress (line 48) | serverProgress = 3
constant serverPong (line 50) | serverPong = 4
constant serverEndOfStream (line 52) | serverEndOfStream = 5
constant serverProfileInfo (line 54) | serverProfileInfo = 6
constant serverTotals (line 56) | serverTotals = 7
constant serverExtremes (line 58) | serverExtremes = 8
constant serverTableColumns (line 60) | serverTableColumns = 11
constant serverPartUUIDs (line 63) | serverPartUUIDs = 12
constant serverReadTaskRequest (line 66) | serverReadTaskRequest = 13
constant serverProfileEvents (line 68) | serverProfileEvents = 14
constant dbmsVersionMajor (line 72) | dbmsVersionMajor = 1
constant dbmsVersionMinor (line 73) | dbmsVersionMinor = 0
constant dbmsVersionPatch (line 74) | dbmsVersionPatch = 0
constant dbmsVersionRevision (line 75) | dbmsVersionRevision = 54460
type queryProcessingStage (line 78) | type queryProcessingStage
constant queryProcessingStageComplete (line 83) | queryProcessingStageComplete queryProcessingStage = 2
type DialFunc (line 87) | type DialFunc
type LookupFunc (line 90) | type LookupFunc
type ReaderFunc (line 93) | type ReaderFunc
type WriterFunc (line 97) | type WriterFunc
type Conn (line 100) | type Conn interface
type writeFlusher (line 163) | type writeFlusher interface
type conn (line 168) | type conn struct
method sendAddendum (line 387) | func (ch *conn) sendAddendum() {
method flushCompress (line 393) | func (ch *conn) flushCompress() error {
method RawConn (line 400) | func (ch *conn) RawConn() net.Conn {
method hello (line 405) | func (ch *conn) hello() error {
method IsClosed (line 430) | func (ch *conn) IsClosed() bool {
method IsBusy (line 435) | func (ch *conn) IsBusy() bool {
method lock (line 440) | func (ch *conn) lock() error {
method unlock (line 453) | func (ch *conn) unlock() {
method sendQueryWithOption (line 463) | func (ch *conn) sendQueryWithOption(
method sendData (line 514) | func (ch *conn) sendData(block *block, numRows int) error {
method sendEmptyBlock (line 529) | func (ch *conn) sendEmptyBlock() error {
method Close (line 534) | func (ch *conn) Close() error {
method readTableColumn (line 543) | func (ch *conn) readTableColumn() {
method receiveAndProcessData (line 548) | func (ch *conn) receiveAndProcessData(onProgress func(*Progress)) (int...
method Exec (line 627) | func (ch *conn) Exec(ctx context.Context, query string) error {
method ExecWithOption (line 631) | func (ch *conn) ExecWithOption(
function Connect (line 193) | func Connect(ctx context.Context, connString string) (Conn, error) {
function ConnectConfig (line 209) | func ConnectConfig(octx context.Context, config *Config) (c Conn, err er...
function expandWithIPs (line 278) | func expandWithIPs(ctx context.Context, lookupFn LookupFunc, fallbacks [...
function connect (line 312) | func connect(ctx context.Context, config *Config, fallbackConfig *Fallba...
type QueryOptions (line 617) | type QueryOptions struct
FILE: chconn_test.go
function TestConnect (line 16) | func TestConnect(t *testing.T) {
function TestConnectError (line 35) | func TestConnectError(t *testing.T) {
function TestEndOfStream (line 107) | func TestEndOfStream(t *testing.T) {
function TestException (line 127) | func TestException(t *testing.T) {
function TestTlsPreferConnect (line 144) | func TestTlsPreferConnect(t *testing.T) {
function TestConnectConfigRequiresConnConfigFromParseConfig (line 167) | func TestConnectConfigRequiresConnConfigFromParseConfig(t *testing.T) {
function TestLockError (line 177) | func TestLockError(t *testing.T) {
function TestUnlockError (line 200) | func TestUnlockError(t *testing.T) {
function TestExecError (line 214) | func TestExecError(t *testing.T) {
function TestExecCtxError (line 246) | func TestExecCtxError(t *testing.T) {
function TestReceivePackError (line 277) | func TestReceivePackError(t *testing.T) {
FILE: chpool/common_test.go
function waitForReleaseToComplete (line 17) | func waitForReleaseToComplete() {
type execer (line 21) | type execer interface
function testExec (line 25) | func testExec(t *testing.T, db execer) {
type selecter (line 30) | type selecter interface
function testSelect (line 34) | func testSelect(t *testing.T, db selecter) {
function assertConfigsEqual (line 52) | func assertConfigsEqual(t *testing.T, expected, actual *Config, testName...
function assertConnConfigsEqual (line 76) | func assertConnConfigsEqual(t *testing.T, expected, actual *chconn.Confi...
FILE: chpool/conn.go
type Conn (line 13) | type Conn interface
type conn (line 43) | type conn struct
method Release (line 50) | func (c *conn) Release() {
method Hijack (line 98) | func (c *conn) Hijack() chconn.Conn {
method ExecWithOption (line 112) | func (c *conn) ExecWithOption(
method Ping (line 120) | func (c *conn) Ping(ctx context.Context) error {
method SelectWithOption (line 124) | func (c *conn) SelectWithOption(
method InsertWithOption (line 140) | func (c *conn) InsertWithOption(ctx context.Context, query string, que...
method InsertStreamWithOption (line 143) | func (c *conn) InsertStreamWithOption(ctx context.Context, query strin...
method Conn (line 154) | func (c *conn) Conn() chconn.Conn {
method connResource (line 158) | func (c *conn) connResource() *connResource {
FILE: chpool/insert_stmt.go
type insertStmt (line 9) | type insertStmt struct
method Flush (line 14) | func (s *insertStmt) Flush(ctx context.Context) error {
method Close (line 22) | func (s *insertStmt) Close() {
FILE: chpool/pool.go
type connResource (line 27) | type connResource struct
method getConn (line 32) | func (cr *connResource) getConn(p *pool, res *puddle.Resource[*connRes...
type Pool (line 47) | type Pool interface
type pool (line 120) | type pool struct
method Close (line 411) | func (p *pool) Close() {
method isExpired (line 418) | func (p *pool) isExpired(res *puddle.Resource[*connResource]) bool {
method triggerHealthCheck (line 433) | func (p *pool) triggerHealthCheck() {
method backgroundHealthCheck (line 445) | func (p *pool) backgroundHealthCheck() {
method checkHealth (line 460) | func (p *pool) checkHealth() {
method checkConnsHealth (line 484) | func (p *pool) checkConnsHealth() bool {
method checkMinConns (line 509) | func (p *pool) checkMinConns() error {
method createIdleResources (line 520) | func (p *pool) createIdleResources(targetResources int) error {
method Acquire (line 547) | func (p *pool) Acquire(ctx context.Context) (Conn, error) {
method AcquireFunc (line 575) | func (p *pool) AcquireFunc(ctx context.Context, f func(Conn) error) er...
method AcquireAllIdle (line 587) | func (p *pool) AcquireAllIdle(ctx context.Context) []Conn {
method Reset (line 607) | func (p *pool) Reset() {
method Config (line 612) | func (p *pool) Config() *Config { return p.config.Copy() }
method Stat (line 615) | func (p *pool) Stat() *Stat {
method Exec (line 624) | func (p *pool) Exec(ctx context.Context, query string) error {
method ExecWithOption (line 628) | func (p *pool) ExecWithOption(
method Select (line 647) | func (p *pool) Select(ctx context.Context, query string, columns ...co...
method SelectWithOption (line 651) | func (p *pool) SelectWithOption(
method Insert (line 675) | func (p *pool) Insert(ctx context.Context, query string, columns ...co...
method InsertWithOption (line 679) | func (p *pool) InsertWithOption(ctx context.Context, query string, que...
method InsertStream (line 695) | func (p *pool) InsertStream(ctx context.Context, query string) (chconn...
method InsertStreamWithOption (line 699) | func (p *pool) InsertStreamWithOption(ctx context.Context, query strin...
method Ping (line 720) | func (p *pool) Ping(ctx context.Context) error {
type Config (line 146) | type Config struct
method Copy (line 195) | func (c *Config) Copy() *Config {
method ConnString (line 203) | func (c *Config) ConnString() string { return c.ConnConfig.ConnString() }
function New (line 206) | func New(connString string) (Pool, error) {
function NewWithConfig (line 216) | func NewWithConfig(config *Config) (Pool, error) {
function ParseConfig (line 313) | func ParseConfig(connString string) (*Config, error) {
FILE: chpool/pool_test.go
function TestNew (line 19) | func TestNew(t *testing.T) {
function TestNewWithConfig (line 28) | func TestNewWithConfig(t *testing.T) {
function TestParseConfigExtractsPoolArguments (line 39) | func TestParseConfigExtractsPoolArguments(t *testing.T) {
function TestConnectConfigRequiresConnConfigFromParseConfig (line 61) | func TestConnectConfigRequiresConnConfigFromParseConfig(t *testing.T) {
function TestConfigCopyReturnsEqualConfig (line 71) | func TestConfigCopyReturnsEqualConfig(t *testing.T) {
function TestConfigCopyCanBeUsedToNew (line 81) | func TestConfigCopyCanBeUsedToNew(t *testing.T) {
function TestPoolAcquireAndConnRelease (line 93) | func TestPoolAcquireAndConnRelease(t *testing.T) {
function TestPoolAcquireAndConnHijack (line 105) | func TestPoolAcquireAndConnHijack(t *testing.T) {
function TestPoolAcquireFunc (line 134) | func TestPoolAcquireFunc(t *testing.T) {
function TestPoolAcquireFuncReturnsFnError (line 147) | func TestPoolAcquireFuncReturnsFnError(t *testing.T) {
function TestPoolBeforeConnect (line 160) | func TestPoolBeforeConnect(t *testing.T) {
function TestPoolAfterConnect (line 178) | func TestPoolAfterConnect(t *testing.T) {
function TestPoolBeforeAcquire (line 199) | func TestPoolBeforeAcquire(t *testing.T) {
function TestPoolAfterRelease (line 240) | func TestPoolAfterRelease(t *testing.T) {
function TestPoolAcquireAllIdle (line 270) | func TestPoolAcquireAllIdle(t *testing.T) {
function TestPoolReset (line 298) | func TestPoolReset(t *testing.T) {
function TestConnReleaseChecksMaxConnLifetime (line 323) | func TestConnReleaseChecksMaxConnLifetime(t *testing.T) {
function TestConnReleaseClosesBusyConn (line 347) | func TestConnReleaseClosesBusyConn(t *testing.T) {
function TestPoolBackgroundChecksMaxConnLifetime (line 375) | func TestPoolBackgroundChecksMaxConnLifetime(t *testing.T) {
function TestPoolBackgroundChecksMaxConnIdleTime (line 399) | func TestPoolBackgroundChecksMaxConnIdleTime(t *testing.T) {
function TestPoolBackgroundChecksMinConns (line 431) | func TestPoolBackgroundChecksMinConns(t *testing.T) {
function TestPoolExec (line 465) | func TestPoolExec(t *testing.T) {
function TestPoolExecError (line 475) | func TestPoolExecError(t *testing.T) {
function TestPoolSelect (line 491) | func TestPoolSelect(t *testing.T) {
function TestPoolSelectError (line 536) | func TestPoolSelectError(t *testing.T) {
function TestPoolAcquireSelectError (line 562) | func TestPoolAcquireSelectError(t *testing.T) {
function TestPoolInsert (line 582) | func TestPoolInsert(t *testing.T) {
function TestPoolInsertError (line 623) | func TestPoolInsertError(t *testing.T) {
function TestPoolInsertStream (line 647) | func TestPoolInsertStream(t *testing.T) {
function TestConnReleaseClosesConnInFailedTransaction (line 694) | func TestConnReleaseClosesConnInFailedTransaction(t *testing.T) {
function TestConnReleaseDestroysClosedConn (line 722) | func TestConnReleaseDestroysClosedConn(t *testing.T) {
function TestConnPoolQueryConcurrentLoad (line 753) | func TestConnPoolQueryConcurrentLoad(t *testing.T) {
function TestParseConfigError (line 775) | func TestParseConfigError(t *testing.T) {
function TestNewParseError (line 833) | func TestNewParseError(t *testing.T) {
function TestNewError (line 841) | func TestNewError(t *testing.T) {
function TestIdempotentPoolClose (line 863) | func TestIdempotentPoolClose(t *testing.T) {
function TestConnectEagerlyReachesMinPoolSize (line 874) | func TestConnectEagerlyReachesMinPoolSize(t *testing.T) {
FILE: chpool/select_stmt.go
type selectStmt (line 7) | type selectStmt struct
method Next (line 12) | func (s *selectStmt) Next() bool {
method Close (line 28) | func (s *selectStmt) Close() {
FILE: chpool/stat.go
type Stat (line 10) | type Stat struct
method AcquireCount (line 18) | func (s *Stat) AcquireCount() int64 {
method AcquireDuration (line 24) | func (s *Stat) AcquireDuration() time.Duration {
method AcquiredConns (line 29) | func (s *Stat) AcquiredConns() int32 {
method CanceledAcquireCount (line 35) | func (s *Stat) CanceledAcquireCount() int64 {
method ConstructingConns (line 41) | func (s *Stat) ConstructingConns() int32 {
method EmptyAcquireCount (line 48) | func (s *Stat) EmptyAcquireCount() int64 {
method IdleConns (line 53) | func (s *Stat) IdleConns() int32 {
method MaxConns (line 58) | func (s *Stat) MaxConns() int32 {
method TotalConns (line 65) | func (s *Stat) TotalConns() int32 {
method NewConnsCount (line 70) | func (s *Stat) NewConnsCount() int64 {
method MaxLifetimeDestroyCount (line 76) | func (s *Stat) MaxLifetimeDestroyCount() int64 {
method MaxIdleDestroyCount (line 82) | func (s *Stat) MaxIdleDestroyCount() int64 {
FILE: client_info.go
type ClientInfo (line 13) | type ClientInfo struct
method write (line 32) | func (c *ClientInfo) write(ch *conn) {
method fillOSUserHostNameAndVersionInfo (line 77) | func (c *ClientInfo) fillOSUserHostNameAndVersionInfo() {
FILE: column/array.go
type Array (line 4) | type Array struct
function NewArray (line 10) | func NewArray[T any](dataColumn Column[T]) *Array[T] {
method Data (line 24) | func (c *Array[T]) Data() [][]T {
method Read (line 39) | func (c *Array[T]) Read(value [][]T) [][]T {
method Row (line 54) | func (c *Array[T]) Row(row int) []T {
method Append (line 65) | func (c *Array[T]) Append(v ...[]T) {
method AppendItem (line 80) | func (c *Array[T]) AppendItem(v ...T) {
method Array (line 85) | func (c *Array[T]) Array() *Array2[T] {
method getColumnData (line 89) | func (c *Array[T]) getColumnData() []T {
method elem (line 96) | func (c *Array[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/array2.go
type Array2 (line 4) | type Array2 struct
function NewArray2 (line 9) | func NewArray2[T any](array *Array[T]) *Array2[T] {
method Data (line 20) | func (c *Array2[T]) Data() [][][]T {
method Read (line 29) | func (c *Array2[T]) Read(value [][][]T) [][][]T {
method Row (line 44) | func (c *Array2[T]) Row(row int) [][]T {
method Append (line 58) | func (c *Array2[T]) Append(v ...[][]T) {
method elem (line 65) | func (c *Array2[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/array2_nullable.go
type Array2Nullable (line 6) | type Array2Nullable struct
function NewArray2Nullable (line 13) | func NewArray2Nullable[T comparable](dataColumn *ArrayNullable[T]) *Arra...
method DataP (line 30) | func (c *Array2Nullable[T]) DataP() [][][]*T {
method ReadP (line 42) | func (c *Array2Nullable[T]) ReadP(value [][][]*T) [][][]*T {
method RowP (line 54) | func (c *Array2Nullable[T]) RowP(row int) [][]*T {
method AppendP (line 65) | func (c *Array2Nullable[T]) AppendP(v ...[][]*T) {
method ReadRaw (line 73) | func (c *Array2Nullable[T]) ReadRaw(num int, r *readerwriter.Reader) err...
method Array (line 83) | func (c *Array2Nullable[T]) Array() *Array3Nullable[T] {
method getColumnData (line 87) | func (c *Array2Nullable[T]) getColumnData() [][]*T {
method elem (line 94) | func (c *Array2Nullable[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/array3.go
type Array3 (line 4) | type Array3 struct
function NewArray3 (line 9) | func NewArray3[T any](array *Array2[T]) *Array3[T] {
method Data (line 20) | func (c *Array3[T]) Data() [][][][]T {
method Read (line 29) | func (c *Array3[T]) Read(value [][][][]T) [][][][]T {
method Row (line 44) | func (c *Array3[T]) Row(row int) [][][]T {
method Append (line 58) | func (c *Array3[T]) Append(v ...[][][]T) {
method Array (line 66) | func (c *Array2[T]) Array() *Array3[T] {
method elem (line 70) | func (c *Array3[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/array3_nullable.go
type Array3Nullable (line 6) | type Array3Nullable struct
function NewArray3Nullable (line 13) | func NewArray3Nullable[T comparable](dataColumn *Array2Nullable[T]) *Arr...
method DataP (line 30) | func (c *Array3Nullable[T]) DataP() [][][][]*T {
method ReadP (line 42) | func (c *Array3Nullable[T]) ReadP(value [][][][]*T) [][][][]*T {
method RowP (line 54) | func (c *Array3Nullable[T]) RowP(row int) [][][]*T {
method AppendP (line 65) | func (c *Array3Nullable[T]) AppendP(v ...[][][]*T) {
method ReadRaw (line 73) | func (c *Array3Nullable[T]) ReadRaw(num int, r *readerwriter.Reader) err...
method getColumnData (line 82) | func (c *Array3Nullable[T]) getColumnData() [][][]*T {
method elem (line 89) | func (c *Array3Nullable[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/array_base.go
type ArrayBase (line 16) | type ArrayBase struct
method AppendLen (line 34) | func (c *ArrayBase) AppendLen(v int) {
method NumRow (line 40) | func (c *ArrayBase) NumRow() int {
method Array (line 45) | func (c *ArrayBase) Array() *ArrayBase {
method Reset (line 55) | func (c *ArrayBase) Reset() {
method Offsets (line 63) | func (c *ArrayBase) Offsets() []uint64 {
method TotalRows (line 68) | func (c *ArrayBase) TotalRows() int {
method SetWriteBufferSize (line 78) | func (c *ArrayBase) SetWriteBufferSize(row int) {
method ReadRaw (line 84) | func (c *ArrayBase) ReadRaw(num int, r *readerwriter.Reader) error {
method HeaderReader (line 103) | func (c *ArrayBase) HeaderReader(r *readerwriter.Reader, readColumn bo...
method Column (line 118) | func (c *ArrayBase) Column() ColumnBasic {
method Validate (line 122) | func (c *ArrayBase) Validate() error {
method ColumnType (line 149) | func (c *ArrayBase) ColumnType() string {
method WriteTo (line 155) | func (c *ArrayBase) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 167) | func (c *ArrayBase) HeaderWriter(w *readerwriter.Writer) {
method elem (line 171) | func (c *ArrayBase) elem(arrayLevel int) ColumnBasic {
function NewArrayBase (line 25) | func NewArrayBase(dataColumn ColumnBasic) *ArrayBase {
FILE: column/array_nullable.go
type ArrayNullable (line 6) | type ArrayNullable struct
function NewArrayNullable (line 13) | func NewArrayNullable[T comparable](dataColumn NullableColumn[T]) *Array...
method DataP (line 30) | func (c *ArrayNullable[T]) DataP() [][]*T {
method ReadP (line 42) | func (c *ArrayNullable[T]) ReadP(value [][]*T) [][]*T {
method RowP (line 54) | func (c *ArrayNullable[T]) RowP(row int) []*T {
method AppendP (line 65) | func (c *ArrayNullable[T]) AppendP(v ...[]*T) {
method AppendItemP (line 80) | func (c *ArrayNullable[T]) AppendItemP(v ...*T) {
method ArrayOf (line 85) | func (c *ArrayNullable[T]) ArrayOf() *Array2Nullable[T] {
method ReadRaw (line 90) | func (c *ArrayNullable[T]) ReadRaw(num int, r *readerwriter.Reader) error {
method getColumnData (line 99) | func (c *ArrayNullable[T]) getColumnData() []*T {
method elem (line 106) | func (c *ArrayNullable[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/base.go
type Base (line 11) | type Base struct
function New (line 20) | func New[T comparable]() *Base[T] {
method Data (line 31) | func (c *Base[T]) Data() []T {
method Read (line 37) | func (c *Base[T]) Read(value []T) []T {
method Row (line 43) | func (c *Base[T]) Row(row int) T {
method Append (line 49) | func (c *Base[T]) Append(v ...T) {
method NumRow (line 55) | func (c *Base[T]) NumRow() int {
method Array (line 60) | func (c *Base[T]) Array() *Array[T] {
method Nullable (line 65) | func (c *Base[T]) Nullable() *Nullable[T] {
method LC (line 70) | func (c *Base[T]) LC() *LowCardinality[T] {
method LowCardinality (line 75) | func (c *Base[T]) LowCardinality() *LowCardinality[T] {
method appendEmpty (line 81) | func (c *Base[T]) appendEmpty() {
method Reset (line 92) | func (c *Base[T]) Reset() {
method SetWriteBufferSize (line 100) | func (c *Base[T]) SetWriteBufferSize(row int) {
method ReadRaw (line 107) | func (c *Base[T]) ReadRaw(num int, r *readerwriter.Reader) error {
method readBuffer (line 120) | func (c *Base[T]) readBuffer() error {
method HeaderReader (line 132) | func (c *Base[T]) HeaderReader(r *readerwriter.Reader, readColumn bool, ...
method HeaderWriter (line 139) | func (c *Base[T]) HeaderWriter(w *readerwriter.Writer) {
method Elem (line 142) | func (c *Base[T]) Elem(arrayLevel int, nullable, lc bool) ColumnBasic {
FILE: column/base_big_cpu.go
method readyBufferHook (line 7) | func (c *Base[T]) readyBufferHook() {
function reverseBuffer (line 13) | func reverseBuffer(s []byte) {
type slice (line 25) | type slice struct
method WriteTo (line 31) | func (c *Base[T]) WriteTo(w io.Writer) (int64, error) {
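The big-endian build file above lists `reverseBuffer(s []byte)`, which suggests a byte-order fixup applied before writing buffers on big-endian CPUs (ClickHouse's native format is little-endian). A minimal stdlib-only sketch, assuming the helper simply mirrors a byte slice in place (the real implementation's chunking semantics are not shown in the index):

```go
package main

import "fmt"

// reverseBuffer flips a byte slice in place.
// Mirrors the signature listed for column/base_big_cpu.go; the exact
// semantics of the library's version are an assumption here.
func reverseBuffer(s []byte) {
	for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
		s[i], s[j] = s[j], s[i]
	}
}

func main() {
	b := []byte{1, 2, 3, 4}
	reverseBuffer(b)
	fmt.Println(b) // [4 3 2 1]
}
```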
FILE: column/base_little_cpu.go
method readyBufferHook (line 11) | func (c *Base[T]) readyBufferHook() {
type slice (line 20) | type slice struct
method WriteTo (line 26) | func (c *Base[T]) WriteTo(w io.Writer) (int64, error) {
FILE: column/base_test.go
function TestBool (line 20) | func TestBool(t *testing.T) {
function TestBoolUint8 (line 28) | func TestBoolUint8(t *testing.T) {
function TestUint8 (line 36) | func TestUint8(t *testing.T) {
function TestUint16 (line 44) | func TestUint16(t *testing.T) {
function TestUint32 (line 52) | func TestUint32(t *testing.T) {
function TestUint64 (line 60) | func TestUint64(t *testing.T) {
function TestUint128 (line 68) | func TestUint128(t *testing.T) {
function TestUint256 (line 78) | func TestUint256(t *testing.T) {
function TestInt8 (line 89) | func TestInt8(t *testing.T) {
function TestInt16 (line 97) | func TestInt16(t *testing.T) {
function TestInt32 (line 105) | func TestInt32(t *testing.T) {
function TestInt64 (line 113) | func TestInt64(t *testing.T) {
function TestInt128 (line 121) | func TestInt128(t *testing.T) {
function TestInt256 (line 131) | func TestInt256(t *testing.T) {
function TestFixedString (line 141) | func TestFixedString(t *testing.T) {
function TestFloat32 (line 149) | func TestFloat32(t *testing.T) {
function TestFloat64 (line 157) | func TestFloat64(t *testing.T) {
function TestDecimal32 (line 165) | func TestDecimal32(t *testing.T) {
function TestDecimal64 (line 172) | func TestDecimal64(t *testing.T) {
function TestDecimal128 (line 180) | func TestDecimal128(t *testing.T) {
function TestDecimal256 (line 188) | func TestDecimal256(t *testing.T) {
function TestIPv4 (line 196) | func TestIPv4(t *testing.T) {
function TestIPv6 (line 206) | func TestIPv6(t *testing.T) {
function TestUUID (line 216) | func TestUUID(t *testing.T) {
function testColumn (line 224) | func testColumn[T comparable](
function TestEmptyCollection (line 622) | func TestEmptyCollection(t *testing.T) {
FILE: column/base_validate.go
method Validate (line 45) | func (c *Base[T]) Validate() error {
method checkEnum8 (line 87) | func (c *Base[T]) checkEnum8(chType []byte) (bool, error) {
method checkEnum16 (line 99) | func (c *Base[T]) checkEnum16(chType []byte) (bool, error) {
method checkDateTime (line 111) | func (c *Base[T]) checkDateTime(chType []byte) (bool, error) {
method checkDateTime64 (line 129) | func (c *Base[T]) checkDateTime64(chType []byte) (bool, error) {
method checkFixedString (line 149) | func (c *Base[T]) checkFixedString(chType []byte) (bool, error) {
method checkDecimal (line 165) | func (c *Base[T]) checkDecimal(chType []byte) (bool, error) {
method ColumnType (line 204) | func (c *Base[T]) ColumnType() string {
FILE: column/bench_test.go
function BenchmarkTestChconnSelect100MUint64 (line 11) | func BenchmarkTestChconnSelect100MUint64(b *testing.B) {
function BenchmarkTestChconnSelect1MString (line 35) | func BenchmarkTestChconnSelect1MString(b *testing.B) {
function BenchmarkTestChconnInsert10M (line 61) | func BenchmarkTestChconnInsert10M(b *testing.B) {
FILE: column/column_helper.go
type ColumnBasic (line 11) | type ColumnBasic interface
type Column (line 27) | type Column interface
type NullableColumn (line 35) | type NullableColumn interface
type column (line 43) | type column struct
method readColumn (line 52) | func (c *column) readColumn(readColumn bool, revision uint64) error {
method Name (line 99) | func (c *column) Name() []byte {
method Type (line 104) | func (c *column) Type() []byte {
method SetName (line 109) | func (c *column) SetName(v []byte) {
method SetType (line 114) | func (c *column) SetType(v []byte) {
FILE: column/date.go
type DateType (line 10) | type DateType interface
type Date (line 27) | type Date struct
function NewDate (line 47) | func NewDate[T DateType[T]]() *Date[T] {
method SetLocation (line 58) | func (c *Date[T]) SetLocation(loc *time.Location) *Date[T] {
method Location (line 66) | func (c *Date[T]) Location() *time.Location {
method SetPrecision (line 82) | func (c *Date[T]) SetPrecision(precision int) *Date[T] {
method Data (line 88) | func (c *Date[T]) Data() []time.Time {
method Read (line 97) | func (c *Date[T]) Read(value []time.Time) []time.Time {
method Row (line 112) | func (c *Date[T]) Row(row int) time.Time {
method Append (line 118) | func (c *Date[T]) Append(v ...time.Time) {
method Array (line 127) | func (c *Date[T]) Array() *Array[time.Time] {
method Nullable (line 132) | func (c *Date[T]) Nullable() *Nullable[time.Time] {
method LC (line 137) | func (c *Date[T]) LC() *LowCardinality[time.Time] {
method LowCardinality (line 142) | func (c *Date[T]) LowCardinality() *LowCardinality[time.Time] {
method Elem (line 146) | func (c *Date[T]) Elem(arrayLevel int, nullable, lc bool) ColumnBasic {
FILE: column/date_test.go
function TestDate (line 17) | func TestDate(t *testing.T) {
function TestDate32 (line 26) | func TestDate32(t *testing.T) {
function TestDateTime (line 36) | func TestDateTime(t *testing.T) {
function TestDateTimeTimezone (line 45) | func TestDateTimeTimezone(t *testing.T) {
function TestDateTime64 (line 59) | func TestDateTime64(t *testing.T) {
function testDateColumn (line 73) | func testDateColumn[T column.DateType[T]](
function TestInvalidNegativeTimes (line 507) | func TestInvalidNegativeTimes(t *testing.T) {
FILE: column/error_test.go
function TestInsertColumnLowCardinalityError (line 19) | func TestInsertColumnLowCardinalityError(t *testing.T) {
function TestSelectReadLCError (line 100) | func TestSelectReadLCError(t *testing.T) {
function TestInsertColumnArrayError (line 198) | func TestInsertColumnArrayError(t *testing.T) {
function TestSelectReadArrayError (line 259) | func TestSelectReadArrayError(t *testing.T) {
function TestInsertColumnArrayNullable (line 327) | func TestInsertColumnArrayNullable(t *testing.T) {
function TestSelectReadArrayNullableError (line 393) | func TestSelectReadArrayNullableError(t *testing.T) {
function TestSelectReadNullableError (line 446) | func TestSelectReadNullableError(t *testing.T) {
function TestInsertColumnArray2Error (line 494) | func TestInsertColumnArray2Error(t *testing.T) {
function TestSelectReadArray2Error (line 555) | func TestSelectReadArray2Error(t *testing.T) {
function TestInsertColumnArray3Error (line 622) | func TestInsertColumnArray3Error(t *testing.T) {
function TestSelectReadArray3Error (line 683) | func TestSelectReadArray3Error(t *testing.T) {
function TestInsertColumnTupleError (line 751) | func TestInsertColumnTupleError(t *testing.T) {
function TestSelectReadTupleError (line 813) | func TestSelectReadTupleError(t *testing.T) {
function TestInsertColumnMapError (line 891) | func TestInsertColumnMapError(t *testing.T) {
function TestSelectReadMapError (line 963) | func TestSelectReadMapError(t *testing.T) {
function TestInvalidType (line 1059) | func TestInvalidType(t *testing.T) {
function TestMapInvalidColumnNumber (line 1300) | func TestMapInvalidColumnNumber(t *testing.T) {
function TestFixedStringInvalidType (line 1307) | func TestFixedStringInvalidType(t *testing.T) {
function TestEnum8InvalidType (line 1314) | func TestEnum8InvalidType(t *testing.T) {
function TestEnum16InvalidType (line 1320) | func TestEnum16InvalidType(t *testing.T) {
function TestDecimalInvalidType (line 1328) | func TestDecimalInvalidType(t *testing.T) {
function TestInvalidDate (line 1347) | func TestInvalidDate(t *testing.T) {
function TestInvalidSimpleAggregateFunction (line 1355) | func TestInvalidSimpleAggregateFunction(t *testing.T) {
FILE: column/errors.go
type ErrInvalidType (line 7) | type ErrInvalidType struct
method Error (line 12) | func (e ErrInvalidType) Error() string {
FILE: column/helper_test.go
type readErrorHelper (line 7) | type readErrorHelper struct
method Read (line 14) | func (r *readErrorHelper) Read(p []byte) (int, error) {
type writerErrorHelper (line 22) | type writerErrorHelper struct
method Write (line 29) | func (w *writerErrorHelper) Write(p []byte) (int, error) {
FILE: column/lc.go
constant hasAdditionalKeysBit (line 17) | hasAdditionalKeysBit = 1 << 9
constant needUpdateDictionary (line 20) | needUpdateDictionary = 1 << 10
constant serializationType (line 22) | serializationType = hasAdditionalKeysBit | needUpdateDictionary
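The three constants above combine into the LowCardinality serialization header as a simple bitmask; a quick check of the resulting value, using the exact definitions listed:

```go
package main

import "fmt"

const (
	hasAdditionalKeysBit = 1 << 9  // 512
	needUpdateDictionary = 1 << 10 // 1024
	// serializationType ORs both flags, as in column/lc.go line 22.
	serializationType = hasAdditionalKeysBit | needUpdateDictionary
)

func main() {
	fmt.Println(serializationType) // 1536
}
```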
type LowCardinality (line 26) | type LowCardinality struct
function NewLowCardinality (line 41) | func NewLowCardinality[T comparable](dictColumn Column[T]) *LowCardinali...
function NewLC (line 46) | func NewLC[T comparable](dictColumn Column[T]) *LowCardinality[T] {
method Data (line 57) | func (c *LowCardinality[T]) Data() []T {
method Read (line 66) | func (c *LowCardinality[T]) Read(value []T) []T {
method Row (line 75) | func (c *LowCardinality[T]) Row(row int) T {
method Append (line 80) | func (c *LowCardinality[T]) Append(v ...T) {
method Dicts (line 95) | func (c *LowCardinality[T]) Dicts() []T {
method Keys (line 101) | func (c *LowCardinality[T]) Keys() []int {
method NumRow (line 106) | func (c *LowCardinality[T]) NumRow() int {
method Array (line 111) | func (c *LowCardinality[T]) Array() *Array[T] {
method Reset (line 121) | func (c *LowCardinality[T]) Reset() {
method SetWriteBufferSize (line 133) | func (c *LowCardinality[T]) SetWriteBufferSize(row int) {
method ReadRaw (line 140) | func (c *LowCardinality[T]) ReadRaw(num int, r *readerwriter.Reader) err...
method HeaderReader (line 190) | func (c *LowCardinality[T]) HeaderReader(r *readerwriter.Reader, readCol...
method ColumnType (line 210) | func (c *LowCardinality[T]) ColumnType() string {
method Validate (line 217) | func (c *LowCardinality[T]) Validate() error {
method WriteTo (line 244) | func (c *LowCardinality[T]) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 295) | func (c *LowCardinality[T]) HeaderWriter(w *readerwriter.Writer) {
function getLCIndicate (line 300) | func getLCIndicate(intType int) indicesColumnI {
method writeUint64 (line 315) | func (c *LowCardinality[T]) writeUint64(w io.Writer, v uint64) (int, err...
method elem (line 327) | func (c *LowCardinality[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/lc_indices.go
type indicesColumnI (line 10) | type indicesColumnI interface
type indicatedTypes (line 18) | type indicatedTypes interface
type indicesColumn (line 22) | type indicesColumn struct
function newIndicesColumn (line 26) | func newIndicesColumn[T indicatedTypes]() *indicesColumn[T] {
method readInt (line 36) | func (c *indicesColumn[T]) readInt(value *[]int) {
method appendInts (line 44) | func (c *indicesColumn[T]) appendInts(values []int) {
FILE: column/lc_nullable.go
type LowCardinalityNullable (line 4) | type LowCardinalityNullable struct
function NewLowCardinalityNullable (line 9) | func NewLowCardinalityNullable[T comparable](dictColumn Column[T]) *LowC...
function NewLCNullable (line 14) | func NewLCNullable[T comparable](dictColumn Column[T]) *LowCardinalityNu...
method DataP (line 30) | func (c *LowCardinalityNullable[T]) DataP() []*T {
method ReadP (line 44) | func (c *LowCardinalityNullable[T]) ReadP(value []*T) []*T {
method RowP (line 58) | func (c *LowCardinalityNullable[T]) RowP(row int) *T {
method Append (line 67) | func (c *LowCardinalityNullable[T]) Append(v ...T) {
method AppendNil (line 82) | func (c *LowCardinalityNullable[T]) AppendNil() {
method AppendP (line 90) | func (c *LowCardinalityNullable[T]) AppendP(v ...*T) {
method Array (line 109) | func (c *LowCardinalityNullable[T]) Array() *ArrayNullable[T] {
method Reset (line 119) | func (c *LowCardinalityNullable[T]) Reset() {
method elem (line 125) | func (c *LowCardinalityNullable[T]) elem(arrayLevel int) ColumnBasic {
FILE: column/lc_test.go
function TestLcIndicator16 (line 15) | func TestLcIndicator16(t *testing.T) {
function TestLcIndicator32 (line 86) | func TestLcIndicator32(t *testing.T) {
FILE: column/map.go
type Map (line 5) | type Map struct
function NewMap (line 12) | func NewMap[K comparable, V any](
method Data (line 31) | func (c *Map[K, V]) Data() []map[K]V {
method Read (line 52) | func (c *Map[K, V]) Read(value []map[K]V) []map[K]V {
method Row (line 58) | func (c *Map[K, V]) Row(row int) map[K]V {
method Append (line 75) | func (c *Map[K, V]) Append(v map[K]V) {
method getKeyColumnData (line 83) | func (c *Map[K, V]) getKeyColumnData() []K {
method getValueColumnData (line 89) | func (c *Map[K, V]) getValueColumnData() []V {
method KeyColumn (line 97) | func (c *Map[K, V]) KeyColumn() Column[K] {
method ValueColumn (line 102) | func (c *Map[K, V]) ValueColumn() Column[V] {
FILE: column/map_base.go
type MapBase (line 17) | type MapBase struct
function NewMapBase (line 27) | func NewMapBase(
method Each (line 54) | func (c *MapBase) Each(f func(start, end uint64) bool) {
method AppendLen (line 69) | func (c *MapBase) AppendLen(v int) {
method NumRow (line 75) | func (c *MapBase) NumRow() int {
method Reset (line 85) | func (c *MapBase) Reset() {
method Offsets (line 93) | func (c *MapBase) Offsets() []uint64 {
method TotalRows (line 98) | func (c *MapBase) TotalRows() int {
method SetWriteBufferSize (line 108) | func (c *MapBase) SetWriteBufferSize(row int) {
method ReadRaw (line 115) | func (c *MapBase) ReadRaw(num int, r *readerwriter.Reader) error {
method KeyColumn (line 138) | func (c *MapBase) KeyColumn() ColumnBasic {
method ValueColumn (line 143) | func (c *MapBase) ValueColumn() ColumnBasic {
method HeaderReader (line 149) | func (c *MapBase) HeaderReader(r *readerwriter.Reader, readColumn bool...
method Validate (line 170) | func (c *MapBase) Validate() error {
method ColumnType (line 206) | func (c *MapBase) ColumnType() string {
method WriteTo (line 214) | func (c *MapBase) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 236) | func (c *MapBase) HeaderWriter(w *readerwriter.Writer) {
FILE: column/map_nullable.go
type MapNullable (line 7) | type MapNullable struct
function NewMapNullable (line 15) | func NewMapNullable[K comparable, V any](
method DataP (line 33) | func (c *MapNullable[T, V]) DataP() []map[T]*V {
method ReadP (line 50) | func (c *MapNullable[T, V]) ReadP(value []map[T]*V) []map[T]*V {
method RowP (line 56) | func (c *MapNullable[T, V]) RowP(row int) map[T]*V {
method AppendP (line 70) | func (c *MapNullable[K, V]) AppendP(v map[K]*V) {
method ReadRaw (line 79) | func (c *MapNullable[K, V]) ReadRaw(num int, r *readerwriter.Reader) err...
method ValueColumn (line 92) | func (c *MapNullable[K, V]) ValueColumn() NullableColumn[V] {
FILE: column/map_test.go
function TestMapUint8 (line 15) | func TestMapUint8(t *testing.T) {
function TestMapUint16 (line 29) | func TestMapUint16(t *testing.T) {
function TestMapUint32 (line 43) | func TestMapUint32(t *testing.T) {
function TestMapUint64 (line 57) | func TestMapUint64(t *testing.T) {
function TestMapInt8 (line 70) | func TestMapInt8(t *testing.T) {
function TestMapInt16 (line 84) | func TestMapInt16(t *testing.T) {
function TestMapInt32 (line 98) | func TestMapInt32(t *testing.T) {
function TestMapInt64 (line 112) | func TestMapInt64(t *testing.T) {
function TestMapFloat32 (line 126) | func TestMapFloat32(t *testing.T) {
function TestMapFloat64 (line 140) | func TestMapFloat64(t *testing.T) {
function testMapColumn (line 154) | func testMapColumn[V comparable](
function TestMapEmptyResult (line 556) | func TestMapEmptyResult(t *testing.T) {
function TestMapEmpty (line 589) | func TestMapEmpty(t *testing.T) {
FILE: column/nested.go
function NewNested (line 6) | func NewNested(columns ...ColumnBasic) *ArrayBase {
FILE: column/nested_test.go
function TestNestedNoFlattened (line 16) | func TestNestedNoFlattened(t *testing.T) {
FILE: column/nullable.go
type appendEmptyInterface (line 13) | type appendEmptyInterface interface
type Nullable (line 18) | type Nullable struct
function NewNullable (line 27) | func NewNullable[T comparable](dataColumn Column[T]) *Nullable[T] {
method Data (line 36) | func (c *Nullable[T]) Data() []T {
method DataP (line 44) | func (c *Nullable[T]) DataP() []*T {
method Read (line 59) | func (c *Nullable[T]) Read(value []T) []T {
method ReadP (line 67) | func (c *Nullable[T]) ReadP(value []*T) []*T {
method Row (line 75) | func (c *Nullable[T]) Row(i int) T {
method RowP (line 83) | func (c *Nullable[T]) RowP(row int) *T {
method ReadNil (line 92) | func (c *Nullable[T]) ReadNil(value []bool) []bool {
method DataNil (line 97) | func (c *Nullable[T]) DataNil() []bool {
method RowIsNil (line 102) | func (c *Nullable[T]) RowIsNil(row int) bool {
method Append (line 107) | func (c *Nullable[T]) Append(v ...T) {
method AppendP (line 115) | func (c *Nullable[T]) AppendP(v ...*T) {
method AppendNil (line 126) | func (c *Nullable[T]) AppendNil() {
method NumRow (line 132) | func (c *Nullable[T]) NumRow() int {
method Array (line 137) | func (c *Nullable[T]) Array() *ArrayNullable[T] {
method LC (line 142) | func (c *Nullable[T]) LC() *LowCardinalityNullable[T] {
method LowCardinality (line 147) | func (c *Nullable[T]) LowCardinality() *LowCardinalityNullable[T] {
method Reset (line 157) | func (c *Nullable[T]) Reset() {
method SetWriteBufferSize (line 167) | func (c *Nullable[T]) SetWriteBufferSize(row int) {
method ReadRaw (line 175) | func (c *Nullable[T]) ReadRaw(num int, r *readerwriter.Reader) error {
method readBuffer (line 187) | func (c *Nullable[T]) readBuffer() error {
method HeaderReader (line 202) | func (c *Nullable[T]) HeaderReader(r *readerwriter.Reader, readColumn bo...
method Validate (line 211) | func (c *Nullable[T]) Validate() error {
method ColumnType (line 227) | func (c *Nullable[T]) ColumnType() string {
method WriteTo (line 233) | func (c *Nullable[T]) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 245) | func (c *Nullable[T]) HeaderWriter(w *readerwriter.Writer) {
method elem (line 248) | func (c *Nullable[T]) elem(arrayLevel int, lc bool) ColumnBasic {
FILE: column/nullable_test.go
function TestNullableAsNormal (line 15) | func TestNullableAsNormal(t *testing.T) {
FILE: column/point.go
function NewPoint (line 5) | func NewPoint() *Tuple2[types.Point, float64, float64] {
FILE: column/size.go
constant Uint8Size (line 5) | Uint8Size = 1
constant Uint16Size (line 7) | Uint16Size = 2
constant Uint32Size (line 9) | Uint32Size = 4
constant Uint64Size (line 11) | Uint64Size = 8
constant Uint128Size (line 13) | Uint128Size = 16
constant Uint256Size (line 15) | Uint256Size = 32
constant Int8Size (line 17) | Int8Size = 1
constant Int16Size (line 19) | Int16Size = 2
constant Int32Size (line 21) | Int32Size = 4
constant Int64Size (line 23) | Int64Size = 8
constant Int128Size (line 25) | Int128Size = 16
constant Int256Size (line 27) | Int256Size = 32
constant Float32Size (line 29) | Float32Size = 4
constant Float64Size (line 31) | Float64Size = 8
constant DateSize (line 33) | DateSize = 2
constant Date32Size (line 35) | Date32Size = 4
constant DatetimeSize (line 37) | DatetimeSize = 4
constant Datetime64Size (line 39) | Datetime64Size = 8
constant IPv4Size (line 41) | IPv4Size = 4
constant IPv6Size (line 43) | IPv6Size = 16
constant Decimal32Size (line 45) | Decimal32Size = 4
constant Decimal64Size (line 47) | Decimal64Size = 8
constant Decimal128Size (line 49) | Decimal128Size = 16
constant Decimal256Size (line 51) | Decimal256Size = 32
constant ArraylenSize (line 53) | ArraylenSize = 8
constant MaplenSize (line 55) | MaplenSize = 8
constant UUIDSize (line 57) | UUIDSize = 16
FILE: column/string.go
type String (line 4) | type String struct
function NewString (line 9) | func NewString() *String {
method Elem (line 13) | func (c *String) Elem(arrayLevel int, nullable, lc bool) ColumnBasic {
FILE: column/string_base.go
type stringPos (line 11) | type stringPos struct
type StringBase (line 17) | type StringBase struct
function NewStringBase (line 26) | func NewStringBase[T ~string]() *StringBase[T] {
method Data (line 31) | func (c *StringBase[T]) Data() []T {
method DataBytes (line 40) | func (c *StringBase[T]) DataBytes() [][]byte {
method Read (line 45) | func (c *StringBase[T]) Read(value []T) []T {
method ReadBytes (line 61) | func (c *StringBase[T]) ReadBytes(value [][]byte) [][]byte {
method Row (line 79) | func (c *StringBase[T]) Row(row int) T {
method RowBytes (line 86) | func (c *StringBase[T]) RowBytes(row int) []byte {
method Each (line 91) | func (c *StringBase[T]) Each(f func(i int, b []byte) bool) {
method appendLen (line 99) | func (c *StringBase[T]) appendLen(x int) {
method Append (line 110) | func (c *StringBase[T]) Append(v ...T) {
method AppendBytes (line 119) | func (c *StringBase[T]) AppendBytes(v ...[]byte) {
method NumRow (line 128) | func (c *StringBase[T]) NumRow() int {
method Array (line 133) | func (c *StringBase[T]) Array() *Array[T] {
method Nullable (line 138) | func (c *StringBase[T]) Nullable() *Nullable[T] {
method LC (line 143) | func (c *StringBase[T]) LC() *LowCardinality[T] {
method LowCardinality (line 148) | func (c *StringBase[T]) LowCardinality() *LowCardinality[T] {
method Reset (line 157) | func (c *StringBase[T]) Reset() {
method SetWriteBufferSize (line 167) | func (c *StringBase[T]) SetWriteBufferSize(b int) {
method ReadRaw (line 174) | func (c *StringBase[T]) ReadRaw(num int, r *readerwriter.Reader) error {
method HeaderReader (line 200) | func (c *StringBase[T]) HeaderReader(r *readerwriter.Reader, readColumn ...
method Validate (line 205) | func (c *StringBase[T]) Validate() error {
method ColumnType (line 215) | func (c *StringBase[T]) ColumnType() string {
method WriteTo (line 221) | func (c *StringBase[T]) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 228) | func (c *StringBase[T]) HeaderWriter(w *readerwriter.Writer) {
method appendEmpty (line 231) | func (c *StringBase[T]) appendEmpty() {
method Elem (line 236) | func (c *StringBase[T]) Elem(arrayLevel int, nullable, lc bool) ColumnBa...
FILE: column/string_test.go
function TestString (line 16) | func TestString(t *testing.T) {
FILE: column/tuple.go
type Tuple (line 16) | type Tuple struct
function NewTuple (line 26) | func NewTuple(columns ...ColumnBasic) *Tuple {
method NumRow (line 36) | func (c *Tuple) NumRow() int {
method Array (line 41) | func (c *Tuple) Array() *ArrayBase {
method Reset (line 51) | func (c *Tuple) Reset() {
method SetWriteBufferSize (line 60) | func (c *Tuple) SetWriteBufferSize(row int) {
method ReadRaw (line 67) | func (c *Tuple) ReadRaw(num int, r *readerwriter.Reader) error {
method HeaderReader (line 79) | func (c *Tuple) HeaderReader(r *readerwriter.Reader, readColumn bool, ...
method Columns (line 97) | func (c *Tuple) Columns() []ColumnBasic {
method Validate (line 101) | func (c *Tuple) Validate() error {
method ColumnType (line 139) | func (c *Tuple) ColumnType() string {
method WriteTo (line 149) | func (c *Tuple) WriteTo(w io.Writer) (int64, error) {
method HeaderWriter (line 163) | func (c *Tuple) HeaderWriter(w *readerwriter.Writer) {
method Elem (line 169) | func (c *Tuple) Elem(arrayLevel int) ColumnBasic {
FILE: column/tuple1.go
type Tuple1 (line 4) | type Tuple1 struct
function NewTuple1 (line 10) | func NewTuple1[T1 any](
function NewNested1 (line 26) | func NewNested1[T any](
method Data (line 35) | func (c *Tuple1[T]) Data() []T {
method Read (line 40) | func (c *Tuple1[T]) Read(value []T) []T {
method Row (line 46) | func (c *Tuple1[T]) Row(row int) T {
method Append (line 51) | func (c *Tuple1[T]) Append(v ...T) {
method Array (line 56) | func (c *Tuple1[T]) Array() *Array[T] {
FILE: column/tuple2_gen.go
type tuple2Value (line 7) | type tuple2Value struct
type Tuple2 (line 13) | type Tuple2 struct
function NewTuple2 (line 23) | func NewTuple2[T ~struct {
function NewNested2 (line 45) | func NewNested2[T ~struct {
method Data (line 59) | func (c *Tuple2[T, T1, T2]) Data() []T {
method Read (line 71) | func (c *Tuple2[T, T1, T2]) Read(value []T) []T {
method Row (line 89) | func (c *Tuple2[T, T1, T2]) Row(row int) T {
method Append (line 97) | func (c *Tuple2[T, T1, T2]) Append(v ...T) {
method Array (line 106) | func (c *Tuple2[T, T1, T2]) Array() *Array[T] {
FILE: column/tuple3_gen.go
type tuple3Value (line 7) | type tuple3Value struct
type Tuple3 (line 14) | type Tuple3 struct
function NewTuple3 (line 26) | func NewTuple3[T ~struct {
function NewNested3 (line 52) | func NewNested3[T ~struct {
method Data (line 69) | func (c *Tuple3[T, T1, T2, T3]) Data() []T {
method Read (line 82) | func (c *Tuple3[T, T1, T2, T3]) Read(value []T) []T {
method Row (line 101) | func (c *Tuple3[T, T1, T2, T3]) Row(row int) T {
method Append (line 110) | func (c *Tuple3[T, T1, T2, T3]) Append(v ...T) {
method Array (line 120) | func (c *Tuple3[T, T1, T2, T3]) Array() *Array[T] {
FILE: column/tuple4_gen.go
type tuple4Value (line 7) | type tuple4Value struct
type Tuple4 (line 15) | type Tuple4 struct
function NewTuple4 (line 29) | func NewTuple4[T ~struct {
function NewNested4 (line 59) | func NewNested4[T ~struct {
method Data (line 79) | func (c *Tuple4[T, T1, T2, T3, T4]) Data() []T {
method Read (line 93) | func (c *Tuple4[T, T1, T2, T3, T4]) Read(value []T) []T {
method Row (line 113) | func (c *Tuple4[T, T1, T2, T3, T4]) Row(row int) T {
method Append (line 123) | func (c *Tuple4[T, T1, T2, T3, T4]) Append(v ...T) {
method Array (line 134) | func (c *Tuple4[T, T1, T2, T3, T4]) Array() *Array[T] {
FILE: column/tuple5_gen.go
type tuple5Value (line 7) | type tuple5Value struct
type Tuple5 (line 16) | type Tuple5 struct
function NewTuple5 (line 32) | func NewTuple5[T ~struct {
function NewNested5 (line 66) | func NewNested5[T ~struct {
method Data (line 89) | func (c *Tuple5[T, T1, T2, T3, T4, T5]) Data() []T {
method Read (line 104) | func (c *Tuple5[T, T1, T2, T3, T4, T5]) Read(value []T) []T {
method Row (line 125) | func (c *Tuple5[T, T1, T2, T3, T4, T5]) Row(row int) T {
method Append (line 136) | func (c *Tuple5[T, T1, T2, T3, T4, T5]) Append(v ...T) {
method Array (line 148) | func (c *Tuple5[T, T1, T2, T3, T4, T5]) Array() *Array[T] {
FILE: column/tuple_test.go
function TestTuple (line 16) | func TestTuple(t *testing.T) {
function TestTupleNoColumn (line 346) | func TestTupleNoColumn(t *testing.T) {
function TestGeo (line 350) | func TestGeo(t *testing.T) {
FILE: column/tuples_test.go
function TestTuples (line 16) | func TestTuples(t *testing.T) {
FILE: config.go
constant defaultUsername (line 17) | defaultUsername = "default"
constant defaultDatabase (line 18) | defaultDatabase = "default"
constant defaultDBPort (line 19) | defaultDBPort = "9000"
constant defaultClientName (line 20) | defaultClientName = "chx"
type CompressMethod (line 23) | type CompressMethod
constant CompressNone (line 27) | CompressNone CompressMethod = 0x00
constant CompressChecksum (line 28) | CompressChecksum CompressMethod = 0x02
constant CompressLZ4 (line 29) | CompressLZ4 CompressMethod = 0x82
constant CompressZSTD (line 30) | CompressZSTD CompressMethod = 0x90
type AfterConnectFunc (line 35) | type AfterConnectFunc
type ValidateConnectFunc (line 40) | type ValidateConnectFunc
type Config (line 44) | type Config struct
method Copy (line 83) | func (c *Config) Copy() *Config {
method ConnString (line 110) | func (c *Config) ConnString() string { return c.connString }
type FallbackConfig (line 114) | type FallbackConfig struct
function NetworkAddress (line 122) | func NetworkAddress(host string, port uint16) (network, address string) {
function ParseConfig (line 183) | func ParseConfig(connString string) (*Config, error) {
function defaultSettings (line 313) | func defaultSettings() map[string]string {
function mergeSettings (line 326) | func mergeSettings(settingSets ...map[string]string) map[string]string {
function parseEnvSettings (line 338) | func parseEnvSettings() map[string]string {
function parseURLSettings (line 365) | func parseURLSettings(connString string) (map[string]string, error) {
function isIPOnly (line 429) | func isIPOnly(host string) bool {
function parseDSNSettings (line 435) | func parseDSNSettings(s string) (map[string]string, error) {
function configTLS (line 512) | func configTLS(settings map[string]string, thisHost string) ([]*tls.Conf...
function parsePort (line 622) | func parsePort(s string) (uint16, error) {
function makeDefaultDialer (line 633) | func makeDefaultDialer() *net.Dialer {
function makeDefaultResolver (line 637) | func makeDefaultResolver() *net.Resolver {
function parseConnectTimeoutSetting (line 641) | func parseConnectTimeoutSetting(s string) (time.Duration, error) {
function makeConnectTimeoutDialFunc (line 652) | func makeConnectTimeoutDialFunc(timeout time.Duration) DialFunc {
FILE: config_test.go
function TestParseConfig (line 544) | func TestParseConfig(t *testing.T) {
function TestParseConfigDSNWithTrailingEmptyEqualDoesNotPanic (line 557) | func TestParseConfigDSNWithTrailingEmptyEqualDoesNotPanic(t *testing.T) {
function TestParseConfigDSNLeadingEqual (line 562) | func TestParseConfigDSNLeadingEqual(t *testing.T) {
function TestParseConfigDSNTrailingBackslash (line 567) | func TestParseConfigDSNTrailingBackslash(t *testing.T) {
function TestConfigCopyReturnsEqualConfig (line 573) | func TestConfigCopyReturnsEqualConfig(t *testing.T) {
function TestConfigCopyOriginalConfigDidNotChange (line 582) | func TestConfigCopyOriginalConfigDidNotChange(t *testing.T) {
function TestConfigCopyCanBeUsedToConnect (line 597) | func TestConfigCopyCanBeUsedToConnect(t *testing.T) {
function assertConfigsEqual (line 609) | func assertConfigsEqual(t *testing.T, expected, actual *Config, testName...
function TestParseConfigEnv (line 677) | func TestParseConfigEnv(t *testing.T) {
function TestParseConfigError (line 758) | func TestParseConfigError(t *testing.T) {
FILE: doc_test.go
function Example (line 13) | func Example() {
FILE: errors.go
type ChError (line 43) | type ChError struct
method read (line 51) | func (e *ChError) read(r *readerwriter.Reader) error {
method Unwrap (line 86) | func (e *ChError) Unwrap() error {
method Error (line 91) | func (e *ChError) Error() string {
function preferContextOverNetTimeoutError (line 100) | func preferContextOverNetTimeoutError(ctx context.Context, err error) er...
type errTimeout (line 118) | type errTimeout struct
method Error (line 123) | func (e *errTimeout) Error() string {
method Unwrap (line 130) | func (e *errTimeout) Unwrap() error {
type contextAlreadyDoneError (line 134) | type contextAlreadyDoneError struct
method Error (line 138) | func (e *contextAlreadyDoneError) Error() string {
method Unwrap (line 142) | func (e *contextAlreadyDoneError) Unwrap() error {
function newContextAlreadyDoneError (line 147) | func newContextAlreadyDoneError(ctx context.Context) (err error) {
type unexpectedPacket (line 153) | type unexpectedPacket struct
method Error (line 158) | func (e *unexpectedPacket) Error() string {
type notImplementedPacket (line 162) | type notImplementedPacket struct
method Error (line 166) | func (e *notImplementedPacket) Error() string {
type connectError (line 170) | type connectError struct
method Error (line 176) | func (e *connectError) Error() string {
method Unwrap (line 185) | func (e *connectError) Unwrap() error {
type connLockError (line 189) | type connLockError struct
method Error (line 193) | func (e *connLockError) Error() string {
type parseConfigError (line 197) | type parseConfigError struct
method Error (line 203) | func (e *parseConfigError) Error() string {
method Unwrap (line 211) | func (e *parseConfigError) Unwrap() error {
type readError (line 215) | type readError struct
method Error (line 220) | func (e *readError) Error() string {
method Unwrap (line 224) | func (e *readError) Unwrap() error {
type writeError (line 228) | type writeError struct
method Error (line 233) | func (e *writeError) Error() string {
method Unwrap (line 237) | func (e *writeError) Unwrap() error {
function redactPW (line 241) | func redactPW(connString string) string {
function redactURL (line 256) | func redactURL(u *url.URL) string {
type InsertError (line 267) | type InsertError struct
method Error (line 273) | func (e *InsertError) Error() string {
method Unwrap (line 278) | func (e *InsertError) Unwrap() error {
type ColumnNumberReadError (line 283) | type ColumnNumberReadError struct
method Error (line 288) | func (e *ColumnNumberReadError) Error() string {
type ColumnNumberWriteError (line 293) | type ColumnNumberWriteError struct
method Error (line 298) | func (e *ColumnNumberWriteError) Error() string {
type NumberWriteError (line 303) | type NumberWriteError struct
method Error (line 310) | func (e *NumberWriteError) Error() string {
type ColumnNotFoundError (line 315) | type ColumnNotFoundError struct
method Error (line 319) | func (e *ColumnNotFoundError) Error() string {
FILE: errors_ch_code.go
type ChErrorType (line 3) | type ChErrorType
constant ChErrorOk (line 6) | ChErrorOk ChErrorType = 0
constant ChErrorUnsupportedMethod (line 7) | ChErrorUnsupportedMethod ChErrorType = 1
constant ChErrorUnsupportedParameter (line 8) | ChErrorUnsupportedParameter ChErrorType = 2
constant ChErrorUnexpectedEndOfFile (line 9) | ChErrorUnexpectedEndOfFile ChErrorType = 3
constant ChErrorExpectedEndOfFile (line 10) | ChErrorExpectedEndOfFile ChErrorType = 4
constant ChErrorCannotParseText (line 11) | ChErrorCannotParseText ChErrorType = 6
constant ChErrorIncorrectNumberOfColumns (line 12) | ChErrorIncorrectNumberOfColumns ChErrorType = 7
constant ChErrorThereIsNoColumn (line 13) | ChErrorThereIsNoColumn ChErrorType = 8
constant ChErrorSizesOfColumnsDoesntMatch (line 14) | ChErrorSizesOfColumnsDoesntMatch ChErrorType = 9
constant ChErrorNotFoundColumnInBlock (line 15) | ChErrorNotFoundColumnInBlock ChErrorType = 10
constant ChErrorPositionOutOfBound (line 16) | ChErrorPositionOutOfBound ChErrorType = 11
constant ChErrorParameterOutOfBound (line 17) | ChErrorParameterOutOfBound ChErrorType = 12
constant ChErrorSizesOfColumnsInTupleDoesntMatch (line 18) | ChErrorSizesOfColumnsInTupleDoesntMatch ChErrorType = 13
constant ChErrorDuplicateColumn (line 19) | ChErrorDuplicateColumn ChErrorType = 15
constant ChErrorNoSuchColumnInTable (line 20) | ChErrorNoSuchColumnInTable ChErrorType = 16
constant ChErrorDelimiterInStringLiteralDoesntMatch (line 21) | ChErrorDelimiterInStringLiteralDoesntMatch ChErrorType = 17
constant ChErrorCannotInsertElementIntoConstantColumn (line 22) | ChErrorCannotInsertElementIntoConstantColumn ChErrorType = 18
constant ChErrorSizeOfFixedStringDoesntMatch (line 23) | ChErrorSizeOfFixedStringDoesntMatch ChErrorType = 19
constant ChErrorNumberOfColumnsDoesntMatch (line 24) | ChErrorNumberOfColumnsDoesntMatch ChErrorType = 20
constant ChErrorCannotReadAllDataFromTabSeparatedInput (line 25) | ChErrorCannotReadAllDataFromTabSeparatedInput ChErrorType = 21
constant ChErrorCannotParseAllValueFromTabSeparatedInput (line 26) | ChErrorCannotParseAllValueFromTabSeparatedInput ChErrorType = 22
constant ChErrorCannotReadFromIstream (line 27) | ChErrorCannotReadFromIstream ChErrorType = 23
constant ChErrorCannotWriteToOstream (line 28) | ChErrorCannotWriteToOstream ChErrorType = 24
constant ChErrorCannotParseEscapeSequence (line 29) | ChErrorCannotParseEscapeSequence ChErrorType = 25
constant ChErrorCannotParseQuotedString (line 30) | ChErrorCannotParseQuotedString ChErrorType = 26
constant ChErrorCannotParseInputAssertionFailed (line 31) | ChErrorCannotParseInputAssertionFailed ChErrorType = 27
constant ChErrorCannotPrintFloatOrDoubleNumber (line 32) | ChErrorCannotPrintFloatOrDoubleNumber ChErrorType = 28
constant ChErrorCannotPrintInteger (line 33) | ChErrorCannotPrintInteger ChErrorType = 29
constant ChErrorCannotReadSizeOfCompressedChunk (line 34) | ChErrorCannotReadSizeOfCompressedChunk ChErrorType = 30
constant ChErrorCannotReadCompressedChunk (line 35) | ChErrorCannotReadCompressedChunk ChErrorType = 31
constant ChErrorAttemptToReadAfterEOF (line 36) | ChErrorAttemptToReadAfterEOF ChErrorType = 32
constant ChErrorCannotReadAllData (line 37) | ChErrorCannotReadAllData ChErrorType = 33
constant ChErrorTooManyArgumentsForFunction (line 38) | ChErrorTooManyArgumentsForFunction ChErrorType = 34
constant ChErrorTooFewArgumentsForFunction (line 39) | ChErrorTooFewArgumentsForFunction ChErrorType = 35
constant ChErrorBadArguments (line 40) | ChErrorBadArguments ChErrorType = 36
constant ChErrorUnknownElementInAst (line 41) | ChErrorUnknownElementInAst ChErrorType = 37
constant ChErrorCannotParseDate (line 42) | ChErrorCannotParseDate ChErrorType = 38
constant ChErrorTooLargeSizeCompressed (line 43) | ChErrorTooLargeSizeCompressed ChErrorType = 39
constant ChErrorChecksumDoesntMatch (line 44) | ChErrorChecksumDoesntMatch ChErrorType = 40
constant ChErrorCannotParseDatetime (line 45) | ChErrorCannotParseDatetime ChErrorType = 41
constant ChErrorNumberOfArgumentsDoesntMatch (line 46) | ChErrorNumberOfArgumentsDoesntMatch ChErrorType = 42
constant ChErrorIllegalTypeOfArgument (line 47) | ChErrorIllegalTypeOfArgument ChErrorType = 43
constant ChErrorIllegalColumn (line 48) | ChErrorIllegalColumn ChErrorType = 44
constant ChErrorIllegalNumberOfResultColumns (line 49) | ChErrorIllegalNumberOfResultColumns ChErrorType = 45
constant ChErrorUnknownFunction (line 50) | ChErrorUnknownFunction ChErrorType = 46
constant ChErrorUnknownIdentifier (line 51) | ChErrorUnknownIdentifier ChErrorType = 47
constant ChErrorNotImplemented (line 52) | ChErrorNotImplemented ChErrorType = 48
constant ChErrorLogicalError (line 53) | ChErrorLogicalError ChErrorType = 49
constant ChErrorUnknownType (line 54) | ChErrorUnknownType ChErrorType = 50
constant ChErrorEmptyListOfColumnsQueried (line 55) | ChErrorEmptyListOfColumnsQueried ChErrorType = 51
constant ChErrorColumnQueriedMoreThanOnce (line 56) | ChErrorColumnQueriedMoreThanOnce ChErrorType = 52
constant ChErrorTypeMismatch (line 57) | ChErrorTypeMismatch ChErrorType = 53
constant ChErrorStorageDoesntAllowParameters (line 58) | ChErrorStorageDoesntAllowParameters ChErrorType = 54
constant ChErrorStorageRequiresParameter (line 59) | ChErrorStorageRequiresParameter ChErrorType = 55
constant ChErrorUnknownStorage (line 60) | ChErrorUnknownStorage ChErrorType = 56
constant ChErrorTableAlreadyExists (line 61) | ChErrorTableAlreadyExists ChErrorType = 57
constant ChErrorTableMetadataAlreadyExists (line 62) | ChErrorTableMetadataAlreadyExists ChErrorType = 58
constant ChErrorIllegalTypeOfColumnForFilter (line 63) | ChErrorIllegalTypeOfColumnForFilter ChErrorType = 59
constant ChErrorUnknownTable (line 64) | ChErrorUnknownTable ChErrorType = 60
constant ChErrorOnlyFilterColumnInBlock (line 65) | ChErrorOnlyFilterColumnInBlock ChErrorType = 61
constant ChErrorSyntaxError (line 66) | ChErrorSyntaxError ChErrorType = 62
constant ChErrorUnknownAggregateFunction (line 67) | ChErrorUnknownAggregateFunction ChErrorType = 63
constant ChErrorCannotReadAggregateFunctionFromText (line 68) | ChErrorCannotReadAggregateFunctionFromText ChErrorType = 64
constant ChErrorCannotWriteAggregateFunctionAsText (line 69) | ChErrorCannotWriteAggregateFunctionAsText ChErrorType = 65
constant ChErrorNotAColumn (line 70) | ChErrorNotAColumn ChErrorType = 66
constant ChErrorIllegalKeyOfAggregation (line 71) | ChErrorIllegalKeyOfAggregation ChErrorType = 67
constant ChErrorCannotGetSizeOfField (line 72) | ChErrorCannotGetSizeOfField ChErrorType = 68
constant ChErrorArgumentOutOfBound (line 73) | ChErrorArgumentOutOfBound ChErrorType = 69
constant ChErrorCannotConvertType (line 74) | ChErrorCannotConvertType ChErrorType = 70
constant ChErrorCannotWriteAfterEndOfBuffer (line 75) | ChErrorCannotWriteAfterEndOfBuffer ChErrorType = 71
constant ChErrorCannotParseNumber (line 76) | ChErrorCannotParseNumber ChErrorType = 72
constant ChErrorUnknownFormat (line 77) | ChErrorUnknownFormat ChErrorType = 73
constant ChErrorCannotReadFromFileDescriptor (line 78) | ChErrorCannotReadFromFileDescriptor ChErrorType = 74
constant ChErrorCannotWriteToFileDescriptor (line 79) | ChErrorCannotWriteToFileDescriptor ChErrorType = 75
constant ChErrorCannotOpenFile (line 80) | ChErrorCannotOpenFile ChErrorType = 76
constant ChErrorCannotCloseFile (line 81) | ChErrorCannotCloseFile ChErrorType = 77
constant ChErrorUnknownTypeOfQuery (line 82) | ChErrorUnknownTypeOfQuery ChErrorType = 78
constant ChErrorIncorrectFileName (line 83) | ChErrorIncorrectFileName ChErrorType = 79
constant ChErrorIncorrectQuery (line 84) | ChErrorIncorrectQuery ChErrorType = 80
constant ChErrorUnknownDatabase (line 85) | ChErrorUnknownDatabase ChErrorType = 81
constant ChErrorDatabaseAlreadyExists (line 86) | ChErrorDatabaseAlreadyExists ChErrorType = 82
constant ChErrorDirectoryDoesntExist (line 87) | ChErrorDirectoryDoesntExist ChErrorType = 83
constant ChErrorDirectoryAlreadyExists (line 88) | ChErrorDirectoryAlreadyExists ChErrorType = 84
constant ChErrorFormatIsNotSuitableForInput (line 89) | ChErrorFormatIsNotSuitableForInput ChErrorType = 85
constant ChErrorReceivedErrorFromRemoteIoServer (line 90) | ChErrorReceivedErrorFromRemoteIoServer ChErrorType = 86
constant ChErrorCannotSeekThroughFile (line 91) | ChErrorCannotSeekThroughFile ChErrorType = 87
constant ChErrorCannotTruncateFile (line 92) | ChErrorCannotTruncateFile ChErrorType = 88
constant ChErrorUnknownCompressionMethod (line 93) | ChErrorUnknownCompressionMethod ChErrorType = 89
constant ChErrorEmptyListOfColumnsPassed (line 94) | ChErrorEmptyListOfColumnsPassed ChErrorType = 90
constant ChErrorSizesOfMarksFilesAreInconsistent (line 95) | ChErrorSizesOfMarksFilesAreInconsistent ChErrorType = 91
constant ChErrorEmptyDataPassed (line 96) | ChErrorEmptyDataPassed ChErrorType = 92
constant ChErrorUnknownAggregatedDataVariant (line 97) | ChErrorUnknownAggregatedDataVariant ChErrorType = 93
constant ChErrorCannotMergeDifferentAggregatedDataVariants (line 98) | ChErrorCannotMergeDifferentAggregatedDataVariants ChErrorType = 94
constant ChErrorCannotReadFromSocket (line 99) | ChErrorCannotReadFromSocket ChErrorType = 95
constant ChErrorCannotWriteToSocket (line 100) | ChErrorCannotWriteToSocket ChErrorType = 96
constant ChErrorCannotReadAllDataFromChunkedInput (line 101) | ChErrorCannotReadAllDataFromChunkedInput ChErrorType = 97
constant ChErrorCannotWriteToEmptyBlockOutputStream (line 102) | ChErrorCannotWriteToEmptyBlockOutputStream ChErrorType = 98
constant ChErrorUnknownPacketFromClient (line 103) | ChErrorUnknownPacketFromClient ChErrorType = 99
constant ChErrorUnknownPacketFromServer (line 104) | ChErrorUnknownPacketFromServer ChErrorType = 100
constant ChErrorUnexpectedPacketFromClient (line 105) | ChErrorUnexpectedPacketFromClient ChErrorType = 101
constant ChErrorUnexpectedPacketFromServer (line 106) | ChErrorUnexpectedPacketFromServer ChErrorType = 102
constant ChErrorReceivedDataForWrongQueryID (line 107) | ChErrorReceivedDataForWrongQueryID ChErrorType = 103
constant ChErrorTooSmallBufferSize (line 108) | ChErrorTooSmallBufferSize ChErrorType = 104
constant ChErrorCannotReadHistory (line 109) | ChErrorCannotReadHistory ChErrorType = 105
constant ChErrorCannotAppendHistory (line 110) | ChErrorCannotAppendHistory ChErrorType = 106
constant ChErrorFileDoesntExist (line 111) | ChErrorFileDoesntExist ChErrorType = 107
constant ChErrorNoDataToInsert (line 112) | ChErrorNoDataToInsert ChErrorType = 108
constant ChErrorCannotBlockSignal (line 113) | ChErrorCannotBlockSignal ChErrorType = 109
constant ChErrorCannotUnblockSignal (line 114) | ChErrorCannotUnblockSignal ChErrorType = 110
constant ChErrorCannotManipulateSigset (line 115) | ChErrorCannotManipulateSigset ChErrorType = 111
constant ChErrorCannotWaitForSignal (line 116) | ChErrorCannotWaitForSignal ChErrorType = 112
constant ChErrorThereIsNoSession (line 117) | ChErrorThereIsNoSession ChErrorType = 113
constant ChErrorCannotClockGettime (line 118) | ChErrorCannotClockGettime ChErrorType = 114
constant ChErrorUnknownSetting (line 119) | ChErrorUnknownSetting ChErrorType = 115
constant ChErrorThereIsNoDefaultValue (line 120) | ChErrorThereIsNoDefaultValue ChErrorType = 116
constant ChErrorIncorrectData (line 121) | ChErrorIncorrectData ChErrorType = 117
constant ChErrorEngineRequired (line 122) | ChErrorEngineRequired ChErrorType = 119
constant ChErrorCannotInsertValueOfDifferentSizeIntoTuple (line 123) | ChErrorCannotInsertValueOfDifferentSizeIntoTuple ChErrorType = 120
constant ChErrorUnsupportedJoinKeys (line 124) | ChErrorUnsupportedJoinKeys ChErrorType = 121
constant ChErrorIncompatibleColumns (line 125) | ChErrorIncompatibleColumns ChErrorType = 122
constant ChErrorUnknownTypeOfAstNode (line 126) | ChErrorUnknownTypeOfAstNode ChErrorType = 123
constant ChErrorIncorrectElementOfSet (line 127) | ChErrorIncorrectElementOfSet ChErrorType = 124
constant ChErrorIncorrectResultOfScalarSubquery (line 128) | ChErrorIncorrectResultOfScalarSubquery ChErrorType = 125
constant ChErrorCannotGetReturnType (line 129) | ChErrorCannotGetReturnType ChErrorType = 126
constant ChErrorIllegalIndex (line 130) | ChErrorIllegalIndex ChErrorType = 127
constant ChErrorTooLargeArraySize (line 131) | ChErrorTooLargeArraySize ChErrorType = 128
constant ChErrorFunctionIsSpecial (line 132) | ChErrorFunctionIsSpecial ChErrorType = 129
constant ChErrorCannotReadArrayFromText (line 133) | ChErrorCannotReadArrayFromText ChErrorType = 130
constant ChErrorTooLargeStringSize (line 134) | ChErrorTooLargeStringSize ChErrorType = 131
constant ChErrorAggregateFunctionDoesntAllowParameters (line 135) | ChErrorAggregateFunctionDoesntAllowParameters ChErrorType = 133
constant ChErrorParametersToAggregateFunctionsMustBeLiterals (line 136) | ChErrorParametersToAggregateFunctionsMustBeLiterals ChErrorType = 134
constant ChErrorZeroArrayOrTupleIndex (line 137) | ChErrorZeroArrayOrTupleIndex ChErrorType = 135
constant ChErrorUnknownElementInConfig (line 138) | ChErrorUnknownElementInConfig ChErrorType = 137
constant ChErrorExcessiveElementInConfig (line 139) | ChErrorExcessiveElementInConfig ChErrorType = 138
constant ChErrorNoElementsInConfig (line 140) | ChErrorNoElementsInConfig ChErrorType = 139
constant ChErrorAllRequestedColumnsAreMissing (line 141) | ChErrorAllRequestedColumnsAreMissing ChErrorType = 140
constant ChErrorSamplingNotSupported (line 142) | ChErrorSamplingNotSupported ChErrorType = 141
constant ChErrorNotFoundNode (line 143) | ChErrorNotFoundNode ChErrorType = 142
constant ChErrorFoundMoreThanOneNode (line 144) | ChErrorFoundMoreThanOneNode ChErrorType = 143
constant ChErrorFirstDateIsBiggerThanLastDate (line 145) | ChErrorFirstDateIsBiggerThanLastDate ChErrorType = 144
constant ChErrorUnknownOverflowMode (line 146) | ChErrorUnknownOverflowMode ChErrorType = 145
constant ChErrorQuerySectionDoesntMakeSense (line 147) | ChErrorQuerySectionDoesntMakeSense ChErrorType = 146
constant ChErrorNotFoundFunctionElementForAggregate (line 148) | ChErrorNotFoundFunctionElementForAggregate ChErrorType = 147
constant ChErrorNotFoundRelationElementForCondition (line 149) | ChErrorNotFoundRelationElementForCondition ChErrorType = 148
constant ChErrorNotFoundRhsElementForCondition (line 150) | ChErrorNotFoundRhsElementForCondition ChErrorType = 149
constant ChErrorEmptyListOfAttributesPassed (line 151) | ChErrorEmptyListOfAttributesPassed ChErrorType = 150
constant ChErrorIndexOfColumnInSortClauseIsOutOfRange (line 152) | ChErrorIndexOfColumnInSortClauseIsOutOfRange ChErrorType = 151
constant ChErrorUnknownDirectionOfSorting (line 153) | ChErrorUnknownDirectionOfSorting ChErrorType = 152
constant ChErrorIllegalDivision (line 154) | ChErrorIllegalDivision ChErrorType = 153
constant ChErrorAggregateFunctionNotApplicable (line 155) | ChErrorAggregateFunctionNotApplicable ChErrorType = 154
constant ChErrorUnknownRelation (line 156) | ChErrorUnknownRelation ChErrorType = 155
constant ChErrorDictionariesWasNotLoaded (line 157) | ChErrorDictionariesWasNotLoaded ChErrorType = 156
constant ChErrorIllegalOverflowMode (line 158) | ChErrorIllegalOverflowMode ChErrorType = 157
constant ChErrorTooManyRows (line 159) | ChErrorTooManyRows ChErrorType = 158
constant ChErrorTimeoutExceeded (line 160) | ChErrorTimeoutExceeded ChErrorType = 159
constant ChErrorTooSlow (line 161) | ChErrorTooSlow ChErrorType = 160
constant ChErrorTooManyColumns (line 162) | ChErrorTooManyColumns ChErrorType = 161
constant ChErrorTooDeepSubqueries (line 163) | ChErrorTooDeepSubqueries ChErrorType = 162
constant ChErrorTooDeepPipeline (line 164) | ChErrorTooDeepPipeline ChErrorType = 163
constant ChErrorReadonly (line 165) | ChErrorReadonly ChErrorType = 164
constant ChErrorTooManyTemporaryColumns (line 166) | ChErrorTooManyTemporaryColumns ChErrorType = 165
constant ChErrorTooManyTemporaryNonConstColumns (line 167) | ChErrorTooManyTemporaryNonConstColumns ChErrorType = 166
constant ChErrorTooDeepAst (line 168) | ChErrorTooDeepAst ChErrorType = 167
constant ChErrorTooBigAst (line 169) | ChErrorTooBigAst ChErrorType = 168
constant ChErrorBadTypeOfField (line 170) | ChErrorBadTypeOfField ChErrorType = 169
constant ChErrorBadGet (line 171) | ChErrorBadGet ChErrorType = 170
constant ChErrorCannotCreateDirectory (line 172) | ChErrorCannotCreateDirectory ChErrorType = 172
constant ChErrorCannotAllocateMemory (line 173) | ChErrorCannotAllocateMemory ChErrorType = 173
constant ChErrorCyclicAliases (line 174) | ChErrorCyclicAliases ChErrorType = 174
constant ChErrorChunkNotFound (line 175) | ChErrorChunkNotFound ChErrorType = 176
constant ChErrorDuplicateChunkName (line 176) | ChErrorDuplicateChunkName ChErrorType = 177
constant ChErrorMultipleAliasesForExpression (line 177) | ChErrorMultipleAliasesForExpression ChErrorType = 178
constant ChErrorMultipleExpressionsForAlias (line 178) | ChErrorMultipleExpressionsForAlias ChErrorType = 179
constant ChErrorThereIsNoProfile (line 179) | ChErrorThereIsNoProfile ChErrorType = 180
constant ChErrorIllegalFinal (line 180) | ChErrorIllegalFinal ChErrorType = 181
constant ChErrorIllegalPrewhere (line 181) | ChErrorIllegalPrewhere ChErrorType = 182
constant ChErrorUnexpectedExpression (line 182) | ChErrorUnexpectedExpression ChErrorType = 183
constant ChErrorIllegalAggregation (line 183) | ChErrorIllegalAggregation ChErrorType = 184
constant ChErrorUnsupportedMyisamBlockType (line 184) | ChErrorUnsupportedMyisamBlockType ChErrorType = 185
constant ChErrorUnsupportedCollationLocale (line 185) | ChErrorUnsupportedCollationLocale ChErrorType = 186
constant ChErrorCollationComparisonFailed (line 186) | ChErrorCollationComparisonFailed ChErrorType = 187
constant ChErrorUnknownAction (line 187) | ChErrorUnknownAction ChErrorType = 188
constant ChErrorTableMustNotBeCreatedManually (line 188) | ChErrorTableMustNotBeCreatedManually ChErrorType = 189
constant ChErrorSizesOfArraysDoesntMatch (line 189) | ChErrorSizesOfArraysDoesntMatch ChErrorType = 190
constant ChErrorSetSizeLimitExceeded (line 190) | ChErrorSetSizeLimitExceeded ChErrorType = 191
constant ChErrorUnknownUser (line 191) | ChErrorUnknownUser ChErrorType = 192
constant ChErrorWrongPassword (line 192) | ChErrorWrongPassword ChErrorType = 193
constant ChErrorRequiredPassword (line 193) | ChErrorRequiredPassword ChErrorType = 194
constant ChErrorIPAddressNotAllowed (line 194) | ChErrorIPAddressNotAllowed ChErrorType = 195
constant ChErrorUnknownAddressPatternType (line 195) | ChErrorUnknownAddressPatternType ChErrorType = 196
constant ChErrorServerRevisionIsTooOld (line 196) | ChErrorServerRevisionIsTooOld ChErrorType = 197
constant ChErrorDNSError (line 197) | ChErrorDNSError ChErrorType = 198
constant ChErrorUnknownQuota (line 198) | ChErrorUnknownQuota ChErrorType = 199
constant ChErrorQuotaDoesntAllowKeys (line 199) | ChErrorQuotaDoesntAllowKeys ChErrorType = 200
constant ChErrorQuotaExpired (line 200) | ChErrorQuotaExpired ChErrorType = 201
constant ChErrorTooManySimultaneousQueries (line 201) | ChErrorTooManySimultaneousQueries ChErrorType = 202
constant ChErrorNoFreeConnection (line 202) | ChErrorNoFreeConnection ChErrorType = 203
constant ChErrorCannotFsync (line 203) | ChErrorCannotFsync ChErrorType = 204
constant ChErrorNestedTypeTooDeep (line 204) | ChErrorNestedTypeTooDeep ChErrorType = 205
constant ChErrorAliasRequired (line 205) | ChErrorAliasRequired ChErrorType = 206
constant ChErrorAmbiguousIdentifier (line 206) | ChErrorAmbiguousIdentifier ChErrorType = 207
constant ChErrorEmptyNestedTable (line 207) | ChErrorEmptyNestedTable ChErrorType = 208
constant ChErrorSocketTimeout (line 208) | ChErrorSocketTimeout ChErrorType = 209
constant ChErrorNetworkError (line 209) | ChErrorNetworkError ChErrorType = 210
constant ChErrorEmptyQuery (line 210) | ChErrorEmptyQuery ChErrorType = 211
constant ChErrorUnknownLoadBalancing (line 211) | ChErrorUnknownLoadBalancing ChErrorType = 212
constant ChErrorUnknownTotalsMode (line 212) | ChErrorUnknownTotalsMode ChErrorType = 213
constant ChErrorCannotStatvfs (line 213) | ChErrorCannotStatvfs ChErrorType = 214
constant ChErrorNotAnAggregate (line 214) | ChErrorNotAnAggregate ChErrorType = 215
constant ChErrorQueryWithSameIDIsAlreadyRunning (line 215) | ChErrorQueryWithSameIDIsAlreadyRunning ChErrorType = 216
constant ChErrorClientHasConnectedToWrongPort (line 216) | ChErrorClientHasConnectedToWrongPort ChErrorType = 217
constant ChErrorTableIsDropped (line 217) | ChErrorTableIsDropped ChErrorType = 218
constant ChErrorDatabaseNotEmpty (line 218) | ChErrorDatabaseNotEmpty ChErrorType = 219
constant ChErrorDuplicateInterserverIoEndpoint (line 219) | ChErrorDuplicateInterserverIoEndpoint ChErrorType = 220
constant ChErrorNoSuchInterserverIoEndpoint (line 220) | ChErrorNoSuchInterserverIoEndpoint ChErrorType = 221
constant ChErrorAddingReplicaToNonEmptyTable (line 221) | ChErrorAddingReplicaToNonEmptyTable ChErrorType = 222
constant ChErrorUnexpectedAstStructure (line 222) | ChErrorUnexpectedAstStructure ChErrorType = 223
constant ChErrorReplicaIsAlreadyActive (line 223) | ChErrorReplicaIsAlreadyActive ChErrorType = 224
constant ChErrorNoZookeeper (line 224) | ChErrorNoZookeeper ChErrorType = 225
constant ChErrorNoFileInDataPart (line 225) | ChErrorNoFileInDataPart ChErrorType = 226
constant ChErrorUnexpectedFileInDataPart (line 226) | ChErrorUnexpectedFileInDataPart ChErrorType = 227
constant ChErrorBadSizeOfFileInDataPart (line 227) | ChErrorBadSizeOfFileInDataPart ChErrorType = 228
constant ChErrorQueryIsTooLarge (line 228) | ChErrorQueryIsTooLarge ChErrorType = 229
constant ChErrorNotFoundExpectedDataPart (line 229) | ChErrorNotFoundExpectedDataPart ChErrorType = 230
constant ChErrorTooManyUnexpectedDataParts (line 230) | ChErrorTooManyUnexpectedDataParts ChErrorType = 231
constant ChErrorNoSuchDataPart (line 231) | ChErrorNoSuchDataPart ChErrorType = 232
constant ChErrorBadDataPartName (line 232) | ChErrorBadDataPartName ChErrorType = 233
constant ChErrorNoReplicaHasPart (line 233) | ChErrorNoReplicaHasPart ChErrorType = 234
constant ChErrorDuplicateDataPart (line 234) | ChErrorDuplicateDataPart ChErrorType = 235
constant ChErrorAborted (line 235) | ChErrorAborted ChErrorType = 236
constant ChErrorNoReplicaNameGiven (line 236) | ChErrorNoReplicaNameGiven ChErrorType = 237
constant ChErrorFormatVersionTooOld (line 237) | ChErrorFormatVersionTooOld ChErrorType = 238
constant ChErrorCannotMunmap (line 238) | ChErrorCannotMunmap ChErrorType = 239
constant ChErrorCannotMremap (line 239) | ChErrorCannotMremap ChErrorType = 240
constant ChErrorMemoryLimitExceeded (line 240) | ChErrorMemoryLimitExceeded ChErrorType = 241
constant ChErrorTableIsReadOnly (line 241) | ChErrorTableIsReadOnly ChErrorType = 242
constant ChErrorNotEnoughSpace (line 242) | ChErrorNotEnoughSpace ChErrorType = 243
constant ChErrorUnexpectedZookeeperError (line 243) | ChErrorUnexpectedZookeeperError ChErrorType = 244
constant ChErrorCorruptedData (line 244) | ChErrorCorruptedData ChErrorType = 246
constant ChErrorIncorrectMark (line 245) | ChErrorIncorrectMark ChErrorType = 247
constant ChErrorInvalidPartitionValue (line 246) | ChErrorInvalidPartitionValue ChErrorType = 248
constant ChErrorNotEnoughBlockNumbers (line 247) | ChErrorNotEnoughBlockNumbers ChErrorType = 250
constant ChErrorNoSuchReplica (line 248) | ChErrorNoSuchReplica ChErrorType = 251
constant ChErrorTooManyParts (line 249) | ChErrorTooManyParts ChErrorType = 252
constant ChErrorReplicaIsAlreadyExist (line 250) | ChErrorReplicaIsAlreadyExist ChErrorType = 253
constant ChErrorNoActiveReplicas (line 251) | ChErrorNoActiveReplicas ChErrorType = 254
constant ChErrorTooManyRetriesToFetchParts (line 252) | ChErrorTooManyRetriesToFetchParts ChErrorType = 255
constant ChErrorPartitionAlreadyExists (line 253) | ChErrorPartitionAlreadyExists ChErrorType = 256
constant ChErrorPartitionDoesntExist (line 254) | ChErrorPartitionDoesntExist ChErrorType = 257
constant ChErrorUnionAllResultStructuresMismatch (line 255) | ChErrorUnionAllResultStructuresMismatch ChErrorType = 258
constant ChErrorClientOutputFormatSpecified (line 256) | ChErrorClientOutputFormatSpecified ChErrorType = 260
constant ChErrorUnknownBlockInfoField (line 257) | ChErrorUnknownBlockInfoField ChErrorType = 261
constant ChErrorBadCollation (line 258) | ChErrorBadCollation ChErrorType = 262
constant ChErrorCannotCompileCode (line 259) | ChErrorCannotCompileCode ChErrorType = 263
constant ChErrorIncompatibleTypeOfJoin (line 260) | ChErrorIncompatibleTypeOfJoin ChErrorType = 264
constant ChErrorNoAvailableReplica (line 261) | ChErrorNoAvailableReplica ChErrorType = 265
constant ChErrorMismatchReplicasDataSources (line 262) | ChErrorMismatchReplicasDataSources ChErrorType = 266
constant ChErrorStorageDoesntSupportParallelReplicas (line 263) | ChErrorStorageDoesntSupportParallelReplicas ChErrorType = 267
constant ChErrorCpuidError (line 264) | ChErrorCpuidError ChErrorType = 268
constant ChErrorInfiniteLoop (line 265) | ChErrorInfiniteLoop ChErrorType = 269
constant ChErrorCannotCompress (line 266) | ChErrorCannotCompress ChErrorType = 270
constant ChErrorCannotDecompress (line 267) | ChErrorCannotDecompress ChErrorType = 271
constant ChErrorCannotIoSubmit (line 268) | ChErrorCannotIoSubmit ChErrorType = 272
constant ChErrorCannotIoGetevents (line 269) | ChErrorCannotIoGetevents ChErrorType = 273
constant ChErrorAioReadError (line 270) | ChErrorAioReadError ChErrorType = 274
constant ChErrorAioWriteError (line 271) | ChErrorAioWriteError ChErrorType = 275
constant ChErrorIndexNotUsed (line 272) | ChErrorIndexNotUsed ChErrorType = 277
constant ChErrorAllConnectionTriesFailed (line 273) | ChErrorAllConnectionTriesFailed ChErrorType = 279
constant ChErrorNoAvailableData (line 274) | ChErrorNoAvailableData ChErrorType = 280
constant ChErrorDictionaryIsEmpty (line 275) | ChErrorDictionaryIsEmpty ChErrorType = 281
constant ChErrorIncorrectIndex (line 276) | ChErrorIncorrectIndex ChErrorType = 282
constant ChErrorUnknownDistributedProductMode (line 277) | ChErrorUnknownDistributedProductMode ChErrorType = 283
constant ChErrorWrongGlobalSubquery (line 278) | ChErrorWrongGlobalSubquery ChErrorType = 284
constant ChErrorTooFewLiveReplicas (line 279) | ChErrorTooFewLiveReplicas ChErrorType = 285
constant ChErrorUnsatisfiedQuorumForPreviousWrite (line 280) | ChErrorUnsatisfiedQuorumForPreviousWrite ChErrorType = 286
constant ChErrorUnknownFormatVersion (line 281) | ChErrorUnknownFormatVersion ChErrorType = 287
constant ChErrorDistributedInJoinSubqueryDenied (line 282) | ChErrorDistributedInJoinSubqueryDenied ChErrorType = 288
constant ChErrorReplicaIsNotInQuorum (line 283) | ChErrorReplicaIsNotInQuorum ChErrorType = 289
constant ChErrorLimitExceeded (line 284) | ChErrorLimitExceeded ChErrorType = 290
constant ChErrorDatabaseAccessDenied (line 285) | ChErrorDatabaseAccessDenied ChErrorType = 291
constant ChErrorMongodbCannotAuthenticate (line 286) | ChErrorMongodbCannotAuthenticate ChErrorType = 293
constant ChErrorInvalidBlockExtraInfo (line 287) | ChErrorInvalidBlockExtraInfo ChErrorType = 294
constant ChErrorReceivedEmptyData (line 288) | ChErrorReceivedEmptyData ChErrorType = 295
constant ChErrorNoRemoteShardFound (line 289) | ChErrorNoRemoteShardFound ChErrorType = 296
constant ChErrorShardHasNoConnections (line 290) | ChErrorShardHasNoConnections ChErrorType = 297
constant ChErrorCannotPipe (line 291) | ChErrorCannotPipe ChErrorType = 298
constant ChErrorCannotFork (line 292) | ChErrorCannotFork ChErrorType = 299
constant ChErrorCannotDlsym (line 293) | ChErrorCannotDlsym ChErrorType = 300
constant ChErrorCannotCreateChildProcess (line 294) | ChErrorCannotCreateChildProcess ChErrorType = 301
constant ChErrorChildWasNotExitedNormally (line 295) | ChErrorChildWasNotExitedNormally ChErrorType = 302
constant ChErrorCannotSelect (line 296) | ChErrorCannotSelect ChErrorType = 303
constant ChErrorCannotWaitpid (line 297) | ChErrorCannotWaitpid ChErrorType = 304
constant ChErrorTableWasNotDropped (line 298) | ChErrorTableWasNotDropped ChErrorType = 305
constant ChErrorTooDeepRecursion (line 299) | ChErrorTooDeepRecursion ChErrorType = 306
constant ChErrorTooManyBytes (line 300) | ChErrorTooManyBytes ChErrorType = 307
constant ChErrorUnexpectedNodeInZookeeper (line 301) | ChErrorUnexpectedNodeInZookeeper ChErrorType = 308
constant ChErrorFunctionCannotHaveParameters (line 302) | ChErrorFunctionCannotHaveParameters ChErrorType = 309
constant ChErrorInvalidShardWeight (line 303) | ChErrorInvalidShardWeight ChErrorType = 317
constant ChErrorInvalidConfigParameter (line 304) | ChErrorInvalidConfigParameter ChErrorType = 318
constant ChErrorUnknownStatusOfInsert (line 305) | ChErrorUnknownStatusOfInsert ChErrorType = 319
constant ChErrorValueIsOutOfRangeOfDataType (line 306) | ChErrorValueIsOutOfRangeOfDataType ChErrorType = 321
constant ChErrorBarrierTimeout (line 307) | ChErrorBarrierTimeout ChErrorType = 335
constant ChErrorUnknownDatabaseEngine (line 308) | ChErrorUnknownDatabaseEngine ChErrorType = 336
constant ChErrorDdlGuardIsActive (line 309) | ChErrorDdlGuardIsActive ChErrorType = 337
constant ChErrorUnfinished (line 310) | ChErrorUnfinished ChErrorType = 341
constant ChErrorMetadataMismatch (line 311) | ChErrorMetadataMismatch ChErrorType = 342
constant ChErrorSupportIsDisabled (line 312) | ChErrorSupportIsDisabled ChErrorType = 344
constant ChErrorTableDiffersTooMuch (line 313) | ChErrorTableDiffersTooMuch ChErrorType = 345
constant ChErrorCannotConvertCharset (line 314) | ChErrorCannotConvertCharset ChErrorType = 346
constant ChErrorCannotLoadConfig (line 315) | ChErrorCannotLoadConfig ChErrorType = 347
constant ChErrorCannotInsertNullInOrdinaryColumn (line 316) | ChErrorCannotInsertNullInOrdinaryColumn ChErrorType = 349
constant ChErrorIncompatibleSourceTables (line 317) | ChErrorIncompatibleSourceTables ChErrorType = 350
constant ChErrorAmbiguousTableName (line 318) | ChErrorAmbiguousTableName ChErrorType = 351
constant ChErrorAmbiguousColumnName (line 319) | ChErrorAmbiguousColumnName ChErrorType = 352
constant ChErrorIndexOfPositionalArgumentIsOutOfRange (line 320) | ChErrorIndexOfPositionalArgumentIsOutOfRange ChErrorType = 353
constant ChErrorZlibInflateFailed (line 321) | ChErrorZlibInflateFailed ChErrorType = 354
constant ChErrorZlibDeflateFailed (line 322) | ChErrorZlibDeflateFailed ChErrorType = 355
constant ChErrorBadLambda (line 323) | ChErrorBadLambda ChErrorType = 356
constant ChErrorReservedIdentifierName (line 324) | ChErrorReservedIdentifierName ChErrorType = 357
constant ChErrorIntoOutfileNotAllowed (line 325) | ChErrorIntoOutfileNotAllowed ChErrorType = 358
constant ChErrorTableSizeExceedsMaxDropSizeLimit (line 326) | ChErrorTableSizeExceedsMaxDropSizeLimit ChErrorType = 359
constant ChErrorCannotCreateCharsetConverter (line 327) | ChErrorCannotCreateCharsetConverter ChErrorType = 360
constant ChErrorSeekPositionOutOfBound (line 328) | ChErrorSeekPositionOutOfBound ChErrorType = 361
constant ChErrorCurrentWriteBufferIsExhausted (line 329) | ChErrorCurrentWriteBufferIsExhausted ChErrorType = 362
constant ChErrorCannotCreateIoBuffer (line 330) | ChErrorCannotCreateIoBuffer ChErrorType = 363
constant ChErrorReceivedErrorTooManyRequests (line 331) | ChErrorReceivedErrorTooManyRequests ChErrorType = 364
constant ChErrorSizesOfNestedColumnsAreInconsistent (line 332) | ChErrorSizesOfNestedColumnsAreInconsistent ChErrorType = 366
constant ChErrorTooManyFetches (line 333) | ChErrorTooManyFetches ChErrorType = 367
constant ChErrorAllReplicasAreStale (line 334) | ChErrorAllReplicasAreStale ChErrorType = 369
constant ChErrorDataTypeCannotBeUsedInTables (line 335) | ChErrorDataTypeCannotBeUsedInTables ChErrorType = 370
constant ChErrorInconsistentClusterDefinition (line 336) | ChErrorInconsistentClusterDefinition ChErrorType = 371
constant ChErrorSessionNotFound (line 337) | ChErrorSessionNotFound ChErrorType = 372
constant ChErrorSessionIsLocked (line 338) | ChErrorSessionIsLocked ChErrorType = 373
constant ChErrorInvalidSessionTimeout (line 339) | ChErrorInvalidSessionTimeout ChErrorType = 374
constant ChErrorCannotDlopen (line 340) | ChErrorCannotDlopen ChErrorType = 375
constant ChErrorCannotParseUUID (line 341) | ChErrorCannotParseUUID ChErrorType = 376
constant ChErrorIllegalSyntaxForDataType (line 342) | ChErrorIllegalSyntaxForDataType ChErrorType = 377
constant ChErrorDataTypeCannotHaveArguments (line 343) | ChErrorDataTypeCannotHaveArguments ChErrorType = 378
constant ChErrorUnknownStatusOfDistributedDdlTask (line 344) | ChErrorUnknownStatusOfDistributedDdlTask ChErrorType = 379
constant ChErrorCannotKill (line 345) | ChErrorCannotKill ChErrorType = 380
constant ChErrorHTTPLengthRequired (line 346) | ChErrorHTTPLengthRequired ChErrorType = 381
constant ChErrorCannotLoadCatboostModel (line 347) | ChErrorCannotLoadCatboostModel ChErrorType = 382
constant ChErrorCannotApplyCatboostModel (line 348) | ChErrorCannotApplyCatboostModel ChErrorType = 383
constant ChErrorPartIsTemporarilyLocked (line 349) | ChErrorPartIsTemporarilyLocked ChErrorType = 384
constant ChErrorMultipleStreamsRequired (line 350) | ChErrorMultipleStreamsRequired ChErrorType = 385
constant ChErrorNoCommonType (line 351) | ChErrorNoCommonType ChErrorType = 386
constant ChErrorDictionaryAlreadyExists (line 352) | ChErrorDictionaryAlreadyExists ChErrorType = 387
constant ChErrorCannotAssignOptimize (line 353) | ChErrorCannotAssignOptimize ChErrorType = 388
constant ChErrorInsertWasDeduplicated (line 354) | ChErrorInsertWasDeduplicated ChErrorType = 389
constant ChErrorCannotGetCreateTableQuery (line 355) | ChErrorCannotGetCreateTableQuery ChErrorType = 390
constant ChErrorExternalLibraryError (line 356) | ChErrorExternalLibraryError ChErrorType = 391
constant ChErrorQueryIsProhibited (line 357) | ChErrorQueryIsProhibited ChErrorType = 392
constant ChErrorThereIsNoQuery (line 358) | ChErrorThereIsNoQuery ChErrorType = 393
constant ChErrorQueryWasCancelled (line 359) | ChErrorQueryWasCancelled ChErrorType = 394
constant ChErrorFunctionThrowIfValueIsNonZero (line 360) | ChErrorFunctionThrowIfValueIsNonZero ChErrorType = 395
constant ChErrorTooManyRowsOrBytes (line 361) | ChErrorTooManyRowsOrBytes ChErrorType = 396
constant ChErrorQueryIsNotSupportedInMaterializedView (line 362) | ChErrorQueryIsNotSupportedInMaterializedView ChErrorType = 397
constant ChErrorUnknownMutationCommand (line 363) | ChErrorUnknownMutationCommand ChErrorType = 398
constant ChErrorFormatIsNotSuitableForOutput (line 364) | ChErrorFormatIsNotSuitableForOutput ChErrorType = 399
constant ChErrorCannotStat (line 365) | ChErrorCannotStat ChErrorType = 400
constant ChErrorFeatureIsNotEnabledAtBuildTime (line 366) | ChErrorFeatureIsNotEnabledAtBuildTime ChErrorType = 401
constant ChErrorCannotIosetup (line 367) | ChErrorCannotIosetup ChErrorType = 402
constant ChErrorInvalidJoinOnExpression (line 368) | ChErrorInvalidJoinOnExpression ChErrorType = 403
constant ChErrorBadOdbcConnectionString (line 369) | ChErrorBadOdbcConnectionString ChErrorType = 404
constant ChErrorPartitionSizeExceedsMaxDropSizeLimit (line 370) | ChErrorPartitionSizeExceedsMaxDropSizeLimit ChErrorType = 405
constant ChErrorTopAndLimitTogether (line 371) | ChErrorTopAndLimitTogether ChErrorType = 406
constant ChErrorDecimalOverflow (line 372) | ChErrorDecimalOverflow ChErrorType = 407
constant ChErrorBadRequestParameter (line 373) | ChErrorBadRequestParameter ChErrorType = 408
constant ChErrorExternalExecutableNotFound (line 374) | ChErrorExternalExecutableNotFound ChErrorType = 409
constant ChErrorExternalServerIsNotResponding (line 375) | ChErrorExternalServerIsNotResponding ChErrorType = 410
constant ChErrorPthreadError (line 376) | ChErrorPthreadError ChErrorType = 411
constant ChErrorNetlinkError (line 377) | ChErrorNetlinkError ChErrorType = 412
constant ChErrorCannotSetSignalHandler (line 378) | ChErrorCannotSetSignalHandler ChErrorType = 413
constant ChErrorAllReplicasLost (line 379) | ChErrorAllReplicasLost ChErrorType = 415
constant ChErrorReplicaStatusChanged (line 380) | ChErrorReplicaStatusChanged ChErrorType = 416
constant ChErrorExpectedAllOrAny (line 381) | ChErrorExpectedAllOrAny ChErrorType = 417
constant ChErrorUnknownJoin (line 382) | ChErrorUnknownJoin ChErrorType = 418
constant ChErrorMultipleAssignmentsToColumn (line 383) | ChErrorMultipleAssignmentsToColumn ChErrorType = 419
constant ChErrorCannotUpdateColumn (line 384) | ChErrorCannotUpdateColumn ChErrorType = 420
constant ChErrorCannotAddDifferentAggregateStates (line 385) | ChErrorCannotAddDifferentAggregateStates ChErrorType = 421
constant ChErrorUnsupportedURIScheme (line 386) | ChErrorUnsupportedURIScheme ChErrorType = 422
constant ChErrorCannotGettimeofday (line 387) | ChErrorCannotGettimeofday ChErrorType = 423
constant ChErrorCannotLink (line 388) | ChErrorCannotLink ChErrorType = 424
constant ChErrorSystemError (line 389) | ChErrorSystemError ChErrorType = 425
constant ChErrorCannotCompileRegexp (line 390) | ChErrorCannotCompileRegexp ChErrorType = 427
constant ChErrorUnknownLogLevel (line 391) | ChErrorUnknownLogLevel ChErrorType = 428
constant ChErrorFailedToGetpwuid (line 392) | ChErrorFailedToGetpwuid ChErrorType = 429
constant ChErrorMismatchingUsersForProcessAndData (line 393) | ChErrorMismatchingUsersForProcessAndData ChErrorType = 430
constant ChErrorIllegalSyntaxForCodecType (line 394) | ChErrorIllegalSyntaxForCodecType ChErrorType = 431
constant ChErrorUnknownCodec (line 395) | ChErrorUnknownCodec ChErrorType = 432
constant ChErrorIllegalCodecParameter (line 396) | ChErrorIllegalCodecParameter ChErrorType = 433
constant ChErrorCannotParseProtobufSchema (line 397) | ChErrorCannotParseProtobufSchema ChErrorType = 434
constant ChErrorNoColumnSerializedToRequiredProtobufField (line 398) | ChErrorNoColumnSerializedToRequiredProtobufField ChErrorType = 435
constant ChErrorProtobufBadCast (line 399) | ChErrorProtobufBadCast ChErrorType = 436
constant ChErrorProtobufFieldNotRepeated (line 400) | ChErrorProtobufFieldNotRepeated ChErrorType = 437
constant ChErrorDataTypeCannotBePromoted (line 401) | ChErrorDataTypeCannotBePromoted ChErrorType = 438
constant ChErrorCannotScheduleTask (line 402) | ChErrorCannotScheduleTask ChErrorType = 439
constant ChErrorInvalidLimitExpression (line 403) | ChErrorInvalidLimitExpression ChErrorType = 440
constant ChErrorCannotParseDomainValueFromString (line 404) | ChErrorCannotParseDomainValueFromString ChErrorType = 441
constant ChErrorBadDatabaseForTemporaryTable (line 405) | ChErrorBadDatabaseForTemporaryTable ChErrorType = 442
constant ChErrorNoColumnsSerializedToProtobufFields (line 406) | ChErrorNoColumnsSerializedToProtobufFields ChErrorType = 443
constant ChErrorUnknownProtobufFormat (line 407) | ChErrorUnknownProtobufFormat ChErrorType = 444
constant ChErrorCannotMprotect (line 408) | ChErrorCannotMprotect ChErrorType = 445
constant ChErrorFunctionNotAllowed (line 409) | ChErrorFunctionNotAllowed ChErrorType = 446
constant ChErrorHyperscanCannotScanText (line 410) | ChErrorHyperscanCannotScanText ChErrorType = 447
constant ChErrorBrotliReadFailed (line 411) | ChErrorBrotliReadFailed ChErrorType = 448
constant ChErrorBrotliWriteFailed (line 412) | ChErrorBrotliWriteFailed ChErrorType = 449
constant ChErrorBadTTLExpression (line 413) | ChErrorBadTTLExpression ChErrorType = 450
constant ChErrorBadTTLFile (line 414) | ChErrorBadTTLFile ChErrorType = 451
constant ChErrorSettingConstraintViolation (line 415) | ChErrorSettingConstraintViolation ChErrorType = 452
constant ChErrorMysqlClientInsufficientCapabilities (line 416) | ChErrorMysqlClientInsufficientCapabilities ChErrorType = 453
constant ChErrorOpensslError (line 417) | ChErrorOpensslError ChErrorType = 454
constant ChErrorSuspiciousTypeForLowCardinality (line 418) | ChErrorSuspiciousTypeForLowCardinality ChErrorType = 455
constant ChErrorUnknownQueryParameter (line 419) | ChErrorUnknownQueryParameter ChErrorType = 456
constant ChErrorBadQueryParameter (line 420) | ChErrorBadQueryParameter ChErrorType = 457
constant ChErrorCannotUnlink (line 421) | ChErrorCannotUnlink ChErrorType = 458
constant ChErrorCannotSetThreadPriority (line 422) | ChErrorCannotSetThreadPriority ChErrorType = 459
constant ChErrorCannotCreateTimer (line 423) | ChErrorCannotCreateTimer ChErrorType = 460
constant ChErrorCannotSetTimerPeriod (line 424) | ChErrorCannotSetTimerPeriod ChErrorType = 461
constant ChErrorCannotDeleteTimer (line 425) | ChErrorCannotDeleteTimer ChErrorType = 462
constant ChErrorCannotFcntl (line 426) | ChErrorCannotFcntl ChErrorType = 463
constant ChErrorCannotParseElf (line 427) | ChErrorCannotParseElf ChErrorType = 464
constant ChErrorCannotParseDwarf (line 428) | ChErrorCannotParseDwarf ChErrorType = 465
constant ChErrorInsecurePath (line 429) | ChErrorInsecurePath ChErrorType = 466
constant ChErrorCannotParseBool (line 430) | ChErrorCannotParseBool ChErrorType = 467
constant ChErrorCannotPthreadAttr (line 431) | ChErrorCannotPthreadAttr ChErrorType = 468
constant ChErrorViolatedConstraint (line 432) | ChErrorViolatedConstraint ChErrorType = 469
constant ChErrorQueryIsNotSupportedInLiveView (line 433) | ChErrorQueryIsNotSupportedInLiveView ChErrorType = 470
constant ChErrorInvalidSettingValue (line 434) | ChErrorInvalidSettingValue ChErrorType = 471
constant ChErrorReadonlySetting (line 435) | ChErrorReadonlySetting ChErrorType = 472
constant ChErrorDeadlockAvoided (line 436) | ChErrorDeadlockAvoided ChErrorType = 473
constant ChErrorInvalidTemplateFormat (line 437) | ChErrorInvalidTemplateFormat ChErrorType = 474
constant ChErrorInvalidWithFillExpression (line 438) | ChErrorInvalidWithFillExpression ChErrorType = 475
constant ChErrorWithTiesWithoutOrderBy (line 439) | ChErrorWithTiesWithoutOrderBy ChErrorType = 476
constant ChErrorInvalidUsageOfInput (line 440) | ChErrorInvalidUsageOfInput ChErrorType = 477
constant ChErrorUnknownPolicy (line 441) | ChErrorUnknownPolicy ChErrorType = 478
constant ChErrorUnknownDisk (line 442) | ChErrorUnknownDisk ChErrorType = 479
constant ChErrorUnknownProtocol (line 443) | ChErrorUnknownProtocol ChErrorType = 480
constant ChErrorPathAccessDenied (line 444) | ChErrorPathAccessDenied ChErrorType = 481
constant ChErrorDictionaryAccessDenied (line 445) | ChErrorDictionaryAccessDenied ChErrorType = 482
constant ChErrorTooManyRedirects (line 446) | ChErrorTooManyRedirects ChErrorType = 483
constant ChErrorInternalRedisError (line 447) | ChErrorInternalRedisError ChErrorType = 484
constant ChErrorScalarAlreadyExists (line 448) | ChErrorScalarAlreadyExists ChErrorType = 485
constant ChErrorCannotGetCreateDictionaryQuery (line 449) | ChErrorCannotGetCreateDictionaryQuery ChErrorType = 487
constant ChErrorUnknownDictionary (line 450) | ChErrorUnknownDictionary ChErrorType = 488
constant ChErrorIncorrectDictionaryDefinition (line 451) | ChErrorIncorrectDictionaryDefinition ChErrorType = 489
constant ChErrorCannotFormatDatetime (line 452) | ChErrorCannotFormatDatetime ChErrorType = 490
constant ChErrorUnacceptableURL (line 453) | ChErrorUnacceptableURL ChErrorType = 491
constant ChErrorAccessEntityNotFound (line 454) | ChErrorAccessEntityNotFound ChErrorType = 492
constant ChErrorAccessEntityAlreadyExists (line 455) | ChErrorAccessEntityAlreadyExists ChErrorType = 493
constant ChErrorAccessEntityFoundDuplicates (line 456) | ChErrorAccessEntityFoundDuplicates ChErrorType = 494
constant ChErrorAccessStorageReadonly (line 457) | ChErrorAccessStorageReadonly ChErrorType = 495
constant ChErrorQuotaRequiresClientKey (line 458) | ChErrorQuotaRequiresClientKey ChErrorType = 496
constant ChErrorAccessDenied (line 459) | ChErrorAccessDenied ChErrorType = 497
constant ChErrorLimitByWithTiesIsNotSupported (line 460) | ChErrorLimitByWithTiesIsNotSupported ChErrorType = 498
constant ChErrorS3Error (line 461) | ChErrorS3Error ChErrorType = 499
constant ChErrorAzureBlobStorageError (line 462) | ChErrorAzureBlobStorageError ChErrorType = 500
constant ChErrorCannotCreateDatabase (line 463) | ChErrorCannotCreateDatabase ChErrorType = 501
constant ChErrorCannotSigqueue (line 464) | ChErrorCannotSigqueue ChErrorType = 502
constant ChErrorAggregateFunctionThrow (line 465) | ChErrorAggregateFunctionThrow ChErrorType = 503
constant ChErrorFileAlreadyExists (line 466) | ChErrorFileAlreadyExists ChErrorType = 504
constant ChErrorCannotDeleteDirectory (line 467) | ChErrorCannotDeleteDirectory ChErrorType = 505
constant ChErrorUnexpectedErrorCode (line 468) | ChErrorUnexpectedErrorCode ChErrorType = 506
constant ChErrorUnableToSkipUnusedShards (line 469) | ChErrorUnableToSkipUnusedShards ChErrorType = 507
constant ChErrorUnknownAccessType (line 470) | ChErrorUnknownAccessType ChErrorType = 508
constant ChErrorInvalidGrant (line 471) | ChErrorInvalidGrant ChErrorType = 509
constant ChErrorCacheDictionaryUpdateFail (line 472) | ChErrorCacheDictionaryUpdateFail ChErrorType = 510
constant ChErrorUnknownRole (line 473) | ChErrorUnknownRole ChErrorType = 511
constant ChErrorSetNonGrantedRole (line 474) | ChErrorSetNonGrantedRole ChErrorType = 512
constant ChErrorUnknownPartType (line 475) | ChErrorUnknownPartType ChErrorType = 513
constant ChErrorAccessStorageForInsertionNotFound (line 476) | ChErrorAccessStorageForInsertionNotFound ChErrorType = 514
constant ChErrorIncorrectAccessEntityDefinition (line 477) | ChErrorIncorrectAccessEntityDefinition ChErrorType = 515
constant ChErrorAuthenticationFailed (line 478) | ChErrorAuthenticationFailed ChErrorType = 516
constant ChErrorCannotAssignAlter (line 479) | ChErrorCannotAssignAlter ChErrorType = 517
constant ChErrorCannotCommitOffset (line 480) | ChErrorCannotCommitOffset ChErrorType = 518
constant ChErrorNoRemoteShardAvailable (line 481) | ChErrorNoRemoteShardAvailable ChErrorType = 519
constant ChErrorCannotDetachDictionaryAsTable (line 482) | ChErrorCannotDetachDictionaryAsTable ChErrorType = 520
constant ChErrorAtomicRenameFail (line 483) | ChErrorAtomicRenameFail ChErrorType = 521
constant ChErrorUnknownRowPolicy (line 484) | ChErrorUnknownRowPolicy ChErrorType = 523
constant ChErrorAlterOfColumnIsForbidden (line 485) | ChErrorAlterOfColumnIsForbidden ChErrorType = 524
constant ChErrorIncorrectDiskIndex (line 486) | ChErrorIncorrectDiskIndex ChErrorType = 525
constant ChErrorNoSuitableFunctionImplementation (line 487) | ChErrorNoSuitableFunctionImplementation ChErrorType = 527
constant ChErrorCassandraInternalError (line 488) | ChErrorCassandraInternalError ChErrorType = 528
constant ChErrorNotALeader (line 489) | ChErrorNotALeader ChErrorType = 529
constant ChErrorCannotConnectRabbitmq (line 490) | ChErrorCannotConnectRabbitmq ChErrorType = 530
constant ChErrorCannotFstat (line 491) | ChErrorCannotFstat ChErrorType = 531
constant ChErrorLdapError (line 492) | ChErrorLdapError ChErrorType = 532
constant ChErrorInconsistentReservations (line 493) | ChErrorInconsistentReservations ChErrorType = 533
constant ChErrorNoReservationsProvided (line 494) | ChErrorNoReservationsProvided ChErrorType = 534
constant ChErrorUnknownRaidType (line 495) | ChErrorUnknownRaidType ChErrorType = 535
constant ChErrorCannotRestoreFromFieldDump (line 496) | ChErrorCannotRestoreFromFieldDump ChErrorType = 536
constant ChErrorIllegalMysqlVariable (line 497) | ChErrorIllegalMysqlVariable ChErrorType = 537
constant ChErrorMysqlSyntaxError (line 498) | ChErrorMysqlSyntaxError ChErrorType = 538
constant ChErrorCannotBindRabbitmqExchange (line 499) | ChErrorCannotBindRabbitmqExchange ChErrorType = 539
constant ChErrorCannotDeclareRabbitmqExchange (line 500) | ChErrorCannotDeclareRabbitmqExchange ChErrorType = 540
constant ChErrorCannotCreateRabbitmqQueueBinding (line 501) | ChErrorCannotCreateRabbitmqQueueBinding ChErrorType = 541
constant ChErrorCannotRemoveRabbitmqExchange (line 502) | ChErrorCannotRemoveRabbitmqExchange ChErrorType = 542
constant ChErrorUnknownMysqlDatatypesSupportLevel (line 503) | ChErrorUnknownMysqlDatatypesSupportLevel ChErrorType = 543
constant ChErrorRowAndRowsTogether (line 504) | ChErrorRowAndRowsTogether ChErrorType = 544
constant ChErrorFirstAndNextTogether (line 505) | ChErrorFirstAndNextTogether ChErrorType = 545
constant ChErrorNoRowDelimiter (line 506) | ChErrorNoRowDelimiter ChErrorType = 546
constant ChErrorInvalidRaidType (line 507) | ChErrorInvalidRaidType ChErrorType = 547
constant ChErrorUnknownVolume (line 508) | ChErrorUnknownVolume ChErrorType = 548
constant ChErrorDataTypeCannotBeUsedInKey (line 509) | ChErrorDataTypeCannotBeUsedInKey ChErrorType = 549
constant ChErrorConditionalTreeParentNotFound (line 510) | ChErrorConditionalTreeParentNotFound ChErrorType = 550
constant ChErrorIllegalProjectionManipulator (line 511) | ChErrorIllegalProjectionManipulator ChErrorType = 551
constant ChErrorUnrecognizedArguments (line 512) | ChErrorUnrecognizedArguments ChErrorType = 552
constant ChErrorLzmaStreamEncoderFailed (line 513) | ChErrorLzmaStreamEncoderFailed ChErrorType = 553
constant ChErrorLzmaStreamDecoderFailed (line 514) | ChErrorLzmaStreamDecoderFailed ChErrorType = 554
constant ChErrorRocksdbError (line 515) | ChErrorRocksdbError ChErrorType = 555
constant ChErrorSyncMysqlUserAccessErro (line 516) | ChErrorSyncMysqlUserAccessErro ChErrorType = 556
constant ChErrorUnknownUnion (line 517) | ChErrorUnknownUnion ChErrorType = 557
constant ChErrorExpectedAllOrDistinct (line 518) | ChErrorExpectedAllOrDistinct ChErrorType = 558
constant ChErrorInvalidGrpcQueryInfo (line 519) | ChErrorInvalidGrpcQueryInfo ChErrorType = 559
constant ChErrorZstdEncoderFailed (line 520) | ChErrorZstdEncoderFailed ChErrorType = 560
constant ChErrorZstdDecoderFailed (line 521) | ChErrorZstdDecoderFailed ChErrorType = 561
constant ChErrorTldListNotFound (line 522) | ChErrorTldListNotFound ChErrorType = 562
constant ChErrorCannotReadMapFromText (line 523) | ChErrorCannotReadMapFromText ChErrorType = 563
constant ChErrorInterserverSchemeDoesntMatch (line 524) | ChErrorInterserverSchemeDoesntMatch ChErrorType = 564
constant ChErrorTooManyPartitions (line 525) | ChErrorTooManyPartitions ChErrorType = 565
constant ChErrorCannotRmdir (line 526) | ChErrorCannotRmdir ChErrorType = 566
constant ChErrorDuplicatedPartUuids (line 527) | ChErrorDuplicatedPartUuids ChErrorType = 567
constant ChErrorRaftError (line 528) | ChErrorRaftError ChErrorType = 568
constant ChErrorMultipleColumnsSerializedToSameProtobufField (line 529) | ChErrorMultipleColumnsSerializedToSameProtobufField ChErrorType = 569
constant ChErrorDataTypeIncompatibleWithProtobufField (line 530) | ChErrorDataTypeIncompatibleWithProtobufField ChErrorType = 570
constant ChErrorDatabaseReplicationFailed (line 531) | ChErrorDatabaseReplicationFailed ChErrorType = 571
constant ChErrorTooManyQueryPlanOptimizations (line 532) | ChErrorTooManyQueryPlanOptimizations ChErrorType = 572
constant ChErrorEpollError (line 533) | ChErrorEpollError ChErrorType = 573
constant ChErrorDistributedTooManyPendingBytes (line 534) | ChErrorDistributedTooManyPendingBytes ChErrorType = 574
constant ChErrorUnknownSnapshot (line 535) | ChErrorUnknownSnapshot ChErrorType = 575
constant ChErrorKerberosError (line 536) | ChErrorKerberosError ChErrorType = 576
constant ChErrorInvalidShardID (line 537) | ChErrorInvalidShardID ChErrorType = 577
constant ChErrorInvalidFormatInsertQueryWithData (line 538) | ChErrorInvalidFormatInsertQueryWithData ChErrorType = 578
constant ChErrorIncorrectPartType (line 539) | ChErrorIncorrectPartType ChErrorType = 579
constant ChErrorCannotSetRoundingMode (line 540) | ChErrorCannotSetRoundingMode ChErrorType = 580
constant ChErrorTooLargeDistributedDepth (line 541) | ChErrorTooLargeDistributedDepth ChErrorType = 581
constant ChErrorNoSuchProjectionInTable (line 542) | ChErrorNoSuchProjectionInTable ChErrorType = 582
constant ChErrorIllegalProjection (line 543) | ChErrorIllegalProjection ChErrorType = 583
constant ChErrorProjectionNotUsed (line 544) | ChErrorProjectionNotUsed ChErrorType = 584
constant ChErrorCannotParseYaml (line 545) | ChErrorCannotParseYaml ChErrorType = 585
constant ChErrorCannotCreateFile (line 546) | ChErrorCannotCreateFile ChErrorType = 586
constant ChErrorConcurrentAccessNotSupported (line 547) | ChErrorConcurrentAccessNotSupported ChErrorType = 587
constant ChErrorDistributedBrokenBatchInfo (line 548) | ChErrorDistributedBrokenBatchInfo ChErrorType = 588
constant ChErrorDistributedBrokenBatchFiles (line 549) | ChErrorDistributedBrokenBatchFiles ChErrorType = 589
constant ChErrorCannotSysconf (line 550) | ChErrorCannotSysconf ChErrorType = 590
constant ChErrorSqliteEngineError (line 551) | ChErrorSqliteEngineError ChErrorType = 591
constant ChErrorDataEncryptionError (line 552) | ChErrorDataEncryptionError ChErrorType = 592
constant ChErrorZeroCopyReplicationError (line 553) | ChErrorZeroCopyReplicationError ChErrorType = 593
constant ChErrorBzip2StreamDecoderFailed (line 554) | ChErrorBzip2StreamDecoderFailed ChErrorType = 594
constant ChErrorBzip2StreamEncoderFailed (line 555) | ChErrorBzip2StreamEncoderFailed ChErrorType = 595
constant ChErrorIntersectOrExceptResultStructuresMismatch (line 556) | ChErrorIntersectOrExceptResultStructuresMismatch ChErrorType = 596
constant ChErrorNoSuchErrorCode (line 557) | ChErrorNoSuchErrorCode ChErrorType = 597
constant ChErrorBackupAlreadyExists (line 558) | ChErrorBackupAlreadyExists ChErrorType = 598
constant ChErrorBackupNotFound (line 559) | ChErrorBackupNotFound ChErrorType = 599
constant ChErrorBackupVersionNotSupported (line 560) | ChErrorBackupVersionNotSupported ChErrorType = 600
constant ChErrorBackupDamaged (line 561) | ChErrorBackupDamaged ChErrorType = 601
constant ChErrorNoBaseBackup (line 562) | ChErrorNoBaseBackup ChErrorType = 602
constant ChErrorWrongBaseBackup (line 563) | ChErrorWrongBaseBackup ChErrorType = 603
constant ChErrorBackupEntryAlreadyExists (line 564) | ChErrorBackupEntryAlreadyExists ChErrorType = 604
constant ChErrorBackupEntryNotFound (line 565) | ChErrorBackupEntryNotFound ChErrorType = 605
constant ChErrorBackupIsEmpty (line 566) | ChErrorBackupIsEmpty ChErrorType = 606
constant ChErrorBackupElementDuplicate (line 567) | ChErrorBackupElementDuplicate ChErrorType = 607
constant ChErrorCannotRestoreTable (line 568) | ChErrorCannotRestoreTable ChErrorType = 608
constant ChErrorFunctionAlreadyExists (line 569) | ChErrorFunctionAlreadyExists ChErrorType = 609
constant ChErrorCannotDropFunction (line 570) | ChErrorCannotDropFunction ChErrorType = 610
constant ChErrorCannotCreateRecursiveFunction (line 571) | ChErrorCannotCreateRecursiveFunction ChErrorType = 611
constant ChErrorObjectAlreadyStoredOnDisk (line 572) | ChErrorObjectAlreadyStoredOnDisk ChErrorType = 612
constant ChErrorObjectWasNotStoredOnDisk (line 573) | ChErrorObjectWasNotStoredOnDisk ChErrorType = 613
constant ChErrorPostgresqlConnectionFailure (line 574) | ChErrorPostgresqlConnectionFailure ChErrorType = 614
constant ChErrorCannotAdvise (line 575) | ChErrorCannotAdvise ChErrorType = 615
constant ChErrorUnknownReadMethod (line 576) | ChErrorUnknownReadMethod ChErrorType = 616
constant ChErrorLz4EncoderFailed (line 577) | ChErrorLz4EncoderFailed ChErrorType = 617
constant ChErrorLz4DecoderFailed (line 578) | ChErrorLz4DecoderFailed ChErrorType = 618
constant ChErrorPostgresqlReplicationInternalError (line 579) | ChErrorPostgresqlReplicationInternalError ChErrorType = 619
constant ChErrorQueryNotAllowed (line 580) | ChErrorQueryNotAllowed ChErrorType = 620
constant ChErrorCannotNormalizeString (line 581) | ChErrorCannotNormalizeString ChErrorType = 621
constant ChErrorCannotParseCapnProtoSchema (line 582) | ChErrorCannotParseCapnProtoSchema ChErrorType = 622
constant ChErrorCapnProtoBadCast (line 583) | ChErrorCapnProtoBadCast ChErrorType = 623
constant ChErrorBadFileType (line 584) | ChErrorBadFileType ChErrorType = 624
constant ChErrorIoSetupError (line 585) | ChErrorIoSetupError ChErrorType = 625
constant ChErrorCannotSkipUnknownField (line 586) | ChErrorCannotSkipUnknownField ChErrorType = 626
constant ChErrorBackupEngineNotFound (line 587) | ChErrorBackupEngineNotFound ChErrorType = 627
constant ChErrorOffsetFetchWithoutOrderBy (line 588) | ChErrorOffsetFetchWithoutOrderBy ChErrorType = 628
constant ChErrorHTTPRangeNotSatisfiable (line 589) | ChErrorHTTPRangeNotSatisfiable ChErrorType = 629
constant ChErrorHaveDependentObjects (line 590) | ChErrorHaveDependentObjects ChErrorType = 630
constant ChErrorUnknownFileSize (line 591) | ChErrorUnknownFileSize ChErrorType = 631
constant ChErrorUnexpectedDataAfterParsedValue (line 592) | ChErrorUnexpectedDataAfterParsedValue ChErrorType = 632
constant ChErrorQueryIsNotSupportedInWindowView (line 593) | ChErrorQueryIsNotSupportedInWindowView ChErrorType = 633
constant ChErrorMongodbError (line 594) | ChErrorMongodbError ChErrorType = 634
constant ChErrorCannotPoll (line 595) | ChErrorCannotPoll ChErrorType = 635
constant ChErrorCannotExtractTableStructure (line 596) | ChErrorCannotExtractTableStructure ChErrorType = 636
constant ChErrorInvalidTableOverride (line 597) | ChErrorInvalidTableOverride ChErrorType = 637
constant ChErrorSnappyUncompressFailed (line 598) | ChErrorSnappyUncompressFailed ChErrorType = 638
constant ChErrorSnappyCompressFailed (line 599) | ChErrorSnappyCompressFailed ChErrorType = 639
constant ChErrorNoHivemetastore (line 600) | ChErrorNoHivemetastore ChErrorType = 640
constant ChErrorCannotAppendToFile (line 601) | ChErrorCannotAppendToFile ChErrorType = 641
constant ChErrorCannotPackArchive (line 602) | ChErrorCannotPackArchive ChErrorType = 642
constant ChErrorCannotUnpackArchive (line 603) | ChErrorCannotUnpackArchive ChErrorType = 643
constant ChErrorKeeperException (line 604) | ChErrorKeeperException ChErrorType = 999
constant ChErrorPocoException (line 605) | ChErrorPocoException ChErrorType = 1000
constant ChErrorStdException (line 606) | ChErrorStdException ChErrorType = 1001
constant ChErrorUnknownException (line 607) | ChErrorUnknownException ChErrorType = 1002
FILE: errors_test.go
function TestChErrorReadError (line 14) | func TestChErrorReadError(t *testing.T) {
function NewParseConfigError (line 69) | func NewParseConfigError(conn, msg string, err error) error {
function TestConfigError (line 77) | func TestConfigError(t *testing.T) {
FILE: helper_test.go
type readErrorHelper (line 8) | type readErrorHelper struct
method Read (line 15) | func (r *readErrorHelper) Read(p []byte) (int, error) {
type writerErrorHelper (line 23) | type writerErrorHelper struct
method Write (line 30) | func (w *writerErrorHelper) Write(p []byte) (int, error) {
type writerSlowHelper (line 38) | type writerSlowHelper struct
method Write (line 43) | func (w *writerSlowHelper) Write(p []byte) (int, error) {
FILE: insert.go
type InsertStmt (line 10) | type InsertStmt interface
type insertStmt (line 21) | type insertStmt struct
method Flush (line 32) | func (s *insertStmt) Flush(ctx context.Context) error {
method Close (line 96) | func (s *insertStmt) Close() {
method Write (line 108) | func (s *insertStmt) Write(ctx context.Context, columns ...column.Colu...
method Insert (line 172) | func (ch *conn) Insert(ctx context.Context, query string, columns ...col...
method InsertWithOption (line 177) | func (ch *conn) InsertWithOption(
method InsertStream (line 208) | func (ch *conn) InsertStream(ctx context.Context, query string) (InsertS...
method InsertStreamWithOption (line 213) | func (ch *conn) InsertStreamWithOption(
FILE: insert_test.go
function TestInsertError (line 16) | func TestInsertError(t *testing.T) {
function TestInsertCtxError (line 97) | func TestInsertCtxError(t *testing.T) {
function TestInsertMoreColumnsError (line 136) | func TestInsertMoreColumnsError(t *testing.T) {
function TestInsertMoreRowsError (line 164) | func TestInsertMoreRowsError(t *testing.T) {
function TestInsert (line 198) | func TestInsert(t *testing.T) {
function TestInsertNotFoundColumn (line 283) | func TestInsertNotFoundColumn(t *testing.T) {
function TestCompressInsert (line 319) | func TestCompressInsert(t *testing.T) {
function TestInsertColumnError (line 395) | func TestInsertColumnError(t *testing.T) {
function TestInsertColumnErrorCompress (line 455) | func TestInsertColumnErrorCompress(t *testing.T) {
function TestInsertColumnDataError (line 521) | func TestInsertColumnDataError(t *testing.T) {
function TestInsertColumnDataErrorValidate (line 607) | func TestInsertColumnDataErrorValidate(t *testing.T) {
function TestInsertSelectStmt (line 639) | func TestInsertSelectStmt(t *testing.T) {
FILE: internal/ctxwatch/context_watcher.go
type ContextWatcher (line 10) | type ContextWatcher struct
method Watch (line 34) | func (cw *ContextWatcher) Watch(ctx context.Context) {
method Unwatch (line 62) | func (cw *ContextWatcher) Unwatch() {
function NewContextWatcher (line 23) | func NewContextWatcher(onCancel, onUnwatchAfterCancel func()) *ContextWa...
FILE: internal/ctxwatch/context_watcher_test.go
function TestContextWatcherContextCancelled (line 13) | func TestContextWatcherContextCancelled(t *testing.T) {
function TestContextWatcherUnwatchdBeforeContextCancelled (line 37) | func TestContextWatcherUnwatchdBeforeContextCancelled(t *testing.T) {
function TestContextWatcherMultipleWatchPanics (line 50) | func TestContextWatcherMultipleWatchPanics(t *testing.T) {
function TestContextWatcherUnwatchWhenNotWatchingIsSafe (line 62) | func TestContextWatcherUnwatchWhenNotWatchingIsSafe(t *testing.T) {
function TestContextWatcherUnwatchIsConcurrencySafe (line 73) | func TestContextWatcherUnwatchIsConcurrencySafe(t *testing.T) {
function TestContextWatcherStress (line 87) | func TestContextWatcherStress(t *testing.T) {
function BenchmarkContextWatcherUncancellable (line 136) | func BenchmarkContextWatcherUncancellable(b *testing.B) {
function BenchmarkContextWatcherCancelled (line 145) | func BenchmarkContextWatcherCancelled(b *testing.B) {
function BenchmarkContextWatcherCancellable (line 156) | func BenchmarkContextWatcherCancellable(b *testing.B) {
FILE: internal/helper/features.go
constant DbmsMinRevisionWithClientInfo (line 4) | DbmsMinRevisionWithClientInfo = 54032
constant DbmsMinRevisionWithServerTimezone (line 5) | DbmsMinRevisionWithServerTimezone = 54058
constant DbmsMinRevisionWithQuotaKeyInClientInfo (line 6) | DbmsMinRevisionWithQuotaKeyInClientInfo = 54060
constant DbmsMinRevisionWithServerDisplayName (line 7) | DbmsMinRevisionWithServerDisplayName = 54372
constant DbmsMinRevisionWithVersionPatch (line 8) | DbmsMinRevisionWithVersionPatch = 54401
constant DbmsMinRevisionWithClientWriteInfo (line 9) | DbmsMinRevisionWithClientWriteInfo = 54420
constant DbmsMinRevisionWithSettingsSerializedAsStrings (line 10) | DbmsMinRevisionWithSettingsSerializedAsStrings = 54429
constant DbmsMinRevisionWithInterServerSecret (line 11) | DbmsMinRevisionWithInterServerSecret = 54441
constant DbmsMinRevisionWithOpenTelemetry (line 12) | DbmsMinRevisionWithOpenTelemetry = 54442
constant DbmsMinProtocolVersionWithDistributedDepth (line 13) | DbmsMinProtocolVersionWithDistributedDepth = 54448
constant DbmsMinProtocolVersionWithInitialQueryStartTime (line 14) | DbmsMinProtocolVersionWithInitialQueryStartTime = 54449
constant DbmsMinProtocolVersionWithParallelReplicas (line 15) | DbmsMinProtocolVersionWithParallelReplicas = 54453
constant DbmsMinProtocolWithCustomSerialization (line 16) | DbmsMinProtocolWithCustomSerialization = 54454
constant DbmsMinProtocolWithQuotaKey (line 17) | DbmsMinProtocolWithQuotaKey = 54458
constant DbmsMinProtocolWithParameters (line 18) | DbmsMinProtocolWithParameters = 54459
constant DbmsMinProtocolWithServerQueryTimeInProgress (line 19) | DbmsMinProtocolWithServerQueryTimeInProgress = 54460
FILE: internal/helper/strs.go
constant TupleStr (line 4) | TupleStr = "Tuple("
constant LenTupleStr (line 5) | LenTupleStr = len(TupleStr)
constant PointStr (line 6) | PointStr = "Point"
constant PolygonStr (line 11) | PolygonStr = "Polygon"
constant MultiPolygonStr (line 15) | MultiPolygonStr = "MultiPolygon"
constant ArrayStr (line 20) | ArrayStr = "Array("
constant LenArrayStr (line 21) | LenArrayStr = len(ArrayStr)
constant ArrayTypeStr (line 22) | ArrayTypeStr = "Array(<type>)"
constant NestedStr (line 23) | NestedStr = "Nested("
constant LenNestedStr (line 24) | LenNestedStr = len(NestedStr)
constant NestedToArrayTube (line 25) | NestedToArrayTube = "Array(Nested("
constant RingStr (line 26) | RingStr = "Ring"
constant Enum8Str (line 32) | Enum8Str = "Enum8("
constant Enum8StrLen (line 33) | Enum8StrLen = len(Enum8Str)
constant Enum16Str (line 34) | Enum16Str = "Enum16("
constant Enum16StrLen (line 35) | Enum16StrLen = len(Enum16Str)
constant DateTimeStr (line 36) | DateTimeStr = "DateTime("
constant DateTimeStrLen (line 37) | DateTimeStrLen = len(DateTimeStr)
constant DateTime64Str (line 38) | DateTime64Str = "DateTime64("
constant DateTime64StrLen (line 39) | DateTime64StrLen = len(DateTime64Str)
constant DecimalStr (line 40) | DecimalStr = "Decimal("
constant DecimalStrLen (line 41) | DecimalStrLen = len(DecimalStr)
constant FixedStringStr (line 42) | FixedStringStr = "FixedString("
constant FixedStringStrLen (line 43) | FixedStringStrLen = len(FixedStringStr)
constant SimpleAggregateStr (line 44) | SimpleAggregateStr = "SimpleAggregateFunction("
constant SimpleAggregateStrLen (line 45) | SimpleAggregateStrLen = len(SimpleAggregateStr)
constant LowCardinalityStr (line 49) | LowCardinalityStr = "LowCardinality("
constant LenLowCardinalityStr (line 50) | LenLowCardinalityStr = len(LowCardinalityStr)
constant LowCardinalityTypeStr (line 51) | LowCardinalityTypeStr = "LowCardinality(<type>)"
constant LowCardinalityNullableStr (line 52) | LowCardinalityNullableStr = "LowCardinality(Nullable("
constant LenLowCardinalityNullableStr (line 53) | LenLowCardinalityNullableStr = len(LowCardinalityNullableStr)
constant LowCardinalityNullableTypeStr (line 54) | LowCardinalityNullableTypeStr = "LowCardinality(Nullable(<type>))"
constant MapStr (line 58) | MapStr = "Map("
constant LenMapStr (line 59) | LenMapStr = len(MapStr)
constant MapTypeStr (line 60) | MapTypeStr = "Map(<key>, <value>)"
constant NullableStr (line 64) | NullableStr = "Nullable("
constant LenNullableStr (line 65) | LenNullableStr = len(NullableStr)
constant NullableTypeStr (line 66) | NullableTypeStr = "Nullable(<type>)"
constant StringStr (line 70) | StringStr = "String"
FILE: internal/helper/validator.go
function IsEnum8 (line 9) | func IsEnum8(chType []byte) bool {
function ExtractEnum (line 13) | func ExtractEnum(data []byte) (intToStringMap map[int16]string, stringTo...
function IsEnum16 (line 35) | func IsEnum16(chType []byte) bool {
function IsDateTimeWithParam (line 39) | func IsDateTimeWithParam(chType []byte) bool {
function IsDateTime64 (line 43) | func IsDateTime64(chType []byte) bool {
function IsFixedString (line 47) | func IsFixedString(chType []byte) bool {
function IsDecimal (line 51) | func IsDecimal(chType []byte) bool {
function IsRing (line 55) | func IsRing(chType []byte) bool {
function IsMultiPolygon (line 59) | func IsMultiPolygon(chType []byte) bool {
function IsNested (line 63) | func IsNested(chType []byte) bool {
function NestedToArrayType (line 67) | func NestedToArrayType(chType []byte) []byte {
function IsArray (line 78) | func IsArray(chType []byte) bool {
function IsPolygon (line 82) | func IsPolygon(chType []byte) bool {
function IsString (line 86) | func IsString(chType []byte) bool {
function IsLowCardinality (line 90) | func IsLowCardinality(chType []byte) bool {
function IsNullableLowCardinality (line 94) | func IsNullableLowCardinality(chType []byte) bool {
function IsMap (line 99) | func IsMap(chType []byte) bool {
function IsNullable (line 103) | func IsNullable(chType []byte) bool {
function IsPoint (line 107) | func IsPoint(chType []byte) bool {
function IsTuple (line 111) | func IsTuple(chType []byte) bool {
type ColumnData (line 115) | type ColumnData struct
function TypesInParentheses (line 119) | func TypesInParentheses(b []byte) ([]ColumnData, error) {
function SplitNameType (line 166) | func SplitNameType(b []byte) (ColumnData, error) {
function FilterSimpleAggregate (line 196) | func FilterSimpleAggregate(chType []byte) []byte {
FILE: internal/readerwriter/compress_reader.go
type invalidCompressErr (line 15) | type invalidCompressErr struct
method Error (line 19) | func (e *invalidCompressErr) Error() string {
type compressReader (line 23) | type compressReader struct
method Read (line 41) | func (r *compressReader) Read(buf []byte) (n int, err error) {
method readBlock (line 53) | func (r *compressReader) readBlock() error {
function NewCompressReader (line 33) | func NewCompressReader(r io.Reader) io.Reader {
FILE: internal/readerwriter/compress_writer.go
type compressWriter (line 16) | type compressWriter struct
method Write (line 41) | func (cw *compressWriter) Write(buf []byte) (int, error) {
method Flush (line 61) | func (cw *compressWriter) Flush() error {
function NewCompressWriter (line 32) | func NewCompressWriter(w io.Writer, method byte) io.Writer {
FILE: internal/readerwriter/consts.go
type CompressMethod (line 10) | type CompressMethod
constant ChecksumSize (line 14) | ChecksumSize = 16
constant CompressHeaderSize (line 16) | CompressHeaderSize = 1 + 4 + 4
constant HeaderSize (line 19) | HeaderSize = ChecksumSize + CompressHeaderSize
constant BlockMaxSize (line 21) | BlockMaxSize = 1024 * 1024 * 128
constant CompressNone (line 26) | CompressNone CompressMethod = 0x00
constant CompressChecksum (line 27) | CompressChecksum CompressMethod = 0x02
constant CompressLZ4 (line 28) | CompressLZ4 CompressMethod = 0x82
constant CompressZSTD (line 29) | CompressZSTD CompressMethod = 0x90
constant checksumSize (line 36) | checksumSize = 16
constant compressHeaderSize (line 37) | compressHeaderSize = 1 + 4 + 4
constant headerSize (line 38) | headerSize = checksumSize + compressHeaderSize
constant maxDataSize (line 41) | maxDataSize = 1024 * 1024 * 2
constant maxBlockSize (line 42) | maxBlockSize = maxDataSize
constant hRawSize (line 44) | hRawSize = 17
constant hDataSize (line 45) | hDataSize = 21
constant hMethod (line 46) | hMethod = 16
type CorruptedDataErr (line 50) | type CorruptedDataErr struct
method Error (line 57) | func (c *CorruptedDataErr) Error() string {
FILE: internal/readerwriter/reader.go
type Reader (line 9) | type Reader struct
method SetCompress (line 25) | func (r *Reader) SetCompress(c bool) {
method Uvarint (line 37) | func (r *Reader) Uvarint() (uint64, error) {
method Int32 (line 42) | func (r *Reader) Int32() (int32, error) {
method Uint32 (line 51) | func (r *Reader) Uint32() (uint32, error) {
method Uint64 (line 59) | func (r *Reader) Uint64() (uint64, error) {
method FixedString (line 67) | func (r *Reader) FixedString(strlen int) ([]byte, error) {
method String (line 75) | func (r *Reader) String() (string, error) {
method ByteString (line 88) | func (r *Reader) ByteString() ([]byte, error) {
method ReadByte (line 100) | func (r *Reader) ReadByte() (byte, error) {
method Read (line 108) | func (r *Reader) Read(buf []byte) (int, error) {
function NewReader (line 17) | func NewReader(input io.Reader) *Reader {
FILE: internal/readerwriter/writer.go
type Writer (line 12) | type Writer struct
method Uvarint (line 25) | func (w *Writer) Uvarint(v uint64) {
method Int32 (line 31) | func (w *Writer) Int32(v int32) {
method Int64 (line 36) | func (w *Writer) Int64(v int64) {
method Uint8 (line 41) | func (w *Writer) Uint8(v uint8) {
method Uint32 (line 46) | func (w *Writer) Uint32(v uint32) {
method Uint64 (line 55) | func (w *Writer) Uint64(v uint64) {
method String (line 68) | func (w *Writer) String(v string) {
method ByteString (line 75) | func (w *Writer) ByteString(v []byte) {
method Write (line 81) | func (w *Writer) Write(b []byte) {
method WriteTo (line 86) | func (w *Writer) WriteTo(wt io.Writer) (int64, error) {
method Reset (line 91) | func (w *Writer) Reset() {
method Output (line 96) | func (w *Writer) Output() *bytes.Buffer {
function NewWriter (line 18) | func NewWriter() *Writer {
function str2Bytes (line 100) | func str2Bytes(str string) []byte {
FILE: ping.go
type pong (line 7) | type pong struct
method Ping (line 10) | func (ch *conn) Ping(ctx context.Context) error {
FILE: ping_test.go
function TestPing (line 15) | func TestPing(t *testing.T) {
function TestPingWriteError (line 26) | func TestPingWriteError(t *testing.T) {
function TestPingCtxError (line 66) | func TestPingCtxError(t *testing.T) {
FILE: profile.go
type Profile (line 4) | type Profile struct
method read (line 17) | func (p *Profile) read(ch *conn) (err error) {
function newProfile (line 13) | func newProfile() *Profile {
FILE: profile_event.go
type ProfileEvent (line 8) | type ProfileEvent struct
method read (line 28) | func (p ProfileEvent) read(c *conn) error {
function newProfileEvent (line 17) | func newProfileEvent() *ProfileEvent {
FILE: profile_test.go
function TestProfileReadError (line 17) | func TestProfileReadError(t *testing.T) {
FILE: progress.go
type Progress (line 6) | type Progress struct
method read (line 19) | func (p *Progress) read(ch *conn) (err error) {
function newProgress (line 15) | func newProgress() *Progress {
FILE: select_stmt.go
method Select (line 17) | func (ch *conn) Select(ctx context.Context, query string, columns ...col...
method SelectWithOption (line 23) | func (ch *conn) SelectWithOption(
type SelectStmt (line 86) | type SelectStmt interface
type selectStmt (line 105) | type selectStmt struct
method readEmptyBlock (line 121) | func (s *selectStmt) readEmptyBlock(b *block) error {
method Next (line 144) | func (s *selectStmt) Next() bool {
method validate (line 218) | func (s *selectStmt) validate() error {
method RowsInBlock (line 229) | func (s *selectStmt) RowsInBlock() int {
method Err (line 235) | func (s *selectStmt) Err() error {
method Close (line 243) | func (s *selectStmt) Close() {
method Columns (line 255) | func (s *selectStmt) Columns() []column.ColumnBasic {
method getColumnsByChType (line 259) | func (s *selectStmt) getColumnsByChType(b *block) ([]column.ColumnBasi...
method columnByType (line 278) | func (s *selectStmt) columnByType(chType []byte, arrayLevel int, nulla...
function getFixedType (line 450) | func getFixedType(fixedLen, arrayLevel int, nullable, lc bool) (column.C...
FILE: select_stmt_test.go
function TestSelectError (line 18) | func TestSelectError(t *testing.T) {
function TestSelectCtxError (line 63) | func TestSelectCtxError(t *testing.T) {
function TestSelectProgress (line 96) | func TestSelectProgress(t *testing.T) {
function TestSelectParameters (line 133) | func TestSelectParameters(t *testing.T) {
function TestSelectProgressError (line 237) | func TestSelectProgressError(t *testing.T) {
function TestGetFixedColumnType (line 318) | func TestGetFixedColumnType(t *testing.T) {
FILE: server_info.go
type ServerInfo (line 11) | type ServerInfo struct
method read (line 21) | func (srv *ServerInfo) read(r *readerwriter.Reader) (err error) {
method String (line 52) | func (srv *ServerInfo) String() string {
method ServerInfo (line 64) | func (ch *conn) ServerInfo() *ServerInfo {
FILE: server_info_test.go
function TestServerInfoError (line 13) | func TestServerInfoError(t *testing.T) {
FILE: settings.go
type Setting (line 15) | type Setting struct
method write (line 29) | func (st Setting) write(w *readerwriter.Writer) {
constant settingFlagImportant (line 21) | settingFlagImportant = 0x01
constant settingFlagCustom (line 22) | settingFlagCustom = 0x02
constant settingFlagObsolete (line 23) | settingFlagObsolete = 0x04
type Settings (line 27) | type Settings
method write (line 47) | func (s Settings) write(w *readerwriter.Writer) {
type Parameters (line 54) | type Parameters struct
method Params (line 241) | func (p *Parameters) Params() []Setting {
method hasParam (line 245) | func (p *Parameters) hasParam() bool {
method write (line 249) | func (p *Parameters) write(w *readerwriter.Writer) {
type Parameter (line 58) | type Parameter
function NewParameters (line 60) | func NewParameters(input ...Parameter) *Parameters {
function IntParameter (line 71) | func IntParameter[T ~int | ~int8 | ~int16 | ~int32 | ~int64](name string...
function IntSliceParameter (line 82) | func IntSliceParameter[T ~int | ~int8 | ~int16 | ~int32 | ~int64](name s...
function UintParameter (line 102) | func UintParameter[T ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64](name ...
function UintSliceParameter (line 113) | func UintSliceParameter[T ~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64](...
function Float32Parameter (line 134) | func Float32Parameter[T ~float32](name string, v T) Parameter {
function Float32SliceParameter (line 145) | func Float32SliceParameter[T ~float32](name string, v []T) Parameter {
function Float64Parameter (line 166) | func Float64Parameter[T ~float64](name string, v T) Parameter {
function Float64SliceParameter (line 177) | func Float64SliceParameter[T ~float64](name string, v []T) Parameter {
function addSlashes (line 197) | func addSlashes(str string) string {
function StringParameter (line 211) | func StringParameter(name, v string) Parameter {
function StringSliceParameter (line 222) | func StringSliceParameter(name string, v []string) Parameter {
FILE: sqlbuilder/injection.go
type injection (line 12) | type injection struct
method SQL (line 27) | func (injection *injection) SQL(marker injectionMarker, sql string) {
method WriteTo (line 33) | func (injection *injection) WriteTo(buf *bytes.Buffer, marker injectio...
type injectionMarker (line 16) | type injectionMarker
function newInjection (line 19) | func newInjection() *injection {
FILE: sqlbuilder/select.go
constant selectMarkerInit (line 16) | selectMarkerInit injectionMarker = iota
constant selectMarkerAfterSelect (line 17) | selectMarkerAfterSelect
constant selectMarkerAfterFrom (line 18) | selectMarkerAfterFrom
constant selectMarkerAfterArrayJoin (line 19) | selectMarkerAfterArrayJoin
constant selectMarkerAfterJoin (line 20) | selectMarkerAfterJoin
constant selectMarkerAfterPreWhere (line 21) | selectMarkerAfterPreWhere
constant selectMarkerAfterWhere (line 22) | selectMarkerAfterWhere
constant selectMarkerAfterGroupBy (line 23) | selectMarkerAfterGroupBy
constant selectMarkerAfterOrderBy (line 24) | selectMarkerAfterOrderBy
constant selectMarkerAfterLimit (line 25) | selectMarkerAfterLimit
constant selectMarkerAfterFor (line 26) | selectMarkerAfterFor
type JoinOption (line 30) | type JoinOption
constant InnerJoin (line 34) | InnerJoin JoinOption = "INNER"
constant LeftJoin (line 35) | LeftJoin JoinOption = "LEFT"
constant LeftOuterJoin (line 36) | LeftOuterJoin JoinOption = "LEFT OUTER"
constant LeftSemiJoin (line 37) | LeftSemiJoin JoinOption = "LEFT SEMI"
constant LeftAntiJoin (line 38) | LeftAntiJoin JoinOption = "LEFT ANTI"
constant RightJoin (line 39) | RightJoin JoinOption = "RIGHT"
constant RightOuterJoin (line 40) | RightOuterJoin JoinOption = "RIGHT OUTER"
constant RightSemiJoin (line 41) | RightSemiJoin JoinOption = "RIGHT SEMI"
constant RightAntiJoin (line 42) | RightAntiJoin JoinOption = "RIGHT ANTI"
constant FullJoin (line 43) | FullJoin JoinOption = "FULL"
constant FullOuterJoin (line 44) | FullOuterJoin JoinOption = "FULL OUTER"
constant CrossJoin (line 45) | CrossJoin JoinOption = "CROSS"
function NewSelectBuilder (line 48) | func NewSelectBuilder() *SelectBuilder {
type SelectBuilder (line 57) | type SelectBuilder struct
method Select (line 88) | func (sb *SelectBuilder) Select(col ...string) *SelectBuilder {
method Column (line 95) | func (sb *SelectBuilder) Column(col ...string) *SelectBuilder {
method Distinct (line 102) | func (sb *SelectBuilder) Distinct() *SelectBuilder {
method Final (line 109) | func (sb *SelectBuilder) Final() *SelectBuilder {
method From (line 116) | func (sb *SelectBuilder) From(table ...string) *SelectBuilder {
method ArrayJoin (line 127) | func (sb *SelectBuilder) ArrayJoin(onExpr ...string) *SelectBuilder {
method LeftArrayJoin (line 134) | func (sb *SelectBuilder) LeftArrayJoin() *SelectBuilder {
method Join (line 144) | func (sb *SelectBuilder) Join(table string, onExpr ...string) *SelectB...
method JoinWithOption (line 163) | func (sb *SelectBuilder) JoinWithOption(option JoinOption, table strin...
method Where (line 172) | func (sb *SelectBuilder) Where(andExpr ...string) *SelectBuilder {
method PreWhere (line 179) | func (sb *SelectBuilder) PreWhere(andExpr ...string) *SelectBuilder {
method Parameters (line 185) | func (sb *SelectBuilder) Parameters(p chconn.Parameter) *SelectBuilder {
method Having (line 191) | func (sb *SelectBuilder) Having(andExpr ...string) *SelectBuilder {
method GroupBy (line 198) | func (sb *SelectBuilder) GroupBy(col ...string) *SelectBuilder {
method OrderBy (line 205) | func (sb *SelectBuilder) OrderBy(col ...string) *SelectBuilder {
method Limit (line 212) | func (sb *SelectBuilder) Limit(limit int) *SelectBuilder {
method Offset (line 219) | func (sb *SelectBuilder) Offset(offset int) *SelectBuilder {
method String (line 231) | func (sb *SelectBuilder) String() string {
method Build (line 238) | func (sb *SelectBuilder) Build() (sql string, params *chconn.Parameter...
method SQL (line 334) | func (sb *SelectBuilder) SQL(sql string) *SelectBuilder {
function Select (line 83) | func Select(col ...string) *SelectBuilder {
function As (line 226) | func As(name, alias string) string {
FILE: sqlbuilder/select_test.go
function TestSelectBuilder (line 11) | func TestSelectBuilder(t *testing.T) {
FILE: types/Int256.go
function Int256Zero (line 12) | func Int256Zero() Int256 {
function Int256Max (line 17) | func Int256Max() Int256 {
type Int256 (line 26) | type Int256 struct
method Big (line 96) | func (u Int256) Big() *big.Int {
method Equals (line 111) | func (u Int256) Equals(v Int256) bool {
method Neg (line 116) | func (u Int256) Neg() (z Int256) {
function Int256From128 (line 33) | func Int256From128(v Int128) Int256 {
function Int256From64 (line 47) | func Int256From64(v int64) Int256 {
function Int256FromBig (line 54) | func Int256FromBig(i *big.Int) Int256 {
function Int256FromBigEx (line 63) | func Int256FromBigEx(i *big.Int) (Int256, bool) {
FILE: types/date_type.go
type Date (line 7) | type Date
method FromTime (line 29) | func (d Date) FromTime(v time.Time, precision int) Date {
method ToTime (line 33) | func (d Date) ToTime(loc *time.Location, precision int) time.Time {
method Unix (line 37) | func (d Date) Unix() int64 {
constant minDate32 (line 9) | minDate32 = int32(-25567)
type Date32 (line 11) | type Date32
method Unix (line 41) | func (d Date32) Unix() int64 {
method FromTime (line 45) | func (d Date32) FromTime(v time.Time, precision int) Date32 {
method ToTime (line 49) | func (d Date32) ToTime(loc *time.Location, precision int) time.Time {
type DateTime (line 13) | type DateTime
method FromTime (line 70) | func (d DateTime) FromTime(v time.Time, precision int) DateTime {
method ToTime (line 74) | func (d DateTime) ToTime(loc *time.Location, precision int) time.Time {
constant minDateTime64 (line 15) | minDateTime64 = int64(-2208988800)
type DateTime64 (line 17) | type DateTime64
method FromTime (line 98) | func (d DateTime64) FromTime(v time.Time, precision int) DateTime64 {
method ToTime (line 102) | func (d DateTime64) ToTime(loc *time.Location, precision int) time.Time {
constant daySeconds (line 19) | daySeconds = 24 * 60 * 60
function TimeToDate (line 21) | func TimeToDate(t time.Time) Date {
function TimeToDate32 (line 53) | func TimeToDate32(t time.Time) Date32 {
function TimeToDateTime (line 63) | func TimeToDateTime(t time.Time) DateTime {
function TimeToDateTime64 (line 91) | func TimeToDateTime64(t time.Time, precision int) DateTime64 {
FILE: types/decimal.go
type Decimal32 (line 4) | type Decimal32
method Float64 (line 23) | func (d Decimal32) Float64(scale int) float64 {
type Decimal64 (line 7) | type Decimal64
method Float64 (line 28) | func (d Decimal64) Float64(scale int) float64 {
type Decimal128 (line 10) | type Decimal128
type Decimal256 (line 13) | type Decimal256
function Decimal32FromFloat64 (line 33) | func Decimal32FromFloat64(f float64, scale int) Decimal32 {
function Decimal64FromFloat64 (line 38) | func Decimal64FromFloat64(f float64, scale int) Decimal64 {
FILE: types/decimal_test.go
function TestDecimal (line 9) | func TestDecimal(t *testing.T) {
FILE: types/int128.go
function Int128Zero (line 13) | func Int128Zero() Int128 {
function Int128Max (line 18) | func Int128Max() Int128 {
type Int128 (line 27) | type Int128 struct
method Big (line 89) | func (u Int128) Big() *big.Int {
method Equals (line 100) | func (u Int128) Equals(v Int128) bool {
method Neg (line 105) | func (u Int128) Neg() (z Int128) {
function Int128From64 (line 38) | func Int128From64(v int64) Int128 {
function Int128FromBig (line 49) | func Int128FromBig(i *big.Int) Int128 {
function Int128FromBigEx (line 58) | func Int128FromBigEx(i *big.Int) (Int128, bool) {
FILE: types/int128_test.go
function TestInt128 (line 11) | func TestInt128(t *testing.T) {
FILE: types/int256_test.go
function TestInt256 (line 11) | func TestInt256(t *testing.T) {
FILE: types/ip_test.go
function TestIP (line 10) | func TestIP(t *testing.T) {
FILE: types/ipv4.go
type IPv4 (line 8) | type IPv4
method NetIP (line 10) | func (ip IPv4) NetIP() netip.Addr {
function IPv4FromAddr (line 14) | func IPv4FromAddr(ipAddr netip.Addr) IPv4 {
FILE: types/ipv6.go
type IPv6 (line 5) | type IPv6
method NetIP (line 7) | func (ip IPv6) NetIP() netip.Addr {
function IPv6FromAddr (line 11) | func IPv6FromAddr(ipAddr netip.Addr) IPv6 {
FILE: types/tuple.go
type Point (line 3) | type Point
type Tuple2 (line 5) | type Tuple2 struct
type Tuple3 (line 10) | type Tuple3 struct
type Tuple4 (line 16) | type Tuple4 struct
type Tuple5 (line 23) | type Tuple5 struct
FILE: types/uint128.go
function Uint128Zero (line 13) | func Uint128Zero() Uint128 {
function Uint128Max (line 18) | func Uint128Max() Uint128 {
type Uint128 (line 27) | type Uint128 struct
method Big (line 77) | func (u Uint128) Big() *big.Int {
method Equals (line 87) | func (u Uint128) Equals(v Uint128) bool {
function Uint128From64 (line 38) | func Uint128From64(v uint64) Uint128 {
function Uint128FromBig (line 45) | func Uint128FromBig(i *big.Int) Uint128 {
function Uint128FromBigEx (line 54) | func Uint128FromBigEx(i *big.Int) (Uint128, bool) {
FILE: types/uint128_test.go
function TestUint128 (line 11) | func TestUint128(t *testing.T) {
FILE: types/uint256.go
function Uint256Zero (line 12) | func Uint256Zero() Uint256 {
function Uint256Max (line 17) | func Uint256Max() Uint256 {
type Uint256 (line 26) | type Uint256 struct
method Big (line 82) | func (u Uint256) Big() *big.Int {
method Equals (line 97) | func (u Uint256) Equals(v Uint256) bool {
function Uint256From128 (line 33) | func Uint256From128(v Uint128) Uint256 {
function Uint256From64 (line 39) | func Uint256From64(v uint64) Uint256 {
function Uint256FromBig (line 46) | func Uint256FromBig(i *big.Int) Uint256 {
function Uint256FromBigEx (line 55) | func Uint256FromBigEx(i *big.Int) (Uint256, bool) {
FILE: types/uint256_test.go
function TestUint256 (line 9) | func TestUint256(t *testing.T) {
FILE: types/uuid.go
type UUID (line 3) | type UUID
method BigEndian (line 18) | func (u UUID) BigEndian() [16]byte {
function UUIDFromBigEndian (line 5) | func UUIDFromBigEndian(b [16]byte) UUID {
FILE: types/uuid_test.go
function TestUUID (line 10) | func TestUUID(t *testing.T) {
Condensed preview — 123 files, each showing path, character count, and a content snippet (677K chars of structured content in total).
[
{
"path": ".codecov.yml",
"chars": 245,
"preview": "ignore:\n - \"**/main.go\"\n - \"./internal/readerwriter/*\"\ncoverage:\n status:\n project:\n default:\n target:"
},
{
"path": ".github/dependabot.yml",
"chars": 196,
"preview": "version: 2\nupdates:\n - package-ecosystem: gomod\n directory: \"/\"\n schedule:\n interval: daily\n - package-ecos"
},
{
"path": ".github/workflows/ci.yaml",
"chars": 1202,
"preview": "name: CI\n\non:\n push:\n branches:\n - master\n pull_request:\n\njobs:\n\n test-coverage:\n name: Test Coverage\n "
},
{
"path": ".github/workflows/lint.yaml",
"chars": 438,
"preview": "name: golangci-lint\non:\n push:\n branches:\n - main\n pull_request:\n\njobs:\n lint:\n name: lint\n runs-on: ub"
},
{
"path": ".gitignore",
"chars": 40,
"preview": ".envrc\nbin/\nvendor/\nbuild/\ncoverage.out\n"
},
{
"path": ".golangci.yml",
"chars": 2502,
"preview": "linters-settings:\n dupl:\n threshold: 100\n funlen:\n lines: 130\n statements: 60\n goconst:\n min-len: 5\n m"
},
{
"path": "LICENSE",
"chars": 1072,
"preview": "MIT License\n\nCopyright (c) 2020 vahid-sohrabloo\n\nPermission is hereby granted, free of charge, to any person obtaining a"
},
{
"path": "Makefile",
"chars": 8377,
"preview": "# A Self-Documenting Makefile: http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html\n\nOS = $(shell uname | t"
},
{
"path": "README.md",
"chars": 6276,
"preview": "[](https://pkg.go.dev/github.com/vahid"
},
{
"path": "block.go",
"chars": 6065,
"preview": "package chconn\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/column\"\n\t\"github.com/vahid-sohrabloo/ch"
},
{
"path": "block_test.go",
"chars": 2378,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com"
},
{
"path": "chconn.go",
"chars": 18007,
"preview": "package chconn\n\nimport (\n\t\"bufio\"\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.co"
},
{
"path": "chconn_test.go",
"chars": 7612,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testif"
},
{
"path": "chpool/common_test.go",
"chars": 5173,
"preview": "package chpool\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/test"
},
{
"path": "chpool/conn.go",
"chars": 4625,
"preview": "package chpool\n\nimport (\n\t\"context\"\n\t\"sync/atomic\"\n\n\tpuddle \"github.com/jackc/puddle/v2\"\n\t\"github.com/vahid-sohrabloo/ch"
},
{
"path": "chpool/insert_stmt.go",
"chars": 394,
"preview": "package chpool\n\nimport (\n\t\"context\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2\"\n)\n\ntype insertStmt struct {\n\tchconn.InsertS"
},
{
"path": "chpool/pool.go",
"chars": 22491,
"preview": "package chpool\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"math/rand\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"syscall"
},
{
"path": "chpool/pool_test.go",
"chars": 22627,
"preview": "package chpool\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"runtime\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/st"
},
{
"path": "chpool/select_stmt.go",
"chars": 501,
"preview": "package chpool\n\nimport (\n\t\"github.com/vahid-sohrabloo/chconn/v2\"\n)\n\ntype selectStmt struct {\n\tchconn.SelectStmt\n\tconn Co"
},
{
"path": "chpool/stat.go",
"chars": 2372,
"preview": "package chpool\n\nimport (\n\t\"time\"\n\n\t\"github.com/jackc/puddle/v2\"\n)\n\n// Stat is a snapshot of Pool statistics.\ntype Stat s"
},
{
"path": "client_info.go",
"chars": 2397,
"preview": "package chconn\n\nimport (\n\t\"os/user\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n)\n\n// ClientInfo Informatio"
},
{
"path": "column/array.go",
"chars": 2281,
"preview": "package column\n\n// Array is a column of Array(T) ClickHouse data type\ntype Array[T any] struct {\n\tArrayBase\n\tcolumnData "
},
{
"path": "column/array2.go",
"chars": 1631,
"preview": "package column\n\n// Array2 is a column of Array(Array(T)) ClickHouse data type\ntype Array2[T any] struct {\n\tArrayBase\n}\n\n"
},
{
"path": "column/array2_nullable.go",
"chars": 2652,
"preview": "package column\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n\n// Array is a column of Array(Array"
},
{
"path": "column/array3.go",
"chars": 1769,
"preview": "package column\n\n// Array3 is a column of Array(Array(Array(T))) ClickHouse data type\ntype Array3[T any] struct {\n\tArrayB"
},
{
"path": "column/array3_nullable.go",
"chars": 2533,
"preview": "package column\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n\n// Array is a column of Array(Array"
},
{
"path": "column/array_base.go",
"chars": 4435,
"preview": "package column\n\nimport (\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/hel"
},
{
"path": "column/array_nullable.go",
"chars": 2886,
"preview": "package column\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n\n// Array is a column of Array(Nulla"
},
{
"path": "column/base.go",
"chars": 3676,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"unsafe\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n)\n\n// Column us"
},
{
"path": "column/base_big_cpu.go",
"chars": 1292,
"preview": "//go:build !(386 || amd64 || amd64p32 || arm || arm64 || mipsle || mips64le || mips64p32le || ppc64le || riscv || riscv6"
},
{
"path": "column/base_little_cpu.go",
"chars": 940,
"preview": "//go:build 386 || amd64 || amd64p32 || arm || arm64 || mipsle || mips64le || mips64p32le || ppc64le || riscv || riscv64\n"
},
{
"path": "column/base_test.go",
"chars": 20986,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"math\"\n\t\"math/big\"\n\t\"net/netip\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/google/u"
},
{
"path": "column/base_validate.go",
"chars": 4416,
"preview": "package column\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"strconv\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n)\n\nvar chCo"
},
{
"path": "column/bench_test.go",
"chars": 1861,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"testing\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2\"\n\t\"github.com/vahid-sohrablo"
},
{
"path": "column/column_helper.go",
"chars": 2329,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"github.com/vahid-sohrab"
},
{
"path": "column/date.go",
"chars": 4387,
"preview": "package column\n\nimport (\n\t\"strings\"\n\t\"time\"\n\t\"unsafe\"\n)\n\n// DateType is an interface to handle convert between time.Time"
},
{
"path": "column/date_test.go",
"chars": 17719,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github"
},
{
"path": "column/error_test.go",
"chars": 42366,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/"
},
{
"path": "column/errors.go",
"chars": 291,
"preview": "package column\n\nimport (\n\t\"fmt\"\n)\n\ntype ErrInvalidType struct {\n\tcolumn ColumnBasic\n\tColumnType string\n}\n\nfunc (e Er"
},
{
"path": "column/helper_test.go",
"chars": 544,
"preview": "package column_test\n\nimport (\n\t\"io\"\n)\n\ntype readErrorHelper struct {\n\tnumberValid int\n\terr error\n\tr io"
},
{
"path": "column/lc.go",
"chars": 8679,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"math\"\n\t\"strings\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"gith"
},
{
"path": "column/lc_indices.go",
"chars": 840,
"preview": "package column\n\nimport (\n\t\"io\"\n\t\"unsafe\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n)\n\ntype indicesC"
},
{
"path": "column/lc_nullable.go",
"chars": 3283,
"preview": "package column\n\n// LowCardinalityNullable for LowCardinality(Nullable(T)) ClickHouse DataTypes\ntype LowCardinalityNullab"
},
{
"path": "column/lc_test.go",
"chars": 3139,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "column/map.go",
"chars": 2565,
"preview": "package column\n\n// Map is a column of Map(K,V) ClickHouse data type\n// Map in clickhouse actually is a array of pair(K,V"
},
{
"path": "column/map_base.go",
"chars": 6324,
"preview": "package column\n\nimport (\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/hel"
},
{
"path": "column/map_nullable.go",
"chars": 2433,
"preview": "package column\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n\n// MapNullable is a column of Map(K"
},
{
"path": "column/map_test.go",
"chars": 22546,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "column/nested.go",
"chars": 256,
"preview": "package column\n\n// NewNested create a new nested of Nested(T1,T2,.....,Tn) ClickHouse data type\n//\n// this is actually a"
},
{
"path": "column/nested_test.go",
"chars": 3931,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "column/nullable.go",
"chars": 6595,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"strings\"\n\t\"unsafe\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"gi"
},
{
"path": "column/nullable_test.go",
"chars": 4949,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "column/point.go",
"chars": 209,
"preview": "package column\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/types\"\n\nfunc NewPoint() *Tuple2[types.Point, float64, float"
},
{
"path": "column/size.go",
"chars": 1636,
"preview": "package column\n\nconst (\n\t// Uint8Size data Size of Uint8 Column\n\tUint8Size = 1\n\t// Uint16Size data Size of Uint16 Column"
},
{
"path": "column/string.go",
"chars": 482,
"preview": "package column\n\n// String is a column of String ClickHouse data type\ntype String struct {\n\tStringBase[string]\n}\n\n// NewS"
},
{
"path": "column/string_base.go",
"chars": 5807,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"github.com/vahid-sohrab"
},
{
"path": "column/string_test.go",
"chars": 7357,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"git"
},
{
"path": "column/tuple.go",
"chars": 4238,
"preview": "package column\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"github.com/vahid-sohrab"
},
{
"path": "column/tuple1.go",
"chars": 1184,
"preview": "package column\n\n// Tuple1 is a column of Tuple(T1) ClickHouse data type\ntype Tuple1[T1 any] struct {\n\tTuple\n\tcol1 Column"
},
{
"path": "column/tuple2_gen.go",
"chars": 2243,
"preview": "package column\n\nimport (\n\t\"unsafe\"\n)\n\ntype tuple2Value[T1, T2 any] struct {\n\tCol1 T1\n\tCol2 T2\n}\n\n// Tuple2 is a column o"
},
{
"path": "column/tuple3_gen.go",
"chars": 2562,
"preview": "package column\n\nimport (\n\t\"unsafe\"\n)\n\ntype tuple3Value[T1, T2, T3 any] struct {\n\tCol1 T1\n\tCol2 T2\n\tCol3 T3\n}\n\n// Tuple3 "
},
{
"path": "column/tuple4_gen.go",
"chars": 2881,
"preview": "package column\n\nimport (\n\t\"unsafe\"\n)\n\ntype tuple4Value[T1, T2, T3, T4 any] struct {\n\tCol1 T1\n\tCol2 T2\n\tCol3 T3\n\tCol4 T4\n"
},
{
"path": "column/tuple5_gen.go",
"chars": 3200,
"preview": "package column\n\nimport (\n\t\"unsafe\"\n)\n\ntype tuple5Value[T1, T2, T3, T4, T5 any] struct {\n\tCol1 T1\n\tCol2 T2\n\tCol3 T3\n\tCol4"
},
{
"path": "column/tuple_test.go",
"chars": 17091,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "column/tuples_template/tuple.go.tmpl",
"chars": 4413,
"preview": "package column\n\nimport (\n\t\"unsafe\"\n)\n\ntype tuple{{.Numbrer}}Value[T1{{- range $val := iterate .Numbrer \"2\" }}, T{{ $val "
},
{
"path": "column/tuples_template/tuple2.json",
"chars": 22,
"preview": "{\n \"Numbrer\": \"2\"\n}"
},
{
"path": "column/tuples_template/tuple3.json",
"chars": 22,
"preview": "{\n \"Numbrer\": \"3\"\n}"
},
{
"path": "column/tuples_template/tuple4.json",
"chars": 22,
"preview": "{\n \"Numbrer\": \"4\"\n}"
},
{
"path": "column/tuples_template/tuple5.json",
"chars": 22,
"preview": "{\n \"Numbrer\": \"5\"\n}"
},
{
"path": "column/tuples_test.go",
"chars": 11043,
"preview": "package column_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/str"
},
{
"path": "config.go",
"chars": 18640,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"crypto/x509\"\n\t\"fmt\"\n\t\"math\"\n\t\"net\"\n\t\"net/url\"\n\t\"os\"\n\t\"strconv\"\n\t\"str"
},
{
"path": "config_test.go",
"chars": 23306,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"crypto/tls\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testi"
},
{
"path": "doc.go",
"chars": 308,
"preview": "// Package chconn is a low-level Clickhouse database driver.\n/*\nchconn is a pure Go driver for [ClickHouse] that use Nat"
},
{
"path": "doc_test.go",
"chars": 2537,
"preview": "package chconn_test\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/chpool\"\n\t\"github.c"
},
{
"path": "errors.go",
"chars": 8179,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"net\"\n\t\"net/url\"\n\t\"regexp\"\n\t\"strings\"\n\n\t\"github.com/vahid-sohrablo"
},
{
"path": "errors_ch_code.go",
"chars": 59575,
"preview": "package chconn\n\ntype ChErrorType int32\n\nconst (\n\tChErrorOk ChErrorType = 0 "
},
{
"path": "errors_test.go",
"chars": 3370,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com"
},
{
"path": "go.mod",
"chars": 530,
"preview": "module github.com/vahid-sohrabloo/chconn/v2\n\ngo 1.18\n\nrequire (\n\tgithub.com/go-faster/city v1.0.1\n\tgithub.com/google/uui"
},
{
"path": "go.sum",
"chars": 2739,
"preview": "github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/davecgh/go-spew v1.1"
},
{
"path": "helper_test.go",
"chars": 722,
"preview": "package chconn\n\nimport (\n\t\"io\"\n\t\"time\"\n)\n\ntype readErrorHelper struct {\n\tnumberValid int\n\terr error\n\tr "
},
{
"path": "insert.go",
"chars": 6787,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/column\"\n)\n\n// InsertStmt is a interface for "
},
{
"path": "insert_test.go",
"chars": 18068,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"gi"
},
{
"path": "internal/ctxwatch/context_watcher.go",
"chars": 1840,
"preview": "package ctxwatch\n\nimport (\n\t\"context\"\n\t\"sync\"\n)\n\n// ContextWatcher watches a context and performs an action when the con"
},
{
"path": "internal/ctxwatch/context_watcher_test.go",
"chars": 4021,
"preview": "package ctxwatch_test\n\nimport (\n\t\"context\"\n\t\"sync/atomic\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\t\"g"
},
{
"path": "internal/helper/features.go",
"chars": 938,
"preview": "package helper\n\nconst (\n\tDbmsMinRevisionWithClientInfo = 54032\n\tDbmsMinRevisionWithServerTimezone "
},
{
"path": "internal/helper/strs.go",
"chars": 1894,
"preview": "package helper\n\nconst (\n\tTupleStr = \"Tuple(\"\n\tLenTupleStr = len(TupleStr)\n\tPointStr = \"Point\"\n)\n\nvar PointMainType"
},
{
"path": "internal/helper/validator.go",
"chars": 4970,
"preview": "package helper\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"strconv\"\n)\n\nfunc IsEnum8(chType []byte) bool {\n\treturn len(chType) > Enum8Str"
},
{
"path": "internal/readerwriter/compress_reader.go",
"chars": 3553,
"preview": "package readerwriter\n\n// copy from https://github.com/ClickHouse/ch-go/blob/4cde4e4bec24211c0bcdc6f385f4212d0ad522d9/com"
},
{
"path": "internal/readerwriter/compress_writer.go",
"chars": 2584,
"preview": "package readerwriter\n\n// copy from https://github.com/ClickHouse/ch-go/blob/4cde4e4bec24211c0bcdc6f385f4212d0ad522d9/com"
},
{
"path": "internal/readerwriter/consts.go",
"chars": 1461,
"preview": "package readerwriter\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/go-faster/city\"\n)\n\n// Method is compression codec.\ntype CompressMeth"
},
{
"path": "internal/readerwriter/reader.go",
"chars": 2267,
"preview": "package readerwriter\n\nimport (\n\t\"encoding/binary\"\n\t\"io\"\n)\n\n// Reader is a helper to read data from reader\ntype Reader st"
},
{
"path": "internal/readerwriter/writer.go",
"chars": 2091,
"preview": "package readerwriter\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"io\"\n\t\"reflect\"\n\t\"unsafe\"\n)\n\n// Writer is a helper to write "
},
{
"path": "ping.go",
"chars": 906,
"preview": "package chconn\n\nimport (\n\t\"context\"\n)\n\ntype pong struct{}\n\n// Check that connection to the server is alive.\nfunc (ch *co"
},
{
"path": "ping_test.go",
"chars": 2379,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"gi"
},
{
"path": "profile.go",
"chars": 1098,
"preview": "package chconn\n\n// Profile detail of profile select query\ntype Profile struct {\n\tRows uint64\n\tBlock"
},
{
"path": "profile_event.go",
"chars": 723,
"preview": "package chconn\n\nimport (\n\t\"github.com/vahid-sohrabloo/chconn/v2/column\"\n)\n\n// Profile detail of profile select query\ntyp"
},
{
"path": "profile_test.go",
"chars": 2565,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"git"
},
{
"path": "progress.go",
"chars": 1252,
"preview": "package chconn\n\nimport \"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\n// Progress details of progress select qu"
},
{
"path": "select_stmt.go",
"chars": 19408,
"preview": "package chconn\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"strconv\"\n\t\"time\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/column\"\n\t"
},
{
"path": "select_stmt_test.go",
"chars": 13969,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"gi"
},
{
"path": "server_info.go",
"chars": 1921,
"preview": "package chconn\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/helper\"\n\t\"github.com/vahid-sohrabloo/ch"
},
{
"path": "server_info_test.go",
"chars": 1888,
"preview": "package chconn\n\nimport (\n\t\"context\"\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc Tes"
},
{
"path": "settings.go",
"chars": 5458,
"preview": "package chconn\n\nimport (\n\t\"strconv\"\n\t\"strings\"\n\n\t\"github.com/vahid-sohrabloo/chconn/v2/internal/readerwriter\"\n)\n\n// Sett"
},
{
"path": "sqlbuilder/injection.go",
"chars": 1177,
"preview": "// sqlbuilder is a builder for SQL statements for clickhouse.\n// copy from https://github.com/huandu/go-sqlbuilder\n// ch"
},
{
"path": "sqlbuilder/select.go",
"chars": 8648,
"preview": "// sqlbuilder is a builder for SQL statements for clickhouse.\n// copy from https://github.com/huandu/go-sqlbuilder\n// ch"
},
{
"path": "sqlbuilder/select_test.go",
"chars": 2012,
"preview": "package sqlbuilder\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n\t\"github.com/stretchr/testify/require\"\n\t\""
},
{
"path": "types/Int256.go",
"chars": 3016,
"preview": "package types\n\nimport (\n\t\"math/big\"\n)\n\n// Note, Zero and Max are functions just to make read-only values.\n// We cannot d"
},
{
"path": "types/date_type.go",
"chars": 2101,
"preview": "package types\n\nimport (\n\t\"time\"\n)\n\ntype Date uint16\n\nconst minDate32 = int32(-25567) // 1900-01-01 00:00:00 +0000 UTC\n\nt"
},
{
"path": "types/decimal.go",
"chars": 1142,
"preview": "package types\n\n// Decimal32 represents a 32-bit decimal number.\ntype Decimal32 int32\n\n// Decimal64 represents a 64-bit d"
},
{
"path": "types/decimal_test.go",
"chars": 402,
"preview": "package types\n\nimport (\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestDecimal(t *testing.T) {\n\td32 := De"
},
{
"path": "types/int128.go",
"chars": 2759,
"preview": "package types\n\nimport (\n\t\"math\"\n\t\"math/big\"\n)\n\n// Note, Zero and Max are functions just to make read-only values.\n// We "
},
{
"path": "types/int128_test.go",
"chars": 751,
"preview": "package types\n\nimport (\n\t\"math/big\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\n// TestUint128 unit tests for v"
},
{
"path": "types/int256_test.go",
"chars": 735,
"preview": "package types\n\nimport (\n\t\"math/big\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\n// TestUint256 unit tests for v"
},
{
"path": "types/ip_test.go",
"chars": 446,
"preview": "package types\n\nimport (\n\t\"net/netip\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestIP(t *testing.T) {\n\ti"
},
{
"path": "types/ipv4.go",
"chars": 393,
"preview": "package types\n\nimport \"net/netip\"\n\n//\tIPv4 is a compatible type for IPv4 address in clickhouse.\n//\n// clickhouse use Lit"
},
{
"path": "types/ipv6.go",
"chars": 197,
"preview": "package types\n\nimport \"net/netip\"\n\ntype IPv6 [16]byte\n\nfunc (ip IPv6) NetIP() netip.Addr {\n\treturn netip.AddrFrom16(ip)\n"
},
{
"path": "types/tuple.go",
"chars": 345,
"preview": "package types\n\ntype Point Tuple2[float64, float64]\n\ntype Tuple2[T1, T2 any] struct {\n\tCol1 T1\n\tCol2 T2\n}\n\ntype Tuple3[T1"
},
{
"path": "types/uint128.go",
"chars": 2527,
"preview": "package types\n\nimport (\n\t\"math\"\n\t\"math/big\"\n)\n\n// Note, Zero and Max are functions just to make read-only values.\n// We "
},
{
"path": "types/uint128_test.go",
"chars": 1067,
"preview": "package types\n\nimport (\n\t\"math/big\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/assert\"\n)\n\n// TestUint128 unit tests for v"
},
{
"path": "types/uint256.go",
"chars": 2795,
"preview": "package types\n\nimport (\n\t\"math/big\"\n)\n\n// Note, Zero and Max are functions just to make read-only values.\n// We cannot d"
},
{
"path": "types/uint256_test.go",
"chars": 799,
"preview": "package types\n\nimport (\n\t\"math/big\"\n\t\"testing\"\n)\n\n// TestUint256 unit tests for various Uint256 helpers.\nfunc TestUint25"
},
{
"path": "types/uuid.go",
"chars": 422,
"preview": "package types\n\ntype UUID [16]byte\n\nfunc UUIDFromBigEndian(b [16]byte) UUID {\n\tvar val [16]byte\n\tval[0], val[7] = b[7], b"
},
{
"path": "types/uuid_test.go",
"chars": 238,
"preview": "package types\n\nimport (\n\t\"testing\"\n\n\t\"github.com/google/uuid\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestUUID(t *"
}
]
About this extraction
This page contains the full source code of the vahid-sohrabloo/chconn GitHub repository, extracted and formatted as plain text: 123 files (603.4 KB, approximately 177.3k tokens) plus a symbol index of 1703 extracted functions, classes, methods, constants, and types.
Extracted by GitExtract. Built by Nikandr Surkov.