Repository: bgentry/que-go
Branch: master
Commit: 15f2338ea6fe
Files: 12
Total size: 47.0 KB
Directory structure:
gitextract_ssiubae7/
├── LICENSE
├── README.md
├── doc.go
├── enqueue_test.go
├── que.go
├── que_test.go
├── schema.sql
├── sql.go
├── util.go
├── work_test.go
├── worker.go
└── worker_test.go
================================================
FILE CONTENTS
================================================
================================================
FILE: LICENSE
================================================
The MIT License (MIT)
Copyright (c) 2014 Blake Gentry
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# que-go
[GoDoc documentation][godoc]
## Unmaintained
⚠️ **que-go is unmaintained** ⚠️. Please check out [River](https://riverqueue.com) for a fast, reliable Postgres job queue in Go.
## Overview
Que-go is a fully interoperable Golang port of [Chris Hanks][chanks]' [Ruby Que
queuing library][ruby-que] for PostgreSQL. Que uses PostgreSQL's advisory locks
for speed and reliability.
Because que-go is an interoperable port of Que, you can enqueue jobs in Ruby
(i.e. from a Rails app) and write your workers in Go. Or if you have a limited
set of jobs that you want to write in Go, you can leave most of your workers in
Ruby and just add a few Go workers on a different queue name. Or you can just
write everything in Go :)
## pgx PostgreSQL driver
This package uses the [pgx][pgx] Go PostgreSQL driver rather than the more
popular [pq][pq]. Because Que uses session-level advisory locks, we have to hold
the same connection throughout the process of getting a job, working it,
deleting it, and removing the lock.
Pq and the built-in database/sql interfaces do not offer this functionality, so
we'd have to implement our own connection pool. Fortunately, pgx already has a
perfectly usable one built for us. Even better, it offers better performance
than pq due largely to its use of binary encoding.
Please see the [godocs][godoc] for more info and examples.
[godoc]: https://godoc.org/github.com/bgentry/que-go
[chanks]: https://github.com/chanks
[ruby-que]: https://github.com/chanks/que
[pgx]: https://github.com/jackc/pgx
[pq]: https://github.com/lib/pq
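Because job arguments travel as plain JSON in the `args` column, a Go enqueuer and a worker in either language agree on the payload. A minimal stdlib-only sketch of encoding and decoding args (the struct name and field tag are illustrative, not part of que-go's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// printNameArgs mirrors the arguments a job handler would decode; the
// struct name and json tag are illustrative only.
type printNameArgs struct {
	Name string `json:"name"`
}

// encodeArgs produces the JSON bytes that would be stored in que_jobs.args.
func encodeArgs(name string) []byte {
	args, err := json.Marshal(printNameArgs{Name: name})
	if err != nil {
		panic(err)
	}
	return args
}

// decodeArgs is what a worker (in Go or Ruby) does with the same bytes.
func decodeArgs(raw []byte) printNameArgs {
	var args printNameArgs
	if err := json.Unmarshal(raw, &args); err != nil {
		panic(err)
	}
	return args
}

func main() {
	raw := encodeArgs("bgentry")
	fmt.Println(string(raw))          // {"name":"bgentry"}
	fmt.Println(decodeArgs(raw).Name) // bgentry
}
```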
================================================
FILE: doc.go
================================================
/*
Package que is a fully interoperable Golang port of Chris Hanks' Ruby Que
queueing library for PostgreSQL. Que uses PostgreSQL's advisory locks
for speed and reliability. See the original Que documentation for more details:
https://github.com/chanks/que
Because que-go is an interoperable port of Que, you can enqueue jobs in Ruby
(i.e. from a Rails app) and write your workers in Go. Or if you have a limited
set of jobs that you want to write in Go, you can leave most of your workers in
Ruby and just add a few Go workers on a different queue name.
PostgreSQL Driver pgx
Instead of using database/sql and the more popular pq PostgreSQL driver, this
package uses the pgx driver: https://github.com/jackc/pgx
Because Que uses session-level advisory locks, we have to hold the same
connection throughout the process of getting a job, working it, deleting it, and
removing the lock.
Pq and the built-in database/sql interfaces do not offer this functionality, so
we'd have to implement our own connection pool. Fortunately, pgx already has a
perfectly usable one built for us. Even better, it offers better performance
than pq due largely to its use of binary encoding.
Prepared Statements
que-go relies on prepared statements for performance. As of now these have to
be initialized manually on your connection pool like so:
	pgxpool, err := pgx.NewConnPool(pgx.ConnPoolConfig{
		ConnConfig:   pgxcfg,
		AfterConnect: que.PrepareStatements,
	})
If you have suggestions on how to cleanly do this automatically, please open an
issue!
Usage
Here is a complete example showing worker setup and two jobs enqueued, one with a delay:
	type printNameArgs struct {
		Name string
	}

	printName := func(j *que.Job) error {
		var args printNameArgs
		if err := json.Unmarshal(j.Args, &args); err != nil {
			return err
		}
		fmt.Printf("Hello %s!\n", args.Name)
		return nil
	}

	pgxcfg, err := pgx.ParseURI(os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}

	pgxpool, err := pgx.NewConnPool(pgx.ConnPoolConfig{
		ConnConfig:   pgxcfg,
		AfterConnect: que.PrepareStatements,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer pgxpool.Close()

	qc := que.NewClient(pgxpool)

	wm := que.WorkMap{
		"PrintName": printName,
	}
	workers := que.NewWorkerPool(qc, wm, 2) // create a pool w/ 2 workers
	go workers.Start()                      // work jobs in another goroutine

	args, err := json.Marshal(printNameArgs{Name: "bgentry"})
	if err != nil {
		log.Fatal(err)
	}

	j := &que.Job{
		Type: "PrintName",
		Args: args,
	}
	if err := qc.Enqueue(j); err != nil {
		log.Fatal(err)
	}

	j = &que.Job{
		Type:  "PrintName",
		RunAt: time.Now().UTC().Add(30 * time.Second), // delay 30 seconds
		Args:  args,
	}
	if err := qc.Enqueue(j); err != nil {
		log.Fatal(err)
	}

	time.Sleep(35 * time.Second) // wait a while for the jobs to run
	workers.Shutdown()
*/
package que
================================================
FILE: enqueue_test.go
================================================
package que
import (
"testing"
"time"
"github.com/jackc/pgx/pgtype"
)
func TestEnqueueOnlyType(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
// check resulting job
if j.ID == 0 {
t.Errorf("want non-zero ID")
}
if want := ""; j.Queue != want {
t.Errorf("want Queue=%q, got %q", want, j.Queue)
}
if want := int16(100); j.Priority != want {
t.Errorf("want Priority=%d, got %d", want, j.Priority)
}
if j.RunAt.IsZero() {
t.Error("want non-zero RunAt")
}
if want := "MyJob"; j.Type != want {
t.Errorf("want Type=%q, got %q", want, j.Type)
}
if want, got := "[]", string(j.Args); got != want {
t.Errorf("want Args=%s, got %s", want, got)
}
if want := int32(0); j.ErrorCount != want {
t.Errorf("want ErrorCount=%d, got %d", want, j.ErrorCount)
}
if j.LastError.Status == pgtype.Present {
t.Errorf("want no LastError, got %v", j.LastError)
}
}
func TestEnqueueWithPriority(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
want := int16(99)
if err := c.Enqueue(&Job{Type: "MyJob", Priority: want}); err != nil {
t.Fatal(err)
}
j, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j.Priority != want {
t.Errorf("want Priority=%d, got %d", want, j.Priority)
}
}
func TestEnqueueWithRunAt(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
want := time.Now().Add(2 * time.Minute)
if err := c.Enqueue(&Job{Type: "MyJob", RunAt: want}); err != nil {
t.Fatal(err)
}
j, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
// truncate to the microsecond, as the Postgres driver does
want = want.Truncate(time.Microsecond)
if !want.Equal(j.RunAt) {
t.Errorf("want RunAt=%s, got %s", want, j.RunAt)
}
}
func TestEnqueueWithArgs(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
want := `{"arg1":0, "arg2":"a string"}`
if err := c.Enqueue(&Job{Type: "MyJob", Args: []byte(want)}); err != nil {
t.Fatal(err)
}
j, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if got := string(j.Args); got != want {
t.Errorf("want Args=%s, got %s", want, got)
}
}
func TestEnqueueWithQueue(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
want := "special-work-queue"
if err := c.Enqueue(&Job{Type: "MyJob", Queue: want}); err != nil {
t.Fatal(err)
}
j, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j.Queue != want {
t.Errorf("want Queue=%q, got %q", want, j.Queue)
}
}
func TestEnqueueWithEmptyType(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: ""}); err != ErrMissingType {
t.Fatalf("want ErrMissingType, got %v", err)
}
}
func TestEnqueueInTx(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
tx, err := c.pool.Begin()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
if err = c.EnqueueInTx(&Job{Type: "MyJob"}, tx); err != nil {
t.Fatal(err)
}
j, err := findOneJob(tx)
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("want job, got none")
}
if err = tx.Rollback(); err != nil {
t.Fatal(err)
}
j, err = findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j != nil {
t.Fatalf("wanted job to be rolled back, got %+v", j)
}
}
================================================
FILE: que.go
================================================
package que
import (
"errors"
"sync"
"time"
"github.com/jackc/pgx"
"github.com/jackc/pgx/pgtype"
)
// Job is a single unit of work for Que to perform.
type Job struct {
// ID is the unique database ID of the Job. It is ignored on job creation.
ID int64
// Queue is the name of the queue. It defaults to the empty queue "".
Queue string
// Priority is the priority of the Job. The default priority is 100, and a
// lower number means a higher priority. A priority of 5 would be very
// important.
Priority int16
// RunAt is the time that this job should be executed. It defaults to now(),
// meaning the job will execute immediately. Set it to a value in the future
// to delay a job's execution.
RunAt time.Time
// Type corresponds to the Ruby job_class. If you are interoperating with
// Ruby, you should pick suitable Ruby class names (such as MyJob).
Type string
// Args must be the bytes of a valid JSON string
Args []byte
// ErrorCount is the number of times this job has attempted to run, but
// failed with an error. It is ignored on job creation.
ErrorCount int32
// LastError is the error message or stack trace from the last time the job
// failed. It is ignored on job creation.
LastError pgtype.Text
mu sync.Mutex
deleted bool
pool *pgx.ConnPool
conn *pgx.Conn
}
// Conn returns the pgx connection that this job is locked to. You may initiate
// transactions on this connection or use it as you please until you call
// Done(). At that point, this conn will be returned to the pool and it is
// unsafe to keep using it. This function will return nil if the Job's
// connection has already been released with Done().
func (j *Job) Conn() *pgx.Conn {
j.mu.Lock()
defer j.mu.Unlock()
return j.conn
}
// Delete marks this job as complete by deleting it from the database.
//
// You must also later call Done() to return this job's database connection to
// the pool.
func (j *Job) Delete() error {
j.mu.Lock()
defer j.mu.Unlock()
if j.deleted {
return nil
}
_, err := j.conn.Exec("que_destroy_job", j.Queue, j.Priority, j.RunAt, j.ID)
if err != nil {
return err
}
j.deleted = true
return nil
}
// Done releases the Postgres advisory lock on the job and returns the database
// connection to the pool.
func (j *Job) Done() {
j.mu.Lock()
defer j.mu.Unlock()
if j.conn == nil || j.pool == nil {
// already marked as done
return
}
var ok bool
// Swallow this error because we don't want an unlock failure to cause work to
// stop.
_ = j.conn.QueryRow("que_unlock_job", j.ID).Scan(&ok)
j.pool.Release(j.conn)
j.pool = nil
j.conn = nil
}
// Error marks the job as failed and schedules it to be reworked. An error
// message or backtrace can be provided as msg, which will be saved on the job.
// It will also increase the error count.
//
// You must also later call Done() to return this job's database connection to
// the pool.
func (j *Job) Error(msg string) error {
errorCount := j.ErrorCount + 1
delay := intPow(int(errorCount), 4) + 3 // TODO: configurable delay
_, err := j.conn.Exec("que_set_error", errorCount, delay, msg, j.Queue, j.Priority, j.RunAt, j.ID)
if err != nil {
return err
}
return nil
}
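The delay computed above follows Que's default retry backoff of errorCount^4 + 3 seconds. A stdlib-only sketch of the resulting schedule (retryDelay is an illustrative helper, not part of the package):

```go
package main

import "fmt"

// retryDelay mirrors que's default backoff: errorCount^4 + 3 seconds.
func retryDelay(errorCount int) int {
	pow := 1
	for i := 0; i < 4; i++ {
		pow *= errorCount
	}
	return pow + 3
}

func main() {
	// The first few retry delays, in seconds: 4, 19, 84, 259, 628.
	for count := 1; count <= 5; count++ {
		fmt.Printf("after failure %d: retry in %ds\n", count, retryDelay(count))
	}
}
```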
// Client is a Que client that can add jobs to the queue and remove jobs from
// the queue.
type Client struct {
pool *pgx.ConnPool
// TODO: add a way to specify default queueing options
}
// NewClient creates a new Client that uses the pgx pool.
func NewClient(pool *pgx.ConnPool) *Client {
return &Client{pool: pool}
}
// ErrMissingType is returned when you attempt to enqueue a job with no Type
// specified.
var ErrMissingType = errors.New("job type must be specified")
// Enqueue adds a job to the queue.
func (c *Client) Enqueue(j *Job) error {
return execEnqueue(j, c.pool)
}
// EnqueueInTx adds a job to the queue within the scope of the transaction tx.
// This allows you to guarantee that an enqueued job will either be committed or
// rolled back atomically with other changes in the course of this transaction.
//
// It is the caller's responsibility to Commit or Rollback the transaction after
// this function is called.
func (c *Client) EnqueueInTx(j *Job, tx *pgx.Tx) error {
return execEnqueue(j, tx)
}
func execEnqueue(j *Job, q queryable) error {
if j.Type == "" {
return ErrMissingType
}
queue := &pgtype.Text{
String: j.Queue,
Status: pgtype.Null,
}
if j.Queue != "" {
queue.Status = pgtype.Present
}
priority := &pgtype.Int2{
Int: j.Priority,
Status: pgtype.Null,
}
if j.Priority != 0 {
priority.Status = pgtype.Present
}
runAt := &pgtype.Timestamptz{
Time: j.RunAt,
Status: pgtype.Null,
}
if !j.RunAt.IsZero() {
runAt.Status = pgtype.Present
}
args := &pgtype.Bytea{
Bytes: j.Args,
Status: pgtype.Null,
}
if len(j.Args) != 0 {
args.Status = pgtype.Present
}
_, err := q.Exec("que_insert_job", queue, priority, runAt, j.Type, args)
return err
}
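execEnqueue passes NULL for each zero-valued field so that the `coalesce` defaults in sqlInsertJob take over. The same zero-value-to-NULL pattern, sketched with a plain pointer instead of pgtype purely for illustration:

```go
package main

import "fmt"

// nullableQueue returns nil for the zero value so that, when bound as a
// query parameter, coalesce($1::text, ''::text) falls back to the default.
func nullableQueue(queue string) *string {
	if queue == "" {
		return nil
	}
	return &queue
}

func main() {
	if q := nullableQueue(""); q == nil {
		fmt.Println("zero value -> NULL -> server-side default applies")
	}
	if q := nullableQueue("email"); q != nil {
		fmt.Println("set value passed through:", *q)
	}
}
```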
type queryable interface {
Exec(sql string, arguments ...interface{}) (commandTag pgx.CommandTag, err error)
Query(sql string, args ...interface{}) (*pgx.Rows, error)
QueryRow(sql string, args ...interface{}) *pgx.Row
}
// Maximum number of loop iterations in LockJob before giving up. This is to
// avoid looping forever in case something is wrong.
const maxLockJobAttempts = 10
// ErrAgain is returned by LockJob if a job could not be retrieved from the
// queue after several attempts because of concurrently running transactions.
// This error should not be returned unless the queue is under extremely heavy
// concurrency.
var ErrAgain = errors.New("maximum number of LockJob attempts reached")
// TODO: consider an alternate Enqueue func that also returns the newly
// enqueued Job struct. The query sqlInsertJobAndReturn was already written for
// this.
// LockJob attempts to retrieve a Job from the database in the specified queue.
// If a job is found, a session-level Postgres advisory lock is created for the
// Job's ID. If no job is found, nil will be returned instead of an error.
//
// Because Que uses session-level advisory locks, we have to hold the
// same connection throughout the process of getting a job, working it,
// deleting it, and removing the lock.
//
// After the Job has been worked, you must call either Done() or Error() on it
// in order to return the database connection to the pool and remove the lock.
func (c *Client) LockJob(queue string) (*Job, error) {
conn, err := c.pool.Acquire()
if err != nil {
return nil, err
}
j := Job{pool: c.pool, conn: conn}
for i := 0; i < maxLockJobAttempts; i++ {
err = conn.QueryRow("que_lock_job", queue).Scan(
&j.Queue,
&j.Priority,
&j.RunAt,
&j.ID,
&j.Type,
&j.Args,
&j.ErrorCount,
)
if err != nil {
c.pool.Release(conn)
if err == pgx.ErrNoRows {
return nil, nil
}
return nil, err
}
// Deal with race condition. Explanation from the Ruby Que gem:
//
// Edge case: It's possible for the lock_job query to have
// grabbed a job that's already been worked, if it took its MVCC
// snapshot while the job was processing, but didn't attempt the
// advisory lock until it was finished. Since we have the lock, a
// previous worker would have deleted it by now, so we just
// double check that it still exists before working it.
//
// Note that there is currently no spec for this behavior, since
// I'm not sure how to reliably commit a transaction that deletes
// the job in a separate thread between lock_job and check_job.
var ok bool
err = conn.QueryRow("que_check_job", j.Queue, j.Priority, j.RunAt, j.ID).Scan(&ok)
if err == nil {
return &j, nil
} else if err == pgx.ErrNoRows {
// Encountered job race condition; start over from the beginning.
// We're still holding the advisory lock, though, so we need to
// release it before resuming. Otherwise we leak the lock,
// eventually causing the server to run out of locks.
//
// Also swallow the possible error, exactly like in Done.
_ = conn.QueryRow("que_unlock_job", j.ID).Scan(&ok)
continue
} else {
c.pool.Release(conn)
return nil, err
}
}
c.pool.Release(conn)
return nil, ErrAgain
}
var preparedStatements = map[string]string{
"que_check_job": sqlCheckJob,
"que_destroy_job": sqlDeleteJob,
"que_insert_job": sqlInsertJob,
"que_lock_job": sqlLockJob,
"que_set_error": sqlSetError,
"que_unlock_job": sqlUnlockJob,
}
// PrepareStatements prepares the required statements to run que on the provided
// *pgx.Conn. Typically it is used as an AfterConnect func for a
// *pgx.ConnPool. Every connection used by que must have the statements prepared
// ahead of time.
func PrepareStatements(conn *pgx.Conn) error {
return PrepareStatementsWithPreparer(conn)
}
// Preparer defines the interface for types that support preparing
// statements. This includes all of *pgx.ConnPool, *pgx.Conn, and *pgx.Tx
type Preparer interface {
Prepare(name, sql string) (*pgx.PreparedStatement, error)
}
// PrepareStatementsWithPreparer prepares the required statements to run que on
// the provided Preparer. This func can be used to prepare statements on a
// *pgx.ConnPool after it is created, or on a *pgx.Tx. Every connection used by
// que must have the statements prepared ahead of time.
func PrepareStatementsWithPreparer(p Preparer) error {
for name, sql := range preparedStatements {
if _, err := p.Prepare(name, sql); err != nil {
return err
}
}
return nil
}
================================================
FILE: que_test.go
================================================
package que
import (
"testing"
"github.com/jackc/pgx"
)
var testConnConfig = pgx.ConnConfig{
Host: "localhost",
Database: "que-go-test",
}
func openTestClientMaxConns(t testing.TB, maxConnections int) *Client {
connPoolConfig := pgx.ConnPoolConfig{
ConnConfig: testConnConfig,
MaxConnections: maxConnections,
AfterConnect: PrepareStatements,
}
pool, err := pgx.NewConnPool(connPoolConfig)
if err != nil {
t.Fatal(err)
}
return NewClient(pool)
}
func openTestClient(t testing.TB) *Client {
return openTestClientMaxConns(t, 5)
}
func truncateAndClose(pool *pgx.ConnPool) {
if _, err := pool.Exec("TRUNCATE TABLE que_jobs"); err != nil {
panic(err)
}
pool.Close()
}
func findOneJob(q queryable) (*Job, error) {
findSQL := `
SELECT priority, run_at, job_id, job_class, args, error_count, last_error, queue
FROM que_jobs LIMIT 1`
j := &Job{}
err := q.QueryRow(findSQL).Scan(
&j.Priority,
&j.RunAt,
&j.ID,
&j.Type,
&j.Args,
&j.ErrorCount,
&j.LastError,
&j.Queue,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
return j, nil
}
================================================
FILE: schema.sql
================================================
CREATE TABLE IF NOT EXISTS que_jobs
(
priority smallint NOT NULL DEFAULT 100,
run_at timestamptz NOT NULL DEFAULT now(),
job_id bigserial NOT NULL,
job_class text NOT NULL,
args json NOT NULL DEFAULT '[]'::json,
error_count integer NOT NULL DEFAULT 0,
last_error text,
queue text NOT NULL DEFAULT '',
CONSTRAINT que_jobs_pkey PRIMARY KEY (queue, priority, run_at, job_id)
);
COMMENT ON TABLE que_jobs IS '3';
================================================
FILE: sql.go
================================================
// Copyright (c) 2013 Chris Hanks
//
// MIT License
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
package que
// Thanks to RhodiumToad in #postgresql for help with the job lock CTE.
const (
sqlLockJob = `
WITH RECURSIVE jobs AS (
SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked
FROM (
SELECT j
FROM que_jobs AS j
WHERE queue = $1::text
AND run_at <= now()
ORDER BY priority, run_at, job_id
LIMIT 1
) AS t1
UNION ALL (
SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked
FROM (
SELECT (
SELECT j
FROM que_jobs AS j
WHERE queue = $1::text
AND run_at <= now()
AND (priority, run_at, job_id) > (jobs.priority, jobs.run_at, jobs.job_id)
ORDER BY priority, run_at, job_id
LIMIT 1
) AS j
FROM jobs
WHERE jobs.job_id IS NOT NULL
LIMIT 1
) AS t1
)
)
SELECT queue, priority, run_at, job_id, job_class, args, error_count
FROM jobs
WHERE locked
LIMIT 1
`
sqlUnlockJob = `
SELECT pg_advisory_unlock($1)
`
sqlCheckJob = `
SELECT true AS exists
FROM que_jobs
WHERE queue = $1::text
AND priority = $2::smallint
AND run_at = $3::timestamptz
AND job_id = $4::bigint
`
sqlSetError = `
UPDATE que_jobs
SET error_count = $1::integer,
run_at = now() + $2::bigint * '1 second'::interval,
last_error = $3::text
WHERE queue = $4::text
AND priority = $5::smallint
AND run_at = $6::timestamptz
AND job_id = $7::bigint
`
sqlInsertJob = `
INSERT INTO que_jobs
(queue, priority, run_at, job_class, args)
VALUES
(coalesce($1::text, ''::text), coalesce($2::smallint, 100::smallint), coalesce($3::timestamptz, now()::timestamptz), $4::text, coalesce($5::json, '[]'::json))
`
sqlDeleteJob = `
DELETE FROM que_jobs
WHERE queue = $1::text
AND priority = $2::smallint
AND run_at = $3::timestamptz
AND job_id = $4::bigint
`
sqlJobStats = `
SELECT queue,
job_class,
count(*) AS count,
count(locks.job_id) AS count_working,
sum((error_count > 0)::int) AS count_errored,
max(error_count) AS highest_error_count,
min(run_at) AS oldest_run_at
FROM que_jobs
LEFT JOIN (
SELECT (classid::bigint << 32) + objid::bigint AS job_id
FROM pg_locks
WHERE locktype = 'advisory'
) locks USING (job_id)
GROUP BY queue, job_class
ORDER BY count(*) DESC
`
sqlWorkerStates = `
SELECT que_jobs.*,
pg.pid AS pg_backend_pid,
pg.state AS pg_state,
pg.state_change AS pg_state_changed_at,
pg.query AS pg_last_query,
pg.query_start AS pg_last_query_started_at,
pg.xact_start AS pg_transaction_started_at,
pg.waiting AS pg_waiting_on_lock
FROM que_jobs
JOIN (
SELECT (classid::bigint << 32) + objid::bigint AS job_id, pg_stat_activity.*
FROM pg_locks
JOIN pg_stat_activity USING (pid)
WHERE locktype = 'advisory'
) pg USING (job_id)
`
)
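The `(classid::bigint << 32) + objid::bigint` expression above rebuilds the 64-bit advisory lock key from the two 32-bit halves that pg_locks exposes (classid holds the high word, objid the low word). A stdlib-only sketch of the same arithmetic (the function names are illustrative):

```go
package main

import "fmt"

// splitLockKey breaks a 64-bit advisory lock key into the two 32-bit
// halves that pg_locks exposes as classid (high word) and objid (low word).
func splitLockKey(jobID int64) (classid, objid uint32) {
	return uint32(uint64(jobID) >> 32), uint32(uint64(jobID))
}

// joinLockKey is the inverse, matching the SQL expression
// (classid::bigint << 32) + objid::bigint.
func joinLockKey(classid, objid uint32) int64 {
	return int64(uint64(classid)<<32 + uint64(objid))
}

func main() {
	jobID := int64(8589934593) // an example job_id: 2<<32 + 1
	classid, objid := splitLockKey(jobID)
	fmt.Println(classid, objid)              // 2 1
	fmt.Println(joinLockKey(classid, objid)) // 8589934593
}
```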
================================================
FILE: util.go
================================================
package que
// intPow returns x**y, the base-x exponential of y, computed by binary
// exponentiation. intPow(0, y) and any negative exponent return 0.
func intPow(x, y int) (r int) {
	if x == r || y < r { // x == 0 or y < 0
		return
	}
	r = 1
	if x == r { // 1**y == 1
		return
	}
	if x < 0 {
		x = -x
		if y&1 == 1 {
			r = -1 // an odd exponent keeps the result negative
		}
	}
	for y > 0 {
		if y&1 == 1 {
			r *= x
		}
		x *= x
		y >>= 1
	}
	return
}
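A few worked cases showing intPow's conventions (zero base or a negative exponent yields 0, an odd exponent keeps the sign of a negative base), with the function reproduced so the sketch runs standalone:

```go
package main

import "fmt"

// intPow reproduces util.go's x**y by binary exponentiation.
func intPow(x, y int) (r int) {
	if x == r || y < r { // x == 0 or y < 0
		return
	}
	r = 1
	if x == r { // 1**y == 1
		return
	}
	if x < 0 {
		x = -x
		if y&1 == 1 {
			r = -1
		}
	}
	for y > 0 {
		if y&1 == 1 {
			r *= x
		}
		x *= x
		y >>= 1
	}
	return
}

func main() {
	fmt.Println(intPow(2, 10)) // 1024
	fmt.Println(intPow(-3, 3)) // -27
	fmt.Println(intPow(5, 0))  // 1
	fmt.Println(intPow(2, -1)) // 0 (negative exponents return 0)
}
```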
================================================
FILE: work_test.go
================================================
package que
import (
"fmt"
"sync"
"testing"
"time"
"github.com/jackc/pgx"
"github.com/jackc/pgx/pgtype"
)
func TestLockJob(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j.conn == nil {
t.Fatal("want non-nil conn on locked Job")
}
if j.pool == nil {
t.Fatal("want non-nil pool on locked Job")
}
defer j.Done()
// check values of returned Job
if j.ID == 0 {
t.Errorf("want non-zero ID")
}
if want := ""; j.Queue != want {
t.Errorf("want Queue=%q, got %q", want, j.Queue)
}
if want := int16(100); j.Priority != want {
t.Errorf("want Priority=%d, got %d", want, j.Priority)
}
if j.RunAt.IsZero() {
t.Error("want non-zero RunAt")
}
if want := "MyJob"; j.Type != want {
t.Errorf("want Type=%q, got %q", want, j.Type)
}
if want, got := "[]", string(j.Args); got != want {
t.Errorf("want Args=%s, got %s", want, got)
}
if want := int32(0); j.ErrorCount != want {
t.Errorf("want ErrorCount=%d, got %d", want, j.ErrorCount)
}
if j.LastError.Status == pgtype.Present {
t.Errorf("want no LastError, got %v", j.LastError)
}
// check for advisory lock
var count int64
query := "SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint"
if err = j.pool.QueryRow(query, "advisory", j.ID).Scan(&count); err != nil {
t.Fatal(err)
}
if count != 1 {
t.Errorf("want 1 advisory lock, got %d", count)
}
// make sure conn was checked out of pool
stat := c.pool.Stat()
total, available := stat.CurrentConnections, stat.AvailableConnections
if want := total - 1; available != want {
t.Errorf("want available=%d, got %d", want, available)
}
if err = j.Delete(); err != nil {
t.Fatal(err)
}
}
func TestLockJobAlreadyLocked(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
defer j.Done()
j2, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j2 != nil {
defer j2.Done()
t.Fatalf("wanted no job, got %+v", j2)
}
}
func TestLockJobNoJob(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j != nil {
t.Errorf("want no job, got %v", j)
}
}
func TestLockJobCustomQueue(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob", Queue: "extra_priority"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j != nil {
j.Done()
t.Errorf("expected no job to be found with empty queue name, got %+v", j)
}
j, err = c.LockJob("extra_priority")
if err != nil {
t.Fatal(err)
}
defer j.Done()
if j == nil {
t.Fatal("wanted job, got none")
}
if err = j.Delete(); err != nil {
t.Fatal(err)
}
}
func TestJobConn(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
defer j.Done()
if conn := j.Conn(); conn != j.conn {
t.Errorf("want %+v, got %+v", j.conn, conn)
}
}
func TestJobConnRace(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
defer j.Done()
var wg sync.WaitGroup
wg.Add(2)
// call Conn and Done in different goroutines to make sure they are safe from
// races.
go func() {
_ = j.Conn()
wg.Done()
}()
go func() {
j.Done()
wg.Done()
}()
wg.Wait()
}
// Test the race condition in LockJob
func TestLockJobAdvisoryRace(t *testing.T) {
c := openTestClientMaxConns(t, 2)
defer truncateAndClose(c.pool)
// *pgx.ConnPool doesn't support pools of only one connection. Make sure
// the other one is busy so we know which backend will be used by LockJob
// below.
unusedConn, err := c.pool.Acquire()
if err != nil {
t.Fatal(err)
}
defer c.pool.Release(unusedConn)
// We use two jobs: the first one is concurrently deleted, and the second
// one is returned by LockJob after recovering from the race condition.
for i := 0; i < 2; i++ {
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
}
// helper functions
newConn := func() *pgx.Conn {
conn, err := pgx.Connect(testConnConfig)
if err != nil {
panic(err)
}
return conn
}
getBackendPID := func(conn *pgx.Conn) int32 {
var backendPID int32
err := conn.QueryRow(`
SELECT pg_backend_pid()
`).Scan(&backendPID)
if err != nil {
panic(err)
}
return backendPID
}
waitUntilBackendIsWaiting := func(backendPID int32, name string) {
conn := newConn()
defer conn.Close() // don't leak the polling connection
i := 0
for {
var waiting bool
err := conn.QueryRow(`SELECT wait_event is not null from pg_stat_activity where pid=$1`, backendPID).Scan(&waiting)
if err != nil {
panic(err)
}
if waiting {
break
} else {
i++
if i >= 10000/50 {
panic(fmt.Sprintf("timed out while waiting for %s", name))
}
time.Sleep(50 * time.Millisecond)
}
}
}
// Reproducing the race condition is a bit tricky. The idea is to form a
// lock queue on the relation that looks like this:
//
// AccessExclusive <- AccessShare <- AccessExclusive ( <- AccessShare )
//
// where the leftmost AccessShare lock is the one implicitly taken by the
// sqlLockJob query. Once we release the leftmost AccessExclusive lock
// without releasing the rightmost one, the session holding the rightmost
// AccessExclusiveLock can run the necessary DELETE before the sqlCheckJob
// query runs (since it'll be blocked behind the rightmost AccessExclusive
// Lock).
//
deletedJobIDChan := make(chan int64, 1)
lockJobBackendIDChan := make(chan int32)
secondAccessExclusiveBackendIDChan := make(chan int32)
go func() {
conn := newConn()
defer conn.Close()
tx, err := conn.Begin()
if err != nil {
panic(err)
}
_, err = tx.Exec(`LOCK TABLE que_jobs IN ACCESS EXCLUSIVE MODE`)
if err != nil {
panic(err)
}
// first wait for LockJob to appear behind us
backendID := <-lockJobBackendIDChan
waitUntilBackendIsWaiting(backendID, "LockJob")
// then for the AccessExclusive lock to appear behind that one
backendID = <-secondAccessExclusiveBackendIDChan
waitUntilBackendIsWaiting(backendID, "second access exclusive lock")
err = tx.Rollback()
if err != nil {
panic(err)
}
}()
go func() {
conn := newConn()
defer conn.Close()
// synchronization point
secondAccessExclusiveBackendIDChan <- getBackendPID(conn)
tx, err := conn.Begin()
if err != nil {
panic(err)
}
_, err = tx.Exec(`LOCK TABLE que_jobs IN ACCESS EXCLUSIVE MODE`)
if err != nil {
panic(err)
}
// Fake a concurrent transaction grabbing the job
var jid int64
err = tx.QueryRow(`
DELETE FROM que_jobs
WHERE job_id =
(SELECT min(job_id)
FROM que_jobs)
RETURNING job_id
`).Scan(&jid)
if err != nil {
panic(err)
}
deletedJobIDChan <- jid
err = tx.Commit()
if err != nil {
panic(err)
}
}()
conn, err := c.pool.Acquire()
if err != nil {
panic(err)
}
ourBackendID := getBackendPID(conn)
c.pool.Release(conn)
// synchronization point
lockJobBackendIDChan <- ourBackendID
job, err := c.LockJob("")
if err != nil {
panic(err)
}
defer job.Done()
deletedJobID := <-deletedJobIDChan
t.Logf("Got id %d", job.ID)
t.Logf("Concurrently deleted id %d", deletedJobID)
if deletedJobID >= job.ID {
t.Fatalf("deleted job id %d must be smaller than job.ID %d", deletedJobID, job.ID)
}
}
func TestJobDelete(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
defer j.Done()
if err = j.Delete(); err != nil {
t.Fatal(err)
}
// make sure job was deleted
j2, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j2 != nil {
t.Errorf("job was not deleted: %+v", j2)
}
}
func TestJobDone(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
j.Done()
// make sure conn and pool were cleared
if j.conn != nil {
t.Errorf("want nil conn, got %+v", j.conn)
}
if j.pool != nil {
t.Errorf("want nil pool, got %+v", j.pool)
}
// make sure lock was released
var count int64
query := "SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint"
if err = c.pool.QueryRow(query, "advisory", j.ID).Scan(&count); err != nil {
t.Fatal(err)
}
if count != 0 {
t.Error("advisory lock was not released")
}
// make sure conn was returned to pool
stat := c.pool.Stat()
total, available := stat.CurrentConnections, stat.AvailableConnections
if total != available {
t.Errorf("want available=total, got available=%d total=%d", available, total)
}
}
func TestJobDoneMultiple(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
j.Done()
// try calling Done() again
j.Done()
}
func TestJobDeleteFromTx(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
// get the job's database connection
conn := j.Conn()
if conn == nil {
t.Fatal("wanted conn, got nil")
}
// start a transaction
tx, err := conn.Begin()
if err != nil {
t.Fatal(err)
}
// delete the job
if err = j.Delete(); err != nil {
t.Fatal(err)
}
if err = tx.Commit(); err != nil {
t.Fatal(err)
}
// mark as done
j.Done()
// make sure the job is gone
j2, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j2 != nil {
t.Errorf("wanted no job, got %+v", j2)
}
}
func TestJobDeleteFromTxRollback(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j1, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j1 == nil {
t.Fatal("wanted job, got none")
}
// get the job's database connection
conn := j1.Conn()
if conn == nil {
t.Fatal("wanted conn, got nil")
}
// start a transaction
tx, err := conn.Begin()
if err != nil {
t.Fatal(err)
}
// delete the job
if err = j1.Delete(); err != nil {
t.Fatal(err)
}
if err = tx.Rollback(); err != nil {
t.Fatal(err)
}
// mark as done
j1.Done()
// make sure the job still exists and matches j1
j2, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j1.ID != j2.ID {
t.Errorf("want job %d, got %d", j1.ID, j2.ID)
}
}
func TestJobError(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
j, err := c.LockJob("")
if err != nil {
t.Fatal(err)
}
if j == nil {
t.Fatal("wanted job, got none")
}
defer j.Done()
msg := "world\nended"
if err = j.Error(msg); err != nil {
t.Fatal(err)
}
j.Done()
// make sure job was not deleted
j2, err := findOneJob(c.pool)
if err != nil {
t.Fatal(err)
}
if j2 == nil {
t.Fatal("job was not found")
}
defer j2.Done()
if j2.LastError.Status == pgtype.Null || j2.LastError.String != msg {
t.Errorf("want LastError=%q, got %q", msg, j2.LastError.String)
}
if j2.ErrorCount != 1 {
t.Errorf("want ErrorCount=%d, got %d", 1, j2.ErrorCount)
}
// make sure lock was released
var count int64
query := "SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint"
if err = c.pool.QueryRow(query, "advisory", j.ID).Scan(&count); err != nil {
t.Fatal(err)
}
if count != 0 {
t.Error("advisory lock was not released")
}
// make sure conn was returned to pool
stat := c.pool.Stat()
total, available := stat.CurrentConnections, stat.AvailableConnections
if total != available {
t.Errorf("want available=total, got available=%d total=%d", available, total)
}
}
================================================
FILE: worker.go
================================================
package que
import (
"bytes"
"fmt"
"log"
"os"
"runtime"
"strconv"
"sync"
"time"
)
// WorkFunc is a function that performs a Job. If an error is returned, the job
// is reenqueued with exponential backoff.
type WorkFunc func(j *Job) error
// WorkMap is a map of Job names to WorkFuncs that are used to perform Jobs of a
// given type.
type WorkMap map[string]WorkFunc
// Worker is a single worker that pulls jobs off the specified Queue. If no Job
// is found, the Worker will sleep for the duration of Interval before trying
// again.
type Worker struct {
// Interval is the amount of time that this Worker should sleep before trying
// to find another Job.
Interval time.Duration
// Queue is the name of the queue to pull Jobs off of. The default value, "",
// is usable and is the default for both que-go and the ruby que library.
Queue string
c *Client
m WorkMap
mu sync.Mutex
done bool
ch chan struct{}
}
var defaultWakeInterval = 5 * time.Second
func init() {
if v := os.Getenv("QUE_WAKE_INTERVAL"); v != "" {
if newInt, err := strconv.Atoi(v); err == nil {
defaultWakeInterval = time.Duration(newInt) * time.Second
}
}
}
// NewWorker returns a Worker that fetches Jobs from the Client and executes
// them using WorkMap. If the type of Job is not registered in the WorkMap, it's
// considered an error and the job is re-enqueued with a backoff.
//
// Workers default to an Interval of 5 seconds, which can be overridden by
// setting the environment variable QUE_WAKE_INTERVAL. The default Queue is the
// nameless queue "", which can be overridden by setting QUE_QUEUE. Either of
// these settings can be changed on the returned Worker before it is started
// with Work().
func NewWorker(c *Client, m WorkMap) *Worker {
return &Worker{
Interval: defaultWakeInterval,
Queue: os.Getenv("QUE_QUEUE"),
c: c,
m: m,
ch: make(chan struct{}),
}
}
// Work pulls jobs off the Worker's Queue at its Interval. This function only
// returns after Shutdown() is called, so it should be run in its own goroutine.
func (w *Worker) Work() {
defer log.Println("worker done")
for {
// Try to work a job
if w.WorkOne() {
// Since we just did work, non-blocking check whether we should exit
select {
case <-w.ch:
return
default:
// continue in loop
}
} else {
// No work found, block until exit or timer expires
select {
case <-w.ch:
return
case <-time.After(w.Interval):
// continue in loop
}
}
}
}
func (w *Worker) WorkOne() (didWork bool) {
j, err := w.c.LockJob(w.Queue)
if err != nil {
log.Printf("attempting to lock job: %v", err)
return
}
if j == nil {
return // no job was available
}
defer j.Done()
defer recoverPanic(j)
didWork = true
wf, ok := w.m[j.Type]
if !ok {
msg := fmt.Sprintf("unknown job type: %q", j.Type)
log.Println(msg)
if err = j.Error(msg); err != nil {
log.Printf("attempting to save error on job %d: %v", j.ID, err)
}
return
}
if err = wf(j); err != nil {
j.Error(err.Error())
return
}
if err = j.Delete(); err != nil {
log.Printf("attempting to delete job %d: %v", j.ID, err)
}
log.Printf("event=job_worked job_id=%d job_type=%s", j.ID, j.Type)
return
}
// Shutdown tells the worker to finish processing its current job and then stop.
// There is currently no timeout for in-progress jobs. This function blocks
// until the Worker has stopped working. It should only be called on an active
// Worker.
func (w *Worker) Shutdown() {
w.mu.Lock()
defer w.mu.Unlock()
if w.done {
return
}
log.Println("worker shutting down gracefully...")
w.ch <- struct{}{}
w.done = true
close(w.ch)
}
// recoverPanic tries to handle panics in job execution.
// A stacktrace is stored into Job last_error.
func recoverPanic(j *Job) {
if r := recover(); r != nil {
// record an error on the job with panic message and stacktrace
stackBuf := make([]byte, 1024)
n := runtime.Stack(stackBuf, false)
buf := &bytes.Buffer{}
fmt.Fprintf(buf, "%v\n", r)
fmt.Fprintln(buf, string(stackBuf[:n]))
fmt.Fprintln(buf, "[...]")
stacktrace := buf.String()
log.Printf("event=panic job_id=%d job_type=%s\n%s", j.ID, j.Type, stacktrace)
if err := j.Error(stacktrace); err != nil {
log.Printf("attempting to save error on job %d: %v", j.ID, err)
}
}
}
// WorkerPool is a pool of Workers, each working jobs from the queue Queue
// at the specified Interval using the WorkMap.
type WorkerPool struct {
WorkMap WorkMap
Interval time.Duration
Queue string
c *Client
workers []*Worker
mu sync.Mutex
done bool
}
// NewWorkerPool creates a new WorkerPool with count workers using the Client c.
func NewWorkerPool(c *Client, wm WorkMap, count int) *WorkerPool {
return &WorkerPool{
c: c,
WorkMap: wm,
Interval: defaultWakeInterval,
workers: make([]*Worker, count),
}
}
// Start starts all of the Workers in the WorkerPool.
func (w *WorkerPool) Start() {
w.mu.Lock()
defer w.mu.Unlock()
for i := range w.workers {
w.workers[i] = NewWorker(w.c, w.WorkMap)
w.workers[i].Interval = w.Interval
w.workers[i].Queue = w.Queue
go w.workers[i].Work()
}
}
// Shutdown sends a Shutdown signal to each of the Workers in the WorkerPool and
// waits for them all to finish shutting down.
func (w *WorkerPool) Shutdown() {
w.mu.Lock()
defer w.mu.Unlock()
if w.done {
return
}
var wg sync.WaitGroup
wg.Add(len(w.workers))
for _, worker := range w.workers {
go func(worker *Worker) {
// If Shutdown is called before Start has been called,
// then these are nil, so don't try to close them
if worker != nil {
worker.Shutdown()
}
wg.Done()
}(worker)
}
wg.Wait()
w.done = true
}
================================================
FILE: worker_test.go
================================================
package que
import (
"fmt"
"io/ioutil"
"log"
"os"
"strings"
"testing"
"github.com/jackc/pgx/pgtype"
)
func init() {
log.SetOutput(ioutil.Discard)
}
func TestWorkerWorkOne(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
success := false
wm := WorkMap{
"MyJob": func(j *Job) error {
success = true
return nil
},
}
w := NewWorker(c, wm)
didWork := w.WorkOne()
if didWork {
t.Errorf("want didWork=false when no job was queued")
}
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
didWork = w.WorkOne()
if !didWork {
t.Errorf("want didWork=true")
}
if !success {
t.Errorf("want success=true")
}
}
func TestWorkerShutdown(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
w := NewWorker(c, WorkMap{})
finished := false
go func() {
w.Work()
finished = true
}()
w.Shutdown()
if !finished {
t.Errorf("want finished=true")
}
if !w.done {
t.Errorf("want w.done=true")
}
}
func BenchmarkWorker(b *testing.B) {
c := openTestClient(b)
log.SetOutput(ioutil.Discard)
defer func() {
log.SetOutput(os.Stdout)
}()
defer truncateAndClose(c.pool)
w := NewWorker(c, WorkMap{"Nil": nilWorker})
for i := 0; i < b.N; i++ {
if err := c.Enqueue(&Job{Type: "Nil"}); err != nil {
log.Fatal(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
w.WorkOne()
}
}
func nilWorker(j *Job) error {
return nil
}
func TestWorkerWorkReturnsError(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
called := 0
wm := WorkMap{
"MyJob": func(j *Job) error {
called++
return fmt.Errorf("the error msg")
},
}
w := NewWorker(c, wm)
didWork := w.WorkOne()
if didWork {
t.Errorf("want didWork=false when no job was queued")
}
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
didWork = w.WorkOne()
if !didWork {
t.Errorf("want didWork=true")
}
if called != 1 {
t.Errorf("want called=1 was: %d", called)
}
tx, err := c.pool.Begin()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
j, err := findOneJob(tx)
if err != nil {
t.Fatal(err)
}
if j.ErrorCount != 1 {
t.Errorf("want ErrorCount=1 was %d", j.ErrorCount)
}
if j.LastError.Status == pgtype.Null {
t.Errorf("want LastError IS NOT NULL")
}
if j.LastError.String != "the error msg" {
t.Errorf("want LastError=\"the error msg\" was: %q", j.LastError.String)
}
}
func TestWorkerWorkRescuesPanic(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
called := 0
wm := WorkMap{
"MyJob": func(j *Job) error {
called++
panic("the panic msg")
return nil
},
}
w := NewWorker(c, wm)
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
w.WorkOne()
if called != 1 {
t.Errorf("want called=1 was: %d", called)
}
tx, err := c.pool.Begin()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
j, err := findOneJob(tx)
if err != nil {
t.Fatal(err)
}
if j.ErrorCount != 1 {
t.Errorf("want ErrorCount=1 was %d", j.ErrorCount)
}
if j.LastError.Status == pgtype.Null {
t.Errorf("want LastError IS NOT NULL")
}
if !strings.Contains(j.LastError.String, "the panic msg\n") {
t.Errorf("want LastError contains \"the panic msg\\n\" was: %q", j.LastError.String)
}
// basic check if a stacktrace is there - not the stacktrace format itself
if !strings.Contains(j.LastError.String, "worker.go:") {
t.Errorf("want LastError contains \"worker.go:\" was: %q", j.LastError.String)
}
if !strings.Contains(j.LastError.String, "worker_test.go:") {
t.Errorf("want LastError contains \"worker_test.go:\" was: %q", j.LastError.String)
}
}
func TestWorkerWorkOneTypeNotInMap(t *testing.T) {
c := openTestClient(t)
defer truncateAndClose(c.pool)
currentConns := c.pool.Stat().CurrentConnections
availConns := c.pool.Stat().AvailableConnections
success := false
wm := WorkMap{}
w := NewWorker(c, wm)
didWork := w.WorkOne()
if didWork {
t.Errorf("want didWork=false when no job was queued")
}
if err := c.Enqueue(&Job{Type: "MyJob"}); err != nil {
t.Fatal(err)
}
didWork = w.WorkOne()
if !didWork {
t.Errorf("want didWork=true")
}
if success {
t.Errorf("want success=false")
}
if currentConns != c.pool.Stat().CurrentConnections {
t.Errorf("want currentConns equal: before=%d after=%d", currentConns, c.pool.Stat().CurrentConnections)
}
if availConns != c.pool.Stat().AvailableConnections {
t.Errorf("want availConns equal: before=%d after=%d", availConns, c.pool.Stat().AvailableConnections)
}
tx, err := c.pool.Begin()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
j, err := findOneJob(tx)
if err != nil {
t.Fatal(err)
}
if j.ErrorCount != 1 {
t.Errorf("want ErrorCount=1 was %d", j.ErrorCount)
}
if j.LastError.Status == pgtype.Null {
t.Fatal("want non-nil LastError")
}
if want := "unknown job type: \"MyJob\""; j.LastError.String != want {
t.Errorf("want LastError=%q, got %q", want, j.LastError.String)
}
}