[
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2014 Blake Gentry\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n"
  },
  {
    "path": "README.md",
    "content": "# que-go\n\n[![GoDoc](https://godoc.org/github.com/bgentry/que-go?status.svg)][godoc]\n\n## Unmaintained\n\n⚠️ **que-go is unmaintained** ⚠️. Please check out [River](https://riverqueue.com) for a fast, reliable Postgres job queue in Go.\n\n## Overview\n\nQue-go is a fully interoperable Golang port of [Chris Hanks][chanks]' [Ruby Que\nqueuing library][ruby-que] for PostgreSQL. Que uses PostgreSQL's advisory locks\nfor speed and reliability.\n\nBecause que-go is an interoperable port of Que, you can enqueue jobs in Ruby\n(i.e. from a Rails app) and write your workers in Go. Or if you have a limited\nset of jobs that you want to write in Go, you can leave most of your workers in\nRuby and just add a few Go workers on a different queue name. Or you can just\nwrite everything in Go :)\n\n## pgx PostgreSQL driver\n\nThis package uses the [pgx][pgx] Go PostgreSQL driver rather than the more\npopular [pq][pq]. Because Que uses session-level advisory locks, we have to hold\nthe same connection throughout the process of getting a job, working it,\ndeleting it, and removing the lock.\n\nPq and the built-in database/sql interfaces do not offer this functionality, so\nwe'd have to implement our own connection pool. Fortunately, pgx already has a\nperfectly usable one built for us. Even better, it offers better performance\nthan pq due largely to its use of binary encoding.\n\nPlease see the [godocs][godoc] for more info and examples.\n\n[godoc]: https://godoc.org/github.com/bgentry/que-go\n[chanks]: https://github.com/chanks\n[ruby-que]: https://github.com/chanks/que\n[pgx]: https://github.com/jackc/pgx\n[pq]: https://github.com/lib/pq\n"
  },
  {
    "path": "doc.go",
    "content": "/*\nPackage que-go is a fully interoperable Golang port of Chris Hanks' Ruby Que\nqueueing library for PostgreSQL. Que uses PostgreSQL's advisory locks\nfor speed and reliability. See the original Que documentation for more details:\nhttps://github.com/chanks/que\n\nBecause que-go is an interoperable port of Que, you can enqueue jobs in Ruby\n(i.e. from a Rails app) and write your workers in Go. Or if you have a limited\nset of jobs that you want to write in Go, you can leave most of your workers in\nRuby and just add a few Go workers on a different queue name.\n\nPostgreSQL Driver pgx\n\nInstead of using database/sql and the more popular pq PostgreSQL driver, this\npackage uses the pgx driver: https://github.com/jackc/pgx\n\nBecause Que uses session-level advisory locks, we have to hold the same\nconnection throughout the process of getting a job, working it, deleting it, and\nremoving the lock.\n\nPq and the built-in database/sql interfaces do not offer this functionality, so\nwe'd have to implement our own connection pool. Fortunately, pgx already has a\nperfectly usable one built for us. Even better, it offers better performance\nthan pq due largely to its use of binary encoding.\n\nPrepared Statements\n\nque-go relies on prepared statements for performance. 
As of now, these have to\nbe initialized manually on your connection pool like so:\n\n    pgxpool, err := pgx.NewConnPool(pgx.ConnPoolConfig{\n        ConnConfig:   pgxcfg,\n        AfterConnect: que.PrepareStatements,\n    })\n\nIf you have suggestions on how to cleanly do this automatically, please open an\nissue!\n\nUsage\n\nHere is a complete example showing worker setup and two jobs enqueued, one with a delay:\n\n    type printNameArgs struct {\n        Name string\n    }\n\n    printName := func(j *que.Job) error {\n        var args printNameArgs\n        if err := json.Unmarshal(j.Args, &args); err != nil {\n            return err\n        }\n        fmt.Printf(\"Hello %s!\\n\", args.Name)\n        return nil\n    }\n\n    pgxcfg, err := pgx.ParseURI(os.Getenv(\"DATABASE_URL\"))\n    if err != nil {\n        log.Fatal(err)\n    }\n\n    pgxpool, err := pgx.NewConnPool(pgx.ConnPoolConfig{\n        ConnConfig:   pgxcfg,\n        AfterConnect: que.PrepareStatements,\n    })\n    if err != nil {\n        log.Fatal(err)\n    }\n    defer pgxpool.Close()\n\n    qc := que.NewClient(pgxpool)\n    wm := que.WorkMap{\n        \"PrintName\": printName,\n    }\n    workers := que.NewWorkerPool(qc, wm, 2) // create a pool w/ 2 workers\n    go workers.Start() // work jobs in another goroutine\n\n    args, err := json.Marshal(printNameArgs{Name: \"bgentry\"})\n    if err != nil {\n        log.Fatal(err)\n    }\n\n    j := &que.Job{\n        Type:  \"PrintName\",\n        Args:  args,\n    }\n    if err := qc.Enqueue(j); err != nil {\n        log.Fatal(err)\n    }\n\n    j = &que.Job{\n        Type:  \"PrintName\",\n        RunAt: time.Now().UTC().Add(30 * time.Second), // delay 30 seconds\n        Args:  args,\n    }\n    if err := qc.Enqueue(j); err != nil {\n        log.Fatal(err)\n    }\n\n    time.Sleep(35 * time.Second) // wait for a while\n\n    workers.Shutdown()\n\n*/\npackage que\n"
  },
  {
    "path": "enqueue_test.go",
    "content": "package que\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/jackc/pgx/pgtype\"\n)\n\nfunc TestEnqueueOnlyType(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// check resulting job\n\tif j.ID == 0 {\n\t\tt.Errorf(\"want non-zero ID\")\n\t}\n\tif want := \"\"; j.Queue != want {\n\t\tt.Errorf(\"want Queue=%q, got %q\", want, j.Queue)\n\t}\n\tif want := int16(100); j.Priority != want {\n\t\tt.Errorf(\"want Priority=%d, got %d\", want, j.Priority)\n\t}\n\tif j.RunAt.IsZero() {\n\t\tt.Error(\"want non-zero RunAt\")\n\t}\n\tif want := \"MyJob\"; j.Type != want {\n\t\tt.Errorf(\"want Type=%q, got %q\", want, j.Type)\n\t}\n\tif want, got := \"[]\", string(j.Args); got != want {\n\t\tt.Errorf(\"want Args=%s, got %s\", want, got)\n\t}\n\tif want := int32(0); j.ErrorCount != want {\n\t\tt.Errorf(\"want ErrorCount=%d, got %d\", want, j.ErrorCount)\n\t}\n\tif j.LastError.Status == pgtype.Present {\n\t\tt.Errorf(\"want no LastError, got %v\", j.LastError)\n\t}\n}\n\nfunc TestEnqueueWithPriority(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\twant := int16(99)\n\tif err := c.Enqueue(&Job{Type: \"MyJob\", Priority: want}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif j.Priority != want {\n\t\tt.Errorf(\"want Priority=%d, got %d\", want, j.Priority)\n\t}\n}\n\nfunc TestEnqueueWithRunAt(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\twant := time.Now().Add(2 * time.Minute)\n\tif err := c.Enqueue(&Job{Type: \"MyJob\", RunAt: want}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// truncate to the microsecond as postgres driver does\n\twant = 
want.Truncate(time.Microsecond)\n\tif !want.Equal(j.RunAt) {\n\t\tt.Errorf(\"want RunAt=%s, got %s\", want, j.RunAt)\n\t}\n}\n\nfunc TestEnqueueWithArgs(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\twant := `{\"arg1\":0, \"arg2\":\"a string\"}`\n\tif err := c.Enqueue(&Job{Type: \"MyJob\", Args: []byte(want)}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif got := string(j.Args); got != want {\n\t\tt.Errorf(\"want Args=%s, got %s\", want, got)\n\t}\n}\n\nfunc TestEnqueueWithQueue(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\twant := \"special-work-queue\"\n\tif err := c.Enqueue(&Job{Type: \"MyJob\", Queue: want}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif j.Queue != want {\n\t\tt.Errorf(\"want Queue=%q, got %q\", want, j.Queue)\n\t}\n}\n\nfunc TestEnqueueWithEmptyType(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"\"}); err != ErrMissingType {\n\t\tt.Fatalf(\"want ErrMissingType, got %v\", err)\n\t}\n}\n\nfunc TestEnqueueInTx(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\ttx, err := c.pool.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer tx.Rollback()\n\n\tif err = c.EnqueueInTx(&Job{Type: \"MyJob\"}, tx); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := findOneJob(tx)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"want job, got none\")\n\t}\n\n\tif err = tx.Rollback(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err = findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j != nil {\n\t\tt.Fatalf(\"wanted job to be rolled back, got %+v\", j)\n\t}\n}\n"
  },
  {
    "path": "que.go",
    "content": "package que\n\nimport (\n\t\"errors\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/jackc/pgx\"\n\t\"github.com/jackc/pgx/pgtype\"\n)\n\n// Job is a single unit of work for Que to perform.\ntype Job struct {\n\t// ID is the unique database ID of the Job. It is ignored on job creation.\n\tID int64\n\n\t// Queue is the name of the queue. It defaults to the empty queue \"\".\n\tQueue string\n\n\t// Priority is the priority of the Job. The default priority is 100, and a\n\t// lower number means a higher priority. A priority of 5 would be very\n\t// important.\n\tPriority int16\n\n\t// RunAt is the time that this job should be executed. It defaults to now(),\n\t// meaning the job will execute immediately. Set it to a value in the future\n\t// to delay a job's execution.\n\tRunAt time.Time\n\n\t// Type corresponds to the Ruby job_class. If you are interoperating with\n\t// Ruby, you should pick suitable Ruby class names (such as MyJob).\n\tType string\n\n\t// Args must be the bytes of a valid JSON string\n\tArgs []byte\n\n\t// ErrorCount is the number of times this job has attempted to run, but\n\t// failed with an error. It is ignored on job creation.\n\tErrorCount int32\n\n\t// LastError is the error message or stack trace from the last time the job\n\t// failed. It is ignored on job creation.\n\tLastError pgtype.Text\n\n\tmu      sync.Mutex\n\tdeleted bool\n\tpool    *pgx.ConnPool\n\tconn    *pgx.Conn\n}\n\n// Conn returns the pgx connection that this job is locked to. You may initiate\n// transactions on this connection or use it as you please until you call\n// Done(). At that point, this conn will be returned to the pool and it is\n// unsafe to keep using it. 
This function will return nil if the Job's\n// connection has already been released with Done().\nfunc (j *Job) Conn() *pgx.Conn {\n\tj.mu.Lock()\n\tdefer j.mu.Unlock()\n\n\treturn j.conn\n}\n\n// Delete marks this job as complete by deleting it from the database.\n//\n// You must also later call Done() to return this job's database connection to\n// the pool.\nfunc (j *Job) Delete() error {\n\tj.mu.Lock()\n\tdefer j.mu.Unlock()\n\n\tif j.deleted {\n\t\treturn nil\n\t}\n\n\t_, err := j.conn.Exec(\"que_destroy_job\", j.Queue, j.Priority, j.RunAt, j.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tj.deleted = true\n\treturn nil\n}\n\n// Done releases the Postgres advisory lock on the job and returns the database\n// connection to the pool.\nfunc (j *Job) Done() {\n\tj.mu.Lock()\n\tdefer j.mu.Unlock()\n\n\tif j.conn == nil || j.pool == nil {\n\t\t// already marked as done\n\t\treturn\n\t}\n\n\tvar ok bool\n\t// Swallow this error because we don't want an unlock failure to cause work to\n\t// stop.\n\t_ = j.conn.QueryRow(\"que_unlock_job\", j.ID).Scan(&ok)\n\n\tj.pool.Release(j.conn)\n\tj.pool = nil\n\tj.conn = nil\n}\n\n// Error marks the job as failed and schedules it to be reworked. 
An error\n// message or backtrace can be provided as msg, which will be saved on the job.\n// It will also increase the error count.\n//\n// You must also later call Done() to return this job's database connection to\n// the pool.\nfunc (j *Job) Error(msg string) error {\n\terrorCount := j.ErrorCount + 1\n\tdelay := intPow(int(errorCount), 4) + 3 // TODO: configurable delay\n\n\t_, err := j.conn.Exec(\"que_set_error\", errorCount, delay, msg, j.Queue, j.Priority, j.RunAt, j.ID)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn nil\n}\n\n// Client is a Que client that can add jobs to the queue and remove jobs from\n// the queue.\ntype Client struct {\n\tpool *pgx.ConnPool\n\n\t// TODO: add a way to specify default queueing options\n}\n\n// NewClient creates a new Client that uses the pgx pool.\nfunc NewClient(pool *pgx.ConnPool) *Client {\n\treturn &Client{pool: pool}\n}\n\n// ErrMissingType is returned when you attempt to enqueue a job with no Type\n// specified.\nvar ErrMissingType = errors.New(\"job type must be specified\")\n\n// Enqueue adds a job to the queue.\nfunc (c *Client) Enqueue(j *Job) error {\n\treturn execEnqueue(j, c.pool)\n}\n\n// EnqueueInTx adds a job to the queue within the scope of the transaction tx.\n// This allows you to guarantee that an enqueued job will either be committed or\n// rolled back atomically with other changes in the course of this transaction.\n//\n// It is the caller's responsibility to Commit or Rollback the transaction after\n// this function is called.\nfunc (c *Client) EnqueueInTx(j *Job, tx *pgx.Tx) error {\n\treturn execEnqueue(j, tx)\n}\n\nfunc execEnqueue(j *Job, q queryable) error {\n\tif j.Type == \"\" {\n\t\treturn ErrMissingType\n\t}\n\n\tqueue := &pgtype.Text{\n\t\tString: j.Queue,\n\t\tStatus: pgtype.Null,\n\t}\n\tif j.Queue != \"\" {\n\t\tqueue.Status = pgtype.Present\n\t}\n\n\tpriority := &pgtype.Int2{\n\t\tInt:    j.Priority,\n\t\tStatus: pgtype.Null,\n\t}\n\tif j.Priority != 0 {\n\t\tpriority.Status = 
pgtype.Present\n\t}\n\n\trunAt := &pgtype.Timestamptz{\n\t\tTime:   j.RunAt,\n\t\tStatus: pgtype.Null,\n\t}\n\tif !j.RunAt.IsZero() {\n\t\trunAt.Status = pgtype.Present\n\t}\n\n\targs := &pgtype.Bytea{\n\t\tBytes:  j.Args,\n\t\tStatus: pgtype.Null,\n\t}\n\tif len(j.Args) != 0 {\n\t\targs.Status = pgtype.Present\n\t}\n\n\t_, err := q.Exec(\"que_insert_job\", queue, priority, runAt, j.Type, args)\n\treturn err\n}\n\ntype queryable interface {\n\tExec(sql string, arguments ...interface{}) (commandTag pgx.CommandTag, err error)\n\tQuery(sql string, args ...interface{}) (*pgx.Rows, error)\n\tQueryRow(sql string, args ...interface{}) *pgx.Row\n}\n\n// Maximum number of loop iterations in LockJob before giving up.  This is to\n// avoid looping forever in case something is wrong.\nconst maxLockJobAttempts = 10\n\n// Returned by LockJob if a job could not be retrieved from the queue after\n// several attempts because of concurrently running transactions.  This error\n// should not be returned unless the queue is under extremely heavy\n// concurrency.\nvar ErrAgain = errors.New(\"maximum number of LockJob attempts reached\")\n\n// TODO: consider an alternate Enqueue func that also returns the newly\n// enqueued Job struct. The query sqlInsertJobAndReturn was already written for\n// this.\n\n// LockJob attempts to retrieve a Job from the database in the specified queue.\n// If a job is found, a session-level Postgres advisory lock is created for the\n// Job's ID. 
If no job is found, nil will be returned instead of an error.\n//\n// Because Que uses session-level advisory locks, we have to hold the\n// same connection throughout the process of getting a job, working it,\n// deleting it, and removing the lock.\n//\n// After the Job has been worked, you must call either Done() or Error() on it\n// in order to return the database connection to the pool and remove the lock.\nfunc (c *Client) LockJob(queue string) (*Job, error) {\n\tconn, err := c.pool.Acquire()\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tj := Job{pool: c.pool, conn: conn}\n\n\tfor i := 0; i < maxLockJobAttempts; i++ {\n\t\terr = conn.QueryRow(\"que_lock_job\", queue).Scan(\n\t\t\t&j.Queue,\n\t\t\t&j.Priority,\n\t\t\t&j.RunAt,\n\t\t\t&j.ID,\n\t\t\t&j.Type,\n\t\t\t&j.Args,\n\t\t\t&j.ErrorCount,\n\t\t)\n\t\tif err != nil {\n\t\t\tc.pool.Release(conn)\n\t\t\tif err == pgx.ErrNoRows {\n\t\t\t\treturn nil, nil\n\t\t\t}\n\t\t\treturn nil, err\n\t\t}\n\n\t\t// Deal with race condition. Explanation from the Ruby Que gem:\n\t\t//\n\t\t// Edge case: It's possible for the lock_job query to have\n\t\t// grabbed a job that's already been worked, if it took its MVCC\n\t\t// snapshot while the job was processing, but didn't attempt the\n\t\t// advisory lock until it was finished. 
Since we have the lock, a\n\t\t// previous worker would have deleted it by now, so we just\n\t\t// double check that it still exists before working it.\n\t\t//\n\t\t// Note that there is currently no spec for this behavior, since\n\t\t// I'm not sure how to reliably commit a transaction that deletes\n\t\t// the job in a separate thread between lock_job and check_job.\n\t\tvar ok bool\n\t\terr = conn.QueryRow(\"que_check_job\", j.Queue, j.Priority, j.RunAt, j.ID).Scan(&ok)\n\t\tif err == nil {\n\t\t\treturn &j, nil\n\t\t} else if err == pgx.ErrNoRows {\n\t\t\t// Encountered job race condition; start over from the beginning.\n\t\t\t// We're still holding the advisory lock, though, so we need to\n\t\t\t// release it before resuming.  Otherwise we leak the lock,\n\t\t\t// eventually causing the server to run out of locks.\n\t\t\t//\n\t\t\t// Also swallow the possible error, exactly like in Done.\n\t\t\t_ = conn.QueryRow(\"que_unlock_job\", j.ID).Scan(&ok)\n\t\t\tcontinue\n\t\t} else {\n\t\t\tc.pool.Release(conn)\n\t\t\treturn nil, err\n\t\t}\n\t}\n\tc.pool.Release(conn)\n\treturn nil, ErrAgain\n}\n\nvar preparedStatements = map[string]string{\n\t\"que_check_job\":   sqlCheckJob,\n\t\"que_destroy_job\": sqlDeleteJob,\n\t\"que_insert_job\":  sqlInsertJob,\n\t\"que_lock_job\":    sqlLockJob,\n\t\"que_set_error\":   sqlSetError,\n\t\"que_unlock_job\":  sqlUnlockJob,\n}\n\n// PrepareStatements prepares the required statements to run que on the provided\n// *pgx.Conn. Typically it is used as an AfterConnect func for a\n// *pgx.ConnPool. Every connection used by que must have the statements prepared\n// ahead of time.\nfunc PrepareStatements(conn *pgx.Conn) error {\n\treturn PrepareStatementsWithPreparer(conn)\n}\n\n// Preparer defines the interface for types that support preparing\n// statements. 
This includes all of *pgx.ConnPool, *pgx.Conn, and *pgx.Tx.\ntype Preparer interface {\n\tPrepare(name, sql string) (*pgx.PreparedStatement, error)\n}\n\n// PrepareStatementsWithPreparer prepares the required statements to run que on\n// the provided Preparer. This func can be used to prepare statements on a\n// *pgx.ConnPool after it is created, or on a *pgx.Tx. Every connection used by\n// que must have the statements prepared ahead of time.\nfunc PrepareStatementsWithPreparer(p Preparer) error {\n\tfor name, sql := range preparedStatements {\n\t\tif _, err := p.Prepare(name, sql); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n"
  },
  {
    "path": "que_test.go",
    "content": "package que\n\nimport (\n\t\"testing\"\n\n\t\"github.com/jackc/pgx\"\n)\n\nvar testConnConfig = pgx.ConnConfig{\n\tHost:     \"localhost\",\n\tDatabase: \"que-go-test\",\n}\n\nfunc openTestClientMaxConns(t testing.TB, maxConnections int) *Client {\n\tconnPoolConfig := pgx.ConnPoolConfig{\n\t\tConnConfig:     testConnConfig,\n\t\tMaxConnections: maxConnections,\n\t\tAfterConnect:   PrepareStatements,\n\t}\n\tpool, err := pgx.NewConnPool(connPoolConfig)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\treturn NewClient(pool)\n}\n\nfunc openTestClient(t testing.TB) *Client {\n\treturn openTestClientMaxConns(t, 5)\n}\n\nfunc truncateAndClose(pool *pgx.ConnPool) {\n\tif _, err := pool.Exec(\"TRUNCATE TABLE que_jobs\"); err != nil {\n\t\tpanic(err)\n\t}\n\tpool.Close()\n}\n\nfunc findOneJob(q queryable) (*Job, error) {\n\tfindSQL := `\n\tSELECT priority, run_at, job_id, job_class, args, error_count, last_error, queue\n\tFROM que_jobs LIMIT 1`\n\n\tj := &Job{}\n\terr := q.QueryRow(findSQL).Scan(\n\t\t&j.Priority,\n\t\t&j.RunAt,\n\t\t&j.ID,\n\t\t&j.Type,\n\t\t&j.Args,\n\t\t&j.ErrorCount,\n\t\t&j.LastError,\n\t\t&j.Queue,\n\t)\n\tif err == pgx.ErrNoRows {\n\t\treturn nil, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn j, nil\n}\n"
  },
  {
    "path": "schema.sql",
    "content": "CREATE TABLE IF NOT EXISTS que_jobs\n(\n  priority    smallint    NOT NULL DEFAULT 100,\n  run_at      timestamptz NOT NULL DEFAULT now(),\n  job_id      bigserial   NOT NULL,\n  job_class   text        NOT NULL,\n  args        json        NOT NULL DEFAULT '[]'::json,\n  error_count integer     NOT NULL DEFAULT 0,\n  last_error  text,\n  queue       text        NOT NULL DEFAULT '',\n\n  CONSTRAINT que_jobs_pkey PRIMARY KEY (queue, priority, run_at, job_id)\n);\n\nCOMMENT ON TABLE que_jobs IS '3';\n"
  },
  {
    "path": "sql.go",
    "content": "// Copyright (c) 2013 Chris Hanks\n//\n// MIT License\n//\n// Permission is hereby granted, free of charge, to any person obtaining\n// a copy of this software and associated documentation files (the\n// \"Software\"), to deal in the Software without restriction, including\n// without limitation the rights to use, copy, modify, merge, publish,\n// distribute, sublicense, and/or sell copies of the Software, and to\n// permit persons to whom the Software is furnished to do so, subject to\n// the following conditions:\n//\n// The above copyright notice and this permission notice shall be\n// included in all copies or substantial portions of the Software.\n//\n// THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\npackage que\n\n// Thanks to RhodiumToad in #postgresql for help with the job lock CTE.\nconst (\n\tsqlLockJob = `\nWITH RECURSIVE jobs AS (\n  SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked\n  FROM (\n    SELECT j\n    FROM que_jobs AS j\n    WHERE queue = $1::text\n    AND run_at <= now()\n    ORDER BY priority, run_at, job_id\n    LIMIT 1\n  ) AS t1\n  UNION ALL (\n    SELECT (j).*, pg_try_advisory_lock((j).job_id) AS locked\n    FROM (\n      SELECT (\n        SELECT j\n        FROM que_jobs AS j\n        WHERE queue = $1::text\n        AND run_at <= now()\n        AND (priority, run_at, job_id) > (jobs.priority, jobs.run_at, jobs.job_id)\n        ORDER BY priority, run_at, job_id\n        LIMIT 1\n      ) AS j\n      FROM jobs\n      WHERE jobs.job_id IS NOT NULL\n      LIMIT 1\n    ) AS t1\n  
)\n)\nSELECT queue, priority, run_at, job_id, job_class, args, error_count\nFROM jobs\nWHERE locked\nLIMIT 1\n`\n\n\tsqlUnlockJob = `\nSELECT pg_advisory_unlock($1)\n`\n\n\tsqlCheckJob = `\nSELECT true AS exists\nFROM   que_jobs\nWHERE  queue    = $1::text\nAND    priority = $2::smallint\nAND    run_at   = $3::timestamptz\nAND    job_id   = $4::bigint\n`\n\n\tsqlSetError = `\nUPDATE que_jobs\nSET error_count = $1::integer,\n    run_at      = now() + $2::bigint * '1 second'::interval,\n    last_error  = $3::text\nWHERE queue     = $4::text\nAND   priority  = $5::smallint\nAND   run_at    = $6::timestamptz\nAND   job_id    = $7::bigint\n`\n\n\tsqlInsertJob = `\nINSERT INTO que_jobs\n(queue, priority, run_at, job_class, args)\nVALUES\n(coalesce($1::text, ''::text), coalesce($2::smallint, 100::smallint), coalesce($3::timestamptz, now()::timestamptz), $4::text, coalesce($5::json, '[]'::json))\n`\n\n\tsqlDeleteJob = `\nDELETE FROM que_jobs\nWHERE queue    = $1::text\nAND   priority = $2::smallint\nAND   run_at   = $3::timestamptz\nAND   job_id   = $4::bigint\n`\n\n\tsqlJobStats = `\nSELECT queue,\n       job_class,\n       count(*)                    AS count,\n       count(locks.job_id)         AS count_working,\n       sum((error_count > 0)::int) AS count_errored,\n       max(error_count)            AS highest_error_count,\n       min(run_at)                 AS oldest_run_at\nFROM que_jobs\nLEFT JOIN (\n  SELECT (classid::bigint << 32) + objid::bigint AS job_id\n  FROM pg_locks\n  WHERE locktype = 'advisory'\n) locks USING (job_id)\nGROUP BY queue, job_class\nORDER BY count(*) DESC\n`\n\n\tsqlWorkerStates = `\nSELECT que_jobs.*,\n       pg.pid          AS pg_backend_pid,\n       pg.state        AS pg_state,\n       pg.state_change AS pg_state_changed_at,\n       pg.query        AS pg_last_query,\n       pg.query_start  AS pg_last_query_started_at,\n       pg.xact_start   AS pg_transaction_started_at,\n       pg.waiting      AS pg_waiting_on_lock\nFROM que_jobs\nJOIN 
(\n  SELECT (classid::bigint << 32) + objid::bigint AS job_id, pg_stat_activity.*\n  FROM pg_locks\n  JOIN pg_stat_activity USING (pid)\n  WHERE locktype = 'advisory'\n) pg USING (job_id)\n`\n)\n"
  },
  {
    "path": "util.go",
    "content": "package que\n\n// intPow returns x**y, the base-x exponential of y.\nfunc intPow(x, y int) (r int) {\n\tif x == r || y < r {\n\t\treturn\n\t}\n\tr = 1\n\tif x == r {\n\t\treturn\n\t}\n\tif x < 0 {\n\t\tx = -x\n\t\tif y&1 == 1 {\n\t\t\tr = -1\n\t\t}\n\t}\n\tfor y > 0 {\n\t\tif y&1 == 1 {\n\t\t\tr *= x\n\t\t}\n\t\tx *= x\n\t\ty >>= 1\n\t}\n\treturn\n}\n"
  },
  {
    "path": "work_test.go",
    "content": "package que\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/jackc/pgx\"\n\t\"github.com/jackc/pgx/pgtype\"\n)\n\nfunc TestLockJob(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif j.conn == nil {\n\t\tt.Fatal(\"want non-nil conn on locked Job\")\n\t}\n\tif j.pool == nil {\n\t\tt.Fatal(\"want non-nil pool on locked Job\")\n\t}\n\tdefer j.Done()\n\n\t// check values of returned Job\n\tif j.ID == 0 {\n\t\tt.Errorf(\"want non-zero ID\")\n\t}\n\tif want := \"\"; j.Queue != want {\n\t\tt.Errorf(\"want Queue=%q, got %q\", want, j.Queue)\n\t}\n\tif want := int16(100); j.Priority != want {\n\t\tt.Errorf(\"want Priority=%d, got %d\", want, j.Priority)\n\t}\n\tif j.RunAt.IsZero() {\n\t\tt.Error(\"want non-zero RunAt\")\n\t}\n\tif want := \"MyJob\"; j.Type != want {\n\t\tt.Errorf(\"want Type=%q, got %q\", want, j.Type)\n\t}\n\tif want, got := \"[]\", string(j.Args); got != want {\n\t\tt.Errorf(\"want Args=%s, got %s\", want, got)\n\t}\n\tif want := int32(0); j.ErrorCount != want {\n\t\tt.Errorf(\"want ErrorCount=%d, got %d\", want, j.ErrorCount)\n\t}\n\tif j.LastError.Status == pgtype.Present {\n\t\tt.Errorf(\"want no LastError, got %v\", j.LastError)\n\t}\n\n\t// check for advisory lock\n\tvar count int64\n\tquery := \"SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint\"\n\tif err = j.pool.QueryRow(query, \"advisory\", j.ID).Scan(&count); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif count != 1 {\n\t\tt.Errorf(\"want 1 advisory lock, got %d\", count)\n\t}\n\n\t// make sure conn was checked out of pool\n\tstat := c.pool.Stat()\n\ttotal, available := stat.CurrentConnections, stat.AvailableConnections\n\tif want := total - 1; available != want {\n\t\tt.Errorf(\"want available=%d, got %d\", want, available)\n\t}\n\n\tif err = 
j.Delete(); err != nil {\n\t\tt.Fatal(err)\n\t}\n}\n\nfunc TestLockJobAlreadyLocked(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\tdefer j.Done()\n\n\tj2, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j2 != nil {\n\t\tdefer j2.Done()\n\t\tt.Fatalf(\"wanted no job, got %+v\", j2)\n\t}\n}\n\nfunc TestLockJobNoJob(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j != nil {\n\t\tt.Errorf(\"want no job, got %v\", j)\n\t}\n}\n\nfunc TestLockJobCustomQueue(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\", Queue: \"extra_priority\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j != nil {\n\t\tj.Done()\n\t\tt.Errorf(\"expected no job to be found with empty queue name, got %+v\", j)\n\t}\n\n\tj, err = c.LockJob(\"extra_priority\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer j.Done()\n\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\n\tif err = j.Delete(); err != nil {\n\t\tt.Fatal(err)\n\t}\n}\n\nfunc TestJobConn(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\tdefer j.Done()\n\n\tif conn := j.Conn(); conn != j.conn {\n\t\tt.Errorf(\"want %+v, got %+v\", j.conn, conn)\n\t}\n}\n\nfunc TestJobConnRace(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := 
c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\tdefer j.Done()\n\n\tvar wg sync.WaitGroup\n\twg.Add(2)\n\n\t// call Conn and Done in different goroutines to make sure they are safe from\n\t// races.\n\tgo func() {\n\t\t_ = j.Conn()\n\t\twg.Done()\n\t}()\n\tgo func() {\n\t\tj.Done()\n\t\twg.Done()\n\t}()\n\twg.Wait()\n}\n\n// Test the race condition in LockJob\nfunc TestLockJobAdvisoryRace(t *testing.T) {\n\tc := openTestClientMaxConns(t, 2)\n\tdefer truncateAndClose(c.pool)\n\n\t// *pgx.ConnPool doesn't support pools of only one connection.  Make sure\n\t// the other one is busy so we know which backend will be used by LockJob\n\t// below.\n\tunusedConn, err := c.pool.Acquire()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer c.pool.Release(unusedConn)\n\n\t// We use two jobs: the first one is concurrently deleted, and the second\n\t// one is returned by LockJob after recovering from the race condition.\n\tfor i := 0; i < 2; i++ {\n\t\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\t// helper functions\n\tnewConn := func() *pgx.Conn {\n\t\tconn, err := pgx.Connect(testConnConfig)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\treturn conn\n\t}\n\tgetBackendPID := func(conn *pgx.Conn) int32 {\n\t\tvar backendPID int32\n\t\terr := conn.QueryRow(`\n\t\t\tSELECT pg_backend_pid()\n\t\t`).Scan(&backendPID)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\treturn backendPID\n\t}\n\twaitUntilBackendIsWaiting := func(backendPID int32, name string) {\n\t\tconn := newConn()\n\t\ti := 0\n\t\tfor {\n\t\t\tvar waiting bool\n\t\t\terr := conn.QueryRow(`SELECT wait_event is not null from pg_stat_activity where pid=$1`, backendPID).Scan(&waiting)\n\t\t\tif err != nil {\n\t\t\t\tpanic(err)\n\t\t\t}\n\n\t\t\tif waiting {\n\t\t\t\tbreak\n\t\t\t} else {\n\t\t\t\ti++\n\t\t\t\tif 
i >= 10000/50 {\n\t\t\t\t\tpanic(fmt.Sprintf(\"timed out while waiting for %s\", name))\n\t\t\t\t}\n\n\t\t\t\ttime.Sleep(50 * time.Millisecond)\n\t\t\t}\n\t\t}\n\n\t}\n\n\t// Reproducing the race condition is a bit tricky.  The idea is to form a\n\t// lock queue on the relation that looks like this:\n\t//\n\t//   AccessExclusive <- AccessShare  <- AccessExclusive ( <- AccessShare )\n\t//\n\t// where the leftmost AccessShare lock is the one implicitly taken by the\n\t// sqlLockJob query.  Once we release the leftmost AccessExclusive lock\n\t// without releasing the rightmost one, the session holding the rightmost\n\t// AccessExclusiveLock can run the necessary DELETE before the sqlCheckJob\n\t// query runs (since it'll be blocked behind the rightmost AccessExclusive\n\t// Lock).\n\t//\n\tdeletedJobIDChan := make(chan int64, 1)\n\tlockJobBackendIDChan := make(chan int32)\n\tsecondAccessExclusiveBackendIDChan := make(chan int32)\n\n\tgo func() {\n\t\tconn := newConn()\n\t\tdefer conn.Close()\n\n\t\ttx, err := conn.Begin()\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\t_, err = tx.Exec(`LOCK TABLE que_jobs IN ACCESS EXCLUSIVE MODE`)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\n\t\t// first wait for LockJob to appear behind us\n\t\tbackendID := <-lockJobBackendIDChan\n\t\twaitUntilBackendIsWaiting(backendID, \"LockJob\")\n\n\t\t// then for the AccessExclusive lock to appear behind that one\n\t\tbackendID = <-secondAccessExclusiveBackendIDChan\n\t\twaitUntilBackendIsWaiting(backendID, \"second access exclusive lock\")\n\n\t\terr = tx.Rollback()\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}()\n\n\tgo func() {\n\t\tconn := newConn()\n\t\tdefer conn.Close()\n\n\t\t// synchronization point\n\t\tsecondAccessExclusiveBackendIDChan <- getBackendPID(conn)\n\n\t\ttx, err := conn.Begin()\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\t_, err = tx.Exec(`LOCK TABLE que_jobs IN ACCESS EXCLUSIVE MODE`)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\n\t\t// Fake a 
concurrent transaction grabbing the job\n\t\tvar jid int64\n\t\terr = tx.QueryRow(`\n\t\t\tDELETE FROM que_jobs\n\t\t\tWHERE job_id =\n\t\t\t\t(SELECT min(job_id)\n\t\t\t\t FROM que_jobs)\n\t\t\tRETURNING job_id\n\t\t`).Scan(&jid)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\n\t\tdeletedJobIDChan <- jid\n\n\t\terr = tx.Commit()\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}()\n\n\tconn, err := c.pool.Acquire()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tourBackendID := getBackendPID(conn)\n\tc.pool.Release(conn)\n\n\t// synchronization point\n\tlockJobBackendIDChan <- ourBackendID\n\n\tjob, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer job.Done()\n\n\tdeletedJobID := <-deletedJobIDChan\n\n\tt.Logf(\"Got id %d\", job.ID)\n\tt.Logf(\"Concurrently deleted id %d\", deletedJobID)\n\n\tif deletedJobID >= job.ID {\n\t\tt.Fatalf(\"deleted job id %d must be smaller than job.ID %d\", deletedJobID, job.ID)\n\t}\n}\n\nfunc TestJobDelete(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\tdefer j.Done()\n\n\tif err = j.Delete(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// make sure job was deleted\n\tj2, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j2 != nil {\n\t\tt.Errorf(\"job was not deleted: %+v\", j2)\n\t}\n}\n\nfunc TestJobDone(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\n\tj.Done()\n\n\t// make sure conn and pool were cleared\n\tif j.conn != nil {\n\t\tt.Errorf(\"want nil conn, got %+v\", j.conn)\n\t}\n\tif j.pool != nil 
{\n\t\tt.Errorf(\"want nil pool, got %+v\", j.pool)\n\t}\n\n\t// make sure lock was released\n\tvar count int64\n\tquery := \"SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint\"\n\tif err = c.pool.QueryRow(query, \"advisory\", j.ID).Scan(&count); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif count != 0 {\n\t\tt.Error(\"advisory lock was not released\")\n\t}\n\n\t// make sure conn was returned to pool\n\tstat := c.pool.Stat()\n\ttotal, available := stat.CurrentConnections, stat.AvailableConnections\n\tif total != available {\n\t\tt.Errorf(\"want available=total, got available=%d total=%d\", available, total)\n\t}\n}\n\nfunc TestJobDoneMultiple(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\n\tj.Done()\n\t// try calling Done() again\n\tj.Done()\n}\n\nfunc TestJobDeleteFromTx(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\n\t// get the job's database connection\n\tconn := j.Conn()\n\tif conn == nil {\n\t\tt.Fatal(\"wanted conn, got nil\")\n\t}\n\n\t// start a transaction\n\ttx, err := conn.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// delete the job\n\tif err = j.Delete(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif err = tx.Commit(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// mark as done\n\tj.Done()\n\n\t// make sure the job is gone\n\tj2, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif j2 != nil {\n\t\tt.Errorf(\"wanted no job, got %+v\", j2)\n\t}\n}\n\nfunc TestJobDeleteFromTxRollback(t *testing.T) {\n\tc := 
openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj1, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j1 == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\n\t// get the job's database connection\n\tconn := j1.Conn()\n\tif conn == nil {\n\t\tt.Fatal(\"wanted conn, got nil\")\n\t}\n\n\t// start a transaction\n\ttx, err := conn.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// delete the job\n\tif err = j1.Delete(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif err = tx.Rollback(); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// mark as done\n\tj1.Done()\n\n\t// make sure the job still exists and matches j1\n\tj2, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tif j1.ID != j2.ID {\n\t\tt.Errorf(\"want job %d, got %d\", j1.ID, j2.ID)\n\t}\n}\n\nfunc TestJobError(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tj, err := c.LockJob(\"\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j == nil {\n\t\tt.Fatal(\"wanted job, got none\")\n\t}\n\tdefer j.Done()\n\n\tmsg := \"world\\nended\"\n\tif err = j.Error(msg); err != nil {\n\t\tt.Fatal(err)\n\t}\n\tj.Done()\n\n\t// make sure job was not deleted\n\tj2, err := findOneJob(c.pool)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j2 == nil {\n\t\tt.Fatal(\"job was not found\")\n\t}\n\tdefer j2.Done()\n\n\tif j2.LastError.Status == pgtype.Null || j2.LastError.String != msg {\n\t\tt.Errorf(\"want LastError=%q, got %q\", msg, j2.LastError.String)\n\t}\n\tif j2.ErrorCount != 1 {\n\t\tt.Errorf(\"want ErrorCount=%d, got %d\", 1, j2.ErrorCount)\n\t}\n\n\t// make sure lock was released\n\tvar count int64\n\tquery := \"SELECT count(*) FROM pg_locks WHERE locktype=$1 AND objid=$2::bigint\"\n\tif err = c.pool.QueryRow(query, \"advisory\", j.ID).Scan(&count); err != nil 
{\n\t\tt.Fatal(err)\n\t}\n\tif count != 0 {\n\t\tt.Error(\"advisory lock was not released\")\n\t}\n\n\t// make sure conn was returned to pool\n\tstat := c.pool.Stat()\n\ttotal, available := stat.CurrentConnections, stat.AvailableConnections\n\tif total != available {\n\t\tt.Errorf(\"want available=total, got available=%d total=%d\", available, total)\n\t}\n}\n"
  },
  {
    "path": "worker.go",
"content": "package que\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"log\"\n\t\"os\"\n\t\"runtime\"\n\t\"strconv\"\n\t\"sync\"\n\t\"time\"\n)\n\n// WorkFunc is a function that performs a Job. If an error is returned, the job\n// is re-enqueued with exponential backoff.\ntype WorkFunc func(j *Job) error\n\n// WorkMap is a map of Job names to WorkFuncs that are used to perform Jobs of a\n// given type.\ntype WorkMap map[string]WorkFunc\n\n// Worker is a single worker that pulls jobs off the specified Queue. If no Job\n// is found, the Worker will sleep for the duration of its Interval.\ntype Worker struct {\n\t// Interval is the amount of time that this Worker should sleep before trying\n\t// to find another Job.\n\tInterval time.Duration\n\n\t// Queue is the name of the queue to pull Jobs off of. The default value, \"\",\n\t// is usable and is the default for both que-go and the ruby que library.\n\tQueue string\n\n\tc *Client\n\tm WorkMap\n\n\tmu   sync.Mutex\n\tdone bool\n\tch   chan struct{}\n}\n\nvar defaultWakeInterval = 5 * time.Second\n\nfunc init() {\n\tif v := os.Getenv(\"QUE_WAKE_INTERVAL\"); v != \"\" {\n\t\tif newInt, err := strconv.Atoi(v); err == nil {\n\t\t\tdefaultWakeInterval = time.Duration(newInt) * time.Second\n\t\t}\n\t}\n}\n\n// NewWorker returns a Worker that fetches Jobs from the Client and executes\n// them using WorkMap. If the type of Job is not registered in the WorkMap, it's\n// considered an error and the job is re-enqueued with a backoff.\n//\n// Workers default to an Interval of 5 seconds, which can be overridden by\n// setting the environment variable QUE_WAKE_INTERVAL. The default Queue is the\n// nameless queue \"\", which can be overridden by setting QUE_QUEUE. 
Either of\n// these settings can be changed on the returned Worker before it is started\n// with Work().\nfunc NewWorker(c *Client, m WorkMap) *Worker {\n\treturn &Worker{\n\t\tInterval: defaultWakeInterval,\n\t\tQueue:    os.Getenv(\"QUE_QUEUE\"),\n\t\tc:        c,\n\t\tm:        m,\n\t\tch:       make(chan struct{}),\n\t}\n}\n\n// Work pulls jobs off the Worker's Queue at its Interval. This function only\n// returns after Shutdown() is called, so it should be run in its own goroutine.\nfunc (w *Worker) Work() {\n\tdefer log.Println(\"worker done\")\n\tfor {\n\t\t// Try to work a job\n\t\tif w.WorkOne() {\n\t\t\t// Since we just did work, non-blocking check whether we should exit\n\t\t\tselect {\n\t\t\tcase <-w.ch:\n\t\t\t\treturn\n\t\t\tdefault:\n\t\t\t\t// continue in loop\n\t\t\t}\n\t\t} else {\n\t\t\t// No work found, block until exit or timer expires\n\t\t\tselect {\n\t\t\tcase <-w.ch:\n\t\t\t\treturn\n\t\t\tcase <-time.After(w.Interval):\n\t\t\t\t// continue in loop\n\t\t\t}\n\t\t}\n\t}\n}\n\n// WorkOne attempts to lock and work a single job from the Worker's Queue. It\n// reports whether a job was found, regardless of whether working it succeeded.\nfunc (w *Worker) WorkOne() (didWork bool) {\n\tj, err := w.c.LockJob(w.Queue)\n\tif err != nil {\n\t\tlog.Printf(\"attempting to lock job: %v\", err)\n\t\treturn\n\t}\n\tif j == nil {\n\t\treturn // no job was available\n\t}\n\tdefer j.Done()\n\tdefer recoverPanic(j)\n\n\tdidWork = true\n\n\twf, ok := w.m[j.Type]\n\tif !ok {\n\t\tmsg := fmt.Sprintf(\"unknown job type: %q\", j.Type)\n\t\tlog.Println(msg)\n\t\tif err = j.Error(msg); err != nil {\n\t\t\tlog.Printf(\"attempting to save error on job %d: %v\", j.ID, err)\n\t\t}\n\t\treturn\n\t}\n\n\tif err = wf(j); err != nil {\n\t\tif err = j.Error(err.Error()); err != nil {\n\t\t\tlog.Printf(\"attempting to save error on job %d: %v\", j.ID, err)\n\t\t}\n\t\treturn\n\t}\n\n\tif err = j.Delete(); err != nil {\n\t\tlog.Printf(\"attempting to delete job %d: %v\", j.ID, err)\n\t}\n\tlog.Printf(\"event=job_worked job_id=%d job_type=%s\", j.ID, j.Type)\n\treturn\n}\n\n// Shutdown tells the worker to finish processing its current job and then stop.\n// There is currently no timeout for in-progress jobs. 
This function blocks\n// until the Worker has stopped working. It should only be called on an active\n// Worker.\nfunc (w *Worker) Shutdown() {\n\tw.mu.Lock()\n\tdefer w.mu.Unlock()\n\n\tif w.done {\n\t\treturn\n\t}\n\n\tlog.Println(\"worker shutting down gracefully...\")\n\tw.ch <- struct{}{}\n\tw.done = true\n\tclose(w.ch)\n}\n\n// recoverPanic tries to handle panics in job execution.\n// A stacktrace is stored into Job last_error.\nfunc recoverPanic(j *Job) {\n\tif r := recover(); r != nil {\n\t\t// record an error on the job with panic message and stacktrace\n\t\tstackBuf := make([]byte, 1024)\n\t\tn := runtime.Stack(stackBuf, false)\n\n\t\tbuf := &bytes.Buffer{}\n\t\tfmt.Fprintf(buf, \"%v\\n\", r)\n\t\tfmt.Fprintln(buf, string(stackBuf[:n]))\n\t\tfmt.Fprintln(buf, \"[...]\")\n\t\tstacktrace := buf.String()\n\t\tlog.Printf(\"event=panic job_id=%d job_type=%s\\n%s\", j.ID, j.Type, stacktrace)\n\t\tif err := j.Error(stacktrace); err != nil {\n\t\t\tlog.Printf(\"attempting to save error on job %d: %v\", j.ID, err)\n\t\t}\n\t}\n}\n\n// WorkerPool is a pool of Workers, each working jobs from the queue Queue\n// at the specified Interval using the WorkMap.\ntype WorkerPool struct {\n\tWorkMap  WorkMap\n\tInterval time.Duration\n\tQueue    string\n\n\tc       *Client\n\tworkers []*Worker\n\tmu      sync.Mutex\n\tdone    bool\n}\n\n// NewWorkerPool creates a new WorkerPool with count workers using the Client c.\nfunc NewWorkerPool(c *Client, wm WorkMap, count int) *WorkerPool {\n\treturn &WorkerPool{\n\t\tc:        c,\n\t\tWorkMap:  wm,\n\t\tInterval: defaultWakeInterval,\n\t\tworkers:  make([]*Worker, count),\n\t}\n}\n\n// Start starts all of the Workers in the WorkerPool.\nfunc (w *WorkerPool) Start() {\n\tw.mu.Lock()\n\tdefer w.mu.Unlock()\n\n\tfor i := range w.workers {\n\t\tw.workers[i] = NewWorker(w.c, w.WorkMap)\n\t\tw.workers[i].Interval = w.Interval\n\t\tw.workers[i].Queue = w.Queue\n\t\tgo w.workers[i].Work()\n\t}\n}\n\n// Shutdown sends a Shutdown signal to 
each of the Workers in the WorkerPool and\n// waits for them all to finish shutting down.\nfunc (w *WorkerPool) Shutdown() {\n\tw.mu.Lock()\n\tdefer w.mu.Unlock()\n\n\tif w.done {\n\t\treturn\n\t}\n\tvar wg sync.WaitGroup\n\twg.Add(len(w.workers))\n\n\tfor _, worker := range w.workers {\n\t\tgo func(worker *Worker) {\n\t\t\t// If Shutdown is called before Start, the entries in w.workers are\n\t\t\t// still nil, so don't try to shut them down\n\t\t\tif worker != nil {\n\t\t\t\tworker.Shutdown()\n\t\t\t}\n\t\t\twg.Done()\n\t\t}(worker)\n\t}\n\twg.Wait()\n\tw.done = true\n}\n"
  },
  {
    "path": "worker_test.go",
"content": "package que\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"log\"\n\t\"os\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/jackc/pgx/pgtype\"\n)\n\nfunc init() {\n\tlog.SetOutput(ioutil.Discard)\n}\n\nfunc TestWorkerWorkOne(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tsuccess := false\n\twm := WorkMap{\n\t\t\"MyJob\": func(j *Job) error {\n\t\t\tsuccess = true\n\t\t\treturn nil\n\t\t},\n\t}\n\tw := NewWorker(c, wm)\n\n\tdidWork := w.WorkOne()\n\tif didWork {\n\t\tt.Errorf(\"want didWork=false when no job was queued\")\n\t}\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tdidWork = w.WorkOne()\n\tif !didWork {\n\t\tt.Errorf(\"want didWork=true\")\n\t}\n\tif !success {\n\t\tt.Errorf(\"want success=true\")\n\t}\n}\n\nfunc TestWorkerShutdown(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tw := NewWorker(c, WorkMap{})\n\tfinished := make(chan struct{})\n\tgo func() {\n\t\tw.Work()\n\t\tclose(finished)\n\t}()\n\tw.Shutdown()\n\n\t// Shutdown returns once Work has been signaled, but the goroutine above may\n\t// not have exited yet; waiting on the channel avoids a data race.\n\t<-finished\n\tif !w.done {\n\t\tt.Errorf(\"want w.done=true\")\n\t}\n}\n\nfunc BenchmarkWorker(b *testing.B) {\n\tc := openTestClient(b)\n\tlog.SetOutput(ioutil.Discard)\n\tdefer func() {\n\t\tlog.SetOutput(os.Stdout)\n\t}()\n\tdefer truncateAndClose(c.pool)\n\n\tw := NewWorker(c, WorkMap{\"Nil\": nilWorker})\n\n\tfor i := 0; i < b.N; i++ {\n\t\tif err := c.Enqueue(&Job{Type: \"Nil\"}); err != nil {\n\t\t\tb.Fatal(err)\n\t\t}\n\t}\n\n\tb.ResetTimer()\n\tfor i := 0; i < b.N; i++ {\n\t\tw.WorkOne()\n\t}\n}\n\nfunc nilWorker(j *Job) error {\n\treturn nil\n}\n\nfunc TestWorkerWorkReturnsError(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tcalled := 0\n\twm := WorkMap{\n\t\t\"MyJob\": func(j *Job) error {\n\t\t\tcalled++\n\t\t\treturn fmt.Errorf(\"the error msg\")\n\t\t},\n\t}\n\tw := NewWorker(c, wm)\n\n\tdidWork := w.WorkOne()\n\tif didWork 
{\n\t\tt.Errorf(\"want didWork=false when no job was queued\")\n\t}\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tdidWork = w.WorkOne()\n\tif !didWork {\n\t\tt.Errorf(\"want didWork=true\")\n\t}\n\tif called != 1 {\n\t\tt.Errorf(\"want called=1 was: %d\", called)\n\t}\n\n\ttx, err := c.pool.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer tx.Rollback()\n\n\tj, err := findOneJob(tx)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j.ErrorCount != 1 {\n\t\tt.Errorf(\"want ErrorCount=1 was %d\", j.ErrorCount)\n\t}\n\tif j.LastError.Status == pgtype.Null {\n\t\tt.Errorf(\"want LastError IS NOT NULL\")\n\t}\n\tif j.LastError.String != \"the error msg\" {\n\t\tt.Errorf(\"want LastError=\\\"the error msg\\\" was: %q\", j.LastError.String)\n\t}\n}\n\nfunc TestWorkerWorkRescuesPanic(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tcalled := 0\n\twm := WorkMap{\n\t\t\"MyJob\": func(j *Job) error {\n\t\t\tcalled++\n\t\t\tpanic(\"the panic msg\")\n\t\t},\n\t}\n\tw := NewWorker(c, wm)\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tw.WorkOne()\n\tif called != 1 {\n\t\tt.Errorf(\"want called=1 was: %d\", called)\n\t}\n\n\ttx, err := c.pool.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer tx.Rollback()\n\n\tj, err := findOneJob(tx)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j.ErrorCount != 1 {\n\t\tt.Errorf(\"want ErrorCount=1 was %d\", j.ErrorCount)\n\t}\n\tif j.LastError.Status == pgtype.Null {\n\t\tt.Errorf(\"want LastError IS NOT NULL\")\n\t}\n\tif !strings.Contains(j.LastError.String, \"the panic msg\\n\") {\n\t\tt.Errorf(\"want LastError contains \\\"the panic msg\\\\n\\\" was: %q\", j.LastError.String)\n\t}\n\t// basic check if a stacktrace is there - not the stacktrace format itself\n\tif !strings.Contains(j.LastError.String, \"worker.go:\") {\n\t\tt.Errorf(\"want LastError contains \\\"worker.go:\\\" was: %q\", 
j.LastError.String)\n\t}\n\tif !strings.Contains(j.LastError.String, \"worker_test.go:\") {\n\t\tt.Errorf(\"want LastError contains \\\"worker_test.go:\\\" was: %q\", j.LastError.String)\n\t}\n}\n\nfunc TestWorkerWorkOneTypeNotInMap(t *testing.T) {\n\tc := openTestClient(t)\n\tdefer truncateAndClose(c.pool)\n\n\tcurrentConns := c.pool.Stat().CurrentConnections\n\tavailConns := c.pool.Stat().AvailableConnections\n\n\tsuccess := false\n\twm := WorkMap{}\n\tw := NewWorker(c, wm)\n\n\tdidWork := w.WorkOne()\n\tif didWork {\n\t\tt.Errorf(\"want didWork=false when no job was queued\")\n\t}\n\n\tif err := c.Enqueue(&Job{Type: \"MyJob\"}); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tdidWork = w.WorkOne()\n\tif !didWork {\n\t\tt.Errorf(\"want didWork=true\")\n\t}\n\tif success {\n\t\tt.Errorf(\"want success=false\")\n\t}\n\n\tif currentConns != c.pool.Stat().CurrentConnections {\n\t\tt.Errorf(\"want currentConns equal: before=%d  after=%d\", currentConns, c.pool.Stat().CurrentConnections)\n\t}\n\tif availConns != c.pool.Stat().AvailableConnections {\n\t\tt.Errorf(\"want availConns equal: before=%d  after=%d\", availConns, c.pool.Stat().AvailableConnections)\n\t}\n\n\ttx, err := c.pool.Begin()\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer tx.Rollback()\n\n\tj, err := findOneJob(tx)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tif j.ErrorCount != 1 {\n\t\tt.Errorf(\"want ErrorCount=1 was %d\", j.ErrorCount)\n\t}\n\tif j.LastError.Status == pgtype.Null {\n\t\tt.Fatal(\"want non-nil LastError\")\n\t}\n\tif want := \"unknown job type: \\\"MyJob\\\"\"; j.LastError.String != want {\n\t\tt.Errorf(\"want LastError=%q, got %q\", want, j.LastError.String)\n\t}\n}\n"
  }
]