Compare commits

...

24 Commits

Author SHA1 Message Date
Tyler Davis
91e57a6156 fix: remove deprecated ioutil and rand calls
- Change ioutil to io and os calls per Go 1.16 deprecation
- Move global random to localized random source
2023-03-11 19:16:12 -07:00
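A rough sketch of both migrations, assuming the usual replacement patterns these deprecations call for; the function names here are illustrative, not actual call sites in this changeset:

```
package config

import (
	"math/rand"
	"os"
	"time"
)

// Go 1.16 moved io/ioutil's helpers into os and io:
// ioutil.ReadFile -> os.ReadFile, ioutil.Discard -> io.Discard, etc.
func readConfig(path string) ([]byte, error) {
	return os.ReadFile(path) // was: ioutil.ReadFile(path)
}

// Instead of seeding the deprecated package-level source via rand.Seed,
// construct a localized *rand.Rand and pass it to whatever needs randomness.
func newRand() *rand.Rand {
	return rand.New(rand.NewSource(time.Now().UnixNano()))
}
```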
Tyler Davis
6bbced3d46 fix: update go versions in mod and docker
- Update Go module to v1.19 format
- Docker builder pinned to Go v1.20.1
- Alpine image pinned to 3.17.2 (rather than `latest`)
2023-03-11 19:16:12 -07:00
Erik Kristensen
8dbdf2b91c fix: remove debug code 2023-01-17 18:47:29 -07:00
Erik Kristensen
530930465d fix: aws credential chain by using aws.Config 2023-01-17 18:47:29 -07:00
Lincoln Stoll
64e535e23b Handle errors when deleting objects from S3
I recently noticed that the cost of ListBucket calls was increasing for an
application that was using Litestream. After investigating, it seemed that the
bucket had retained the entire history of data, while Litestream was
continually logging that it was deleting the same data:

```
2022-10-30T12:00:27Z (s3): wal segmented deleted before 0792d3393bf79ced/00000233: n=1428
<snip>
2022-10-30T13:00:24Z (s3): wal segmented deleted before 0792d3393bf79ced/00000233: n=1428
```

This is occurring because DeleteObjects is a batch operation that returns the
individual object deletion errors in its response[1]. The S3 replica client
discards the response and only handles errors from the API call itself. I had
a misconfigured IAM policy that meant all deletes were failing, but this never
actually bubbled up as a real error.

To fix this, I added a check of the response body that handles any errors the
operation encountered. Because this may include a large number of errors (in
this case, 1428 of them), the output is summarized to avoid an overly large
error message. Items that are not found do not return an error[2]; they are
still marked as deleted, so this change should be in line with the original
intent of this code.

1: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html#API_DeleteObjects_Example_2
2: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
2022-11-03 10:02:42 -06:00
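A minimal sketch of that response check, assuming the aws-sdk-go v1 client pinned in go.mod below; deleteBatch and the exact summary wording are illustrative, not Litestream's actual code:

```
package replica

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3iface"
)

// deleteBatch (hypothetical helper) issues one DeleteObjects call and
// inspects the per-object errors in the response body rather than relying
// only on the call-level error.
func deleteBatch(ctx context.Context, client s3iface.S3API, bucket string, keys []string) error {
	objs := make([]*s3.ObjectIdentifier, 0, len(keys))
	for _, key := range keys {
		objs = append(objs, &s3.ObjectIdentifier{Key: aws.String(key)})
	}

	out, err := client.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
		Bucket: aws.String(bucket),
		Delete: &s3.Delete{Objects: objs},
	})
	if err != nil {
		return err // call-level error (network failure, malformed request, etc.)
	}

	// The batch call can succeed even when every individual deletion fails
	// (e.g. a misconfigured IAM policy), so summarize the per-object errors
	// instead of returning all of them (1428 in the case above). Keys that
	// do not exist are reported as deleted, not as errors.
	if n := len(out.Errors); n > 0 {
		e := out.Errors[0]
		return fmt.Errorf("deleted %d objects, %d failed; first error: %s %s: %s",
			len(out.Deleted), n, aws.StringValue(e.Key), aws.StringValue(e.Code), aws.StringValue(e.Message))
	}
	return nil
}
```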
Jose Diaz-Gonzalez
f50f03d8fc Update readme to note that this tool is for disaster recovery, not streaming replication
Refs #411
2022-10-14 15:12:58 -06:00
Ben Johnson
868d564988 Remove streaming replication implementation 2022-08-08 16:34:17 -06:00
Ben Johnson
a8ab14cca2 Update dependencies 2022-08-08 15:24:46 -06:00
Ryan Russell
80cd049ae7 Revert to correct wal_downloader.go
Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-06-23 08:42:37 -06:00
Ryan Russell
2acdab02c8 Improve readability
Signed-off-by: Ryan Russell <ryanrussell@users.noreply.github.com>
2022-06-23 08:42:37 -06:00
Yasuhiro Matsumoto
31aa5b34f6 Fix build tag 2022-06-07 15:09:52 -06:00
Yasuhiro Matsumoto
4522c7bce5 implement Fileinfo for Windows and non-Windows 2022-06-07 15:09:52 -06:00
Ben Johnson
e9dbf83a45 Re-add Fileinfo() 2022-06-07 15:09:52 -06:00
Yasuhiro Matsumoto
7d0167f10a Unwatch directory 2022-06-07 15:09:52 -06:00
Yasuhiro Matsumoto
2c0dce21fa Use fsnotify 2022-06-07 15:09:52 -06:00
Hiroaki Nakamura
98673c6785 Add environment variables for scheme and forcePathStyle 2022-05-16 15:12:39 -06:00
Hiroaki Nakamura
46597ab22f Fix wal internal error log 2022-05-13 15:25:43 -06:00
Hiroaki Nakamura
e6f7c6052d Add two environment variables for overriding endpoint and region
```
export LITESTREAM_ACCESS_KEY_ID=your_key_id
export LITESTREAM_SECRET_ACCESS_KEY=your_access_key
export LITESTREAM_ENDPOINT=your_endpoint
export LITESTREAM_REGION=your_region
litestream replicate fruits.db s3://mybkt/fruits.db
```
2022-05-10 08:48:48 -06:00
Ben Johnson
7d8b8c6ec0 Remove verbose flag from restore docs 2022-05-03 06:33:53 -07:00
Michael Lynch
88737d7164 Add a unit test for internal.MD5Hash 2022-04-17 15:02:37 -06:00
Michael Lynch
6763e9218c Fix path to coverage file 2022-04-17 15:02:32 -06:00
Michael Lynch
301e1172fd Add Go code coverage to CI 2022-04-17 15:02:32 -06:00
Ben Johnson
ca07137d32 Re-add point-in-time restore 2022-04-14 20:03:52 -06:00
Ben Johnson
80f8de4d9e Fix release workflow 2022-04-09 10:34:26 -06:00
60 changed files with 692 additions and 2079 deletions

@@ -22,4 +22,9 @@ jobs:
run: go install ./cmd/litestream
- name: Run unit tests
run: make testdata && go test -v ./...
run: make testdata && go test -v --coverprofile=.coverage.out ./... && go tool cover -html .coverage.out -o .coverage.html
- uses: actions/upload-artifact@v3
with:
name: code-coverage
path: .coverage.html

@@ -23,7 +23,6 @@ jobs:
- arch: amd64
cc: gcc
static: true
deploy_test_runner: true
- arch: arm64
cc: aarch64-linux-gnu-gcc
@@ -105,6 +104,13 @@ jobs:
path: dist/litestream-${{ env.VERSION }}-${{ env.GOOS }}-${{ env.GOARCH }}${{ env.GOARM }}${{ env.SUFFIX }}.deb
if-no-files-found: error
- name: Get release
id: release
uses: bruceadams/get-release@v1.2.3
if: github.event_name == 'release'
env:
GITHUB_TOKEN: ${{ github.token }}
- name: Upload release tarball
uses: actions/upload-release-asset@v1.0.2
if: github.event_name == 'release'

.gitignore

@@ -1,2 +1,3 @@
.coverage.*
.DS_Store
/dist

@@ -1,4 +1,4 @@
FROM golang:1.17 as builder
FROM golang:1.20.1 as builder
WORKDIR /src/litestream
COPY . .
@@ -10,7 +10,7 @@ RUN --mount=type=cache,target=/root/.cache/go-build \
go build -ldflags "-s -w -X 'main.Version=${LITESTREAM_VERSION}' -extldflags '-static'" -tags osusergo,netgo,sqlite_omit_load_extension -o /usr/local/bin/litestream ./cmd/litestream
FROM alpine
FROM alpine:3.17.2
COPY --from=builder /usr/local/bin/litestream /usr/local/bin/litestream
ENTRYPOINT ["/usr/local/bin/litestream"]
CMD []

@@ -6,7 +6,7 @@ Litestream
![test](https://github.com/benbjohnson/litestream/workflows/test/badge.svg)
==========
Litestream is a standalone streaming replication tool for SQLite. It runs as a
Litestream is a standalone disaster recovery tool for SQLite. It runs as a
background process and safely replicates changes incrementally to another file
or S3. Litestream only communicates with SQLite through the SQLite API so it
will not corrupt your database.

@@ -6,7 +6,6 @@ import (
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net/url"
"os"
@@ -23,7 +22,6 @@ import (
"github.com/benbjohnson/litestream"
"github.com/benbjohnson/litestream/abs"
"github.com/benbjohnson/litestream/gs"
"github.com/benbjohnson/litestream/http"
"github.com/benbjohnson/litestream/s3"
"github.com/benbjohnson/litestream/sftp"
_ "github.com/mattn/go-sqlite3"
@@ -254,7 +252,7 @@ func readConfigFile(filename string, expandEnv bool) (_ Config, err error) {
// Read configuration.
// Do not return an error if using default path and file is missing.
buf, err := ioutil.ReadFile(filename)
buf, err := os.ReadFile(filename)
if err != nil {
return config, err
}
@@ -284,7 +282,6 @@ func readConfigFile(filename string, expandEnv bool) (_ Config, err error) {
// DBConfig represents the configuration for a single database.
type DBConfig struct {
Path string `yaml:"path"`
Upstream UpstreamConfig `yaml:"upstream"`
MonitorDelayInterval *time.Duration `yaml:"monitor-delay-interval"`
CheckpointInterval *time.Duration `yaml:"checkpoint-interval"`
MinCheckpointPageN *int `yaml:"min-checkpoint-page-count"`
@@ -308,16 +305,6 @@ func NewDBFromConfigWithPath(dbc *DBConfig, path string) (*litestream.DB, error)
// Initialize database with given path.
db := litestream.NewDB(path)
// Attach upstream HTTP client if specified.
if upstreamURL := dbc.Upstream.URL; upstreamURL != "" {
// Use local database path if upstream path is not specified.
upstreamPath := dbc.Upstream.Path
if upstreamPath == "" {
upstreamPath = db.Path()
}
db.StreamClient = http.NewClient(upstreamURL, upstreamPath)
}
// Override default database settings if specified in configuration.
if dbc.MonitorDelayInterval != nil {
db.MonitorDelayInterval = *dbc.MonitorDelayInterval
@@ -347,11 +334,6 @@ func NewDBFromConfigWithPath(dbc *DBConfig, path string) (*litestream.DB, error)
return db, nil
}
type UpstreamConfig struct {
URL string `yaml:"url"`
Path string `yaml:"path"`
}
// ReplicaConfig represents the configuration for a single replica in a database.
type ReplicaConfig struct {
Type string `yaml:"type"` // "file", "s3"

@@ -3,7 +3,6 @@ package main_test
import (
"bytes"
"io"
"io/ioutil"
"log"
"os"
"path/filepath"
@@ -23,7 +22,7 @@ func TestReadConfigFile(t *testing.T) {
// Ensure global AWS settings are propagated down to replica configurations.
t.Run("PropagateGlobalSettings", func(t *testing.T) {
filename := filepath.Join(t.TempDir(), "litestream.yml")
if err := ioutil.WriteFile(filename, []byte(`
if err := os.WriteFile(filename, []byte(`
access-key-id: XXX
secret-access-key: YYY
@@ -55,7 +54,7 @@ dbs:
os.Setenv("LITESTREAM_TEST_1872363", "s3://foo/bar")
filename := filepath.Join(t.TempDir(), "litestream.yml")
if err := ioutil.WriteFile(filename, []byte(`
if err := os.WriteFile(filename, []byte(`
dbs:
- path: $LITESTREAM_TEST_0129380
replicas:
@@ -82,7 +81,7 @@ dbs:
os.Setenv("LITESTREAM_TEST_9847533", "s3://foo/bar")
filename := filepath.Join(t.TempDir(), "litestream.yml")
if err := ioutil.WriteFile(filename, []byte(`
if err := os.WriteFile(filename, []byte(`
dbs:
- path: /path/to/db
replicas:

@@ -9,6 +9,7 @@ import (
"os"
"path/filepath"
"strconv"
"time"
"github.com/benbjohnson/litestream"
)
@@ -22,14 +23,15 @@ type RestoreCommand struct {
snapshotIndex int // index of snapshot to start from
// CLI options
configPath string // path to config file
noExpandEnv bool // if true, do not expand env variables in config
outputPath string // path to restore database to
replicaName string // optional, name of replica to restore from
generation string // optional, generation to restore
targetIndex int // optional, last WAL index to replay
ifDBNotExists bool // if true, skips restore if output path already exists
ifReplicaExists bool // if true, skips if no backups exist
configPath string // path to config file
noExpandEnv bool // if true, do not expand env variables in config
outputPath string // path to restore database to
replicaName string // optional, name of replica to restore from
generation string // optional, generation to restore
targetIndex int // optional, last WAL index to replay
timestamp time.Time // optional, restore to point-in-time (ISO 8601)
ifDBNotExists bool // if true, skips restore if output path already exists
ifReplicaExists bool // if true, skips if no backups exist
opt litestream.RestoreOptions
}
@@ -53,6 +55,7 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
fs.StringVar(&c.replicaName, "replica", "", "replica name")
fs.StringVar(&c.generation, "generation", "", "generation name")
fs.Var((*indexVar)(&c.targetIndex), "index", "wal index")
timestampStr := fs.String("timestamp", "", "point-in-time restore (ISO 8601)")
fs.IntVar(&c.opt.Parallelism, "parallelism", c.opt.Parallelism, "parallelism")
fs.BoolVar(&c.ifDBNotExists, "if-db-not-exists", false, "")
fs.BoolVar(&c.ifReplicaExists, "if-replica-exists", false, "")
@@ -66,9 +69,20 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
}
pathOrURL := fs.Arg(0)
// Parse timestamp.
if *timestampStr != "" {
if c.timestamp, err = time.Parse(time.RFC3339Nano, *timestampStr); err != nil {
return fmt.Errorf("invalid -timestamp, expected ISO 8601: %w", err)
}
}
// Ensure a generation is specified if target index is specified.
if c.targetIndex != -1 && c.generation == "" {
if c.targetIndex != -1 && !c.timestamp.IsZero() {
return fmt.Errorf("cannot specify both -index flag and -timestamp flag")
} else if c.targetIndex != -1 && c.generation == "" {
return fmt.Errorf("must specify -generation flag when using -index flag")
} else if !c.timestamp.IsZero() && c.generation == "" {
return fmt.Errorf("must specify -generation flag when using -timestamp flag")
}
// Default to original database path if output path not specified.
@@ -117,7 +131,11 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
}
// Determine the maximum available index for the generation if one is not specified.
if c.targetIndex == -1 {
if !c.timestamp.IsZero() {
if c.targetIndex, err = litestream.FindIndexByTimestamp(ctx, r.Client(), c.generation, c.timestamp); err != nil {
return fmt.Errorf("cannot find index for timestamp in generation %q: %w", c.generation, err)
}
} else if c.targetIndex == -1 {
if c.targetIndex, err = litestream.FindMaxIndexByGeneration(ctx, r.Client(), c.generation); err != nil {
return fmt.Errorf("cannot determine latest index in generation %q: %w", c.generation, err)
}
@@ -239,6 +257,10 @@ Arguments:
Restore up to a specific hex-encoded WAL index (inclusive).
Defaults to use the highest available index.
-timestamp DATETIME
Restore up to a specific point-in-time. Must be ISO 8601.
Cannot be specified with -index flag.
-o PATH
Output path of the restored database.
Defaults to original DB path.
@@ -253,9 +275,6 @@ Arguments:
Determines the number of WAL files downloaded in parallel.
Defaults to `+strconv.Itoa(litestream.DefaultRestoreParallelism)+`.
-v
Verbose output.
Examples:
@@ -271,6 +290,9 @@ Examples:
# Restore database from specific generation on S3.
$ litestream restore -replica s3 -generation xxxxxxxx /path/to/db
# Restore database to a specific point in time.
$ litestream restore -generation xxxxxxxx -timestamp 2000-01-01T00:00:00Z /path/to/db
`[1:],
DefaultConfigPath(),
)

db.go

@@ -9,7 +9,6 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"os"
@@ -54,9 +53,6 @@ type DB struct {
pageSize int // page size, in bytes
notifyCh chan struct{} // notifies DB of changes
// Iterators used to stream new WAL changes to replicas
itrs map[*FileWALSegmentIterator]struct{}
// Cached salt & checksum from current shadow header.
hdr []byte
frame []byte
@@ -85,11 +81,6 @@ type DB struct {
checkpointErrorNCounterVec *prometheus.CounterVec
checkpointSecondsCounterVec *prometheus.CounterVec
// Client used to receive live, upstream changes. If specified, then
// DB should be used as read-only as local changes will conflict with
// upstream changes.
StreamClient StreamClient
// Minimum threshold of WAL size, in pages, before a passive checkpoint.
// A passive checkpoint will attempt a checkpoint but fail if there are
// active transactions occurring at the same time.
@@ -130,8 +121,6 @@ func NewDB(path string) *DB {
path: path,
notifyCh: make(chan struct{}, 1),
itrs: make(map[*FileWALSegmentIterator]struct{}),
MinCheckpointPageN: DefaultMinCheckpointPageN,
MaxCheckpointPageN: DefaultMaxCheckpointPageN,
ShadowRetentionN: DefaultShadowRetentionN,
@@ -275,7 +264,7 @@ func (db *DB) invalidatePos(ctx context.Context) error {
}
// Iterate over all segments to find the last one.
itr, err := db.walSegments(context.Background(), generation, false)
itr, err := db.walSegments(context.Background(), generation)
if err != nil {
return err
}
@@ -302,7 +291,7 @@ func (db *DB) invalidatePos(ctx context.Context) error {
}
defer rd.Close()
n, err := io.Copy(ioutil.Discard, lz4.NewReader(rd))
n, err := io.Copy(io.Discard, lz4.NewReader(rd))
if err != nil {
return err
}
@@ -422,9 +411,7 @@ func (db *DB) Open() (err error) {
return fmt.Errorf("cannot remove tmp files: %w", err)
}
// If an upstream client is specified, then we should simply stream changes
// into the database. If it is not specified, then we should monitor the
// database for local changes and replicate them out.
// Continually monitor local changes in a separate goroutine.
db.g.Go(func() error { return db.monitor(db.ctx) })
return nil
@@ -466,14 +453,6 @@ func (db *DB) Close() (err error) {
}
}
// Remove all iterators.
db.mu.Lock()
for itr := range db.itrs {
itr.SetErr(ErrDBClosed)
delete(db.itrs, itr)
}
db.mu.Unlock()
// Release the read lock to allow other applications to handle checkpointing.
if db.rtx != nil {
if e := db.releaseReadLock(); e != nil && err == nil {
@@ -645,74 +624,6 @@ func (db *DB) init() (err error) {
return nil
}
// initReplica initializes a new database file as a replica of an upstream database.
func (db *DB) initReplica(pageSize int) (err error) {
// Exit if already initialized.
if db.db != nil {
return nil
}
// Obtain permissions for parent directory.
fi, err := os.Stat(filepath.Dir(db.path))
if err != nil {
return err
}
db.dirMode = fi.Mode()
dsn := db.path
dsn += fmt.Sprintf("?_busy_timeout=%d", BusyTimeout.Milliseconds())
// Connect to SQLite database. Use the driver registered with a hook to
// prevent WAL files from being removed.
if db.db, err = sql.Open("litestream-sqlite3", dsn); err != nil {
return err
}
// Initialize database file if it doesn't exist. It doesn't matter what we
// store in it as it will be erased by the replication. We just need to
// ensure a WAL file is created and there is at least a page in the database.
if _, err := os.Stat(db.path); os.IsNotExist(err) {
if _, err := db.db.ExecContext(db.ctx, fmt.Sprintf(`PRAGMA page_size = %d`, pageSize)); err != nil {
return fmt.Errorf("set page size: %w", err)
}
var mode string
if err := db.db.QueryRow(`PRAGMA journal_mode = wal`).Scan(&mode); err != nil {
return err
} else if mode != "wal" {
return fmt.Errorf("enable wal failed, mode=%q", mode)
}
if _, err := db.db.ExecContext(db.ctx, `CREATE TABLE IF NOT EXISTS _litestream (id INTEGER)`); err != nil {
return fmt.Errorf("create _litestream table: %w", err)
} else if _, err := db.db.ExecContext(db.ctx, `PRAGMA wal_checkpoint(TRUNCATE)`); err != nil {
return fmt.Errorf("create _litestream table: %w", err)
}
}
// Obtain file info once we know the database exists.
fi, err = os.Stat(db.path)
if err != nil {
return fmt.Errorf("init file stat: %w", err)
}
db.fileMode = fi.Mode()
db.uid, db.gid = internal.Fileinfo(fi)
// Verify page size matches.
if err := db.db.QueryRowContext(db.ctx, `PRAGMA page_size;`).Scan(&db.pageSize); err != nil {
return fmt.Errorf("read page size: %w", err)
} else if db.pageSize != pageSize {
return fmt.Errorf("page size mismatch: %d <> %d", db.pageSize, pageSize)
}
// Ensure meta directory structure exists.
if err := internal.MkdirAll(db.MetaPath(), db.dirMode, db.uid, db.gid); err != nil {
return err
}
return nil
}
func (db *DB) clearGeneration(ctx context.Context) error {
if err := os.Remove(db.GenerationNamePath()); err != nil && !os.IsNotExist(err) {
return err
@@ -759,7 +670,7 @@ func (db *DB) cleanGenerations(ctx context.Context) error {
}
dir := filepath.Join(db.MetaPath(), "generations")
fis, err := ioutil.ReadDir(dir)
fis, err := os.ReadDir(dir)
if os.IsNotExist(err) {
return nil
} else if err != nil {
@@ -927,11 +838,6 @@ func (db *DB) createGeneration(ctx context.Context) (string, error) {
// Sync copies pending data from the WAL to the shadow WAL.
func (db *DB) Sync(ctx context.Context) error {
if db.StreamClient != nil {
db.Logger.Printf("using upstream client, skipping sync")
return nil
}
const retryN = 5
for i := 0; i < retryN; i++ {
@@ -1386,8 +1292,7 @@ func (db *DB) writeWALSegment(ctx context.Context, pos Pos, rd io.Reader) error
}
defer f.Close()
n, err := io.Copy(f, rd)
if err != nil {
if _, err := io.Copy(f, rd); err != nil {
return err
} else if err := f.Sync(); err != nil {
return err
@@ -1405,50 +1310,9 @@ func (db *DB) writeWALSegment(ctx context.Context, pos Pos, rd io.Reader) error
return fmt.Errorf("write position file: %w", err)
}
// Generate
info := WALSegmentInfo{
Generation: pos.Generation,
Index: pos.Index,
Offset: pos.Offset,
Size: n,
CreatedAt: time.Now(),
}
// Notify all managed segment iterators.
for itr := range db.itrs {
// Notify iterators of generation change.
if itr.Generation() != pos.Generation {
itr.SetErr(ErrGenerationChanged)
delete(db.itrs, itr)
continue
}
// Attempt to append segment to end of iterator.
// On error, mark it on the iterator and remove from future notifications.
if err := itr.Append(info); err != nil {
itr.SetErr(fmt.Errorf("cannot append wal segment: %w", err))
delete(db.itrs, itr)
continue
}
}
return nil
}
// readPositionFile reads the position from the position file.
func (db *DB) readPositionFile() (Pos, error) {
buf, err := os.ReadFile(db.PositionPath())
if os.IsNotExist(err) {
return Pos{}, nil
} else if err != nil {
return Pos{}, err
}
// Treat invalid format as a non-existent file so we return an empty position.
pos, _ := ParsePos(strings.TrimSpace(string(buf)))
return pos, nil
}
// writePositionFile writes pos as the current position.
func (db *DB) writePositionFile(pos Pos) error {
return internal.WriteFile(db.PositionPath(), []byte(pos.String()+"\n"), db.fileMode, db.uid, db.gid)
@@ -1458,10 +1322,10 @@ func (db *DB) writePositionFile(pos Pos) error {
func (db *DB) WALSegments(ctx context.Context, generation string) (*FileWALSegmentIterator, error) {
db.mu.Lock()
defer db.mu.Unlock()
return db.walSegments(ctx, generation, true)
return db.walSegments(ctx, generation)
}
func (db *DB) walSegments(ctx context.Context, generation string, managed bool) (*FileWALSegmentIterator, error) {
func (db *DB) walSegments(ctx context.Context, generation string) (*FileWALSegmentIterator, error) {
ents, err := os.ReadDir(db.ShadowWALDir(generation))
if os.IsNotExist(err) {
return NewFileWALSegmentIterator(db.ShadowWALDir(generation), generation, nil), nil
@@ -1481,27 +1345,7 @@ func (db *DB) walSegments(ctx context.Context, generation string, managed bool)
sort.Ints(indexes)
itr := NewFileWALSegmentIterator(db.ShadowWALDir(generation), generation, indexes)
// Managed iterators will have new segments pushed to them.
if managed {
itr.closeFunc = func() error {
return db.CloseWALSegmentIterator(itr)
}
db.itrs[itr] = struct{}{}
}
return itr, nil
}
// CloseWALSegmentIterator removes itr from the list of managed iterators.
func (db *DB) CloseWALSegmentIterator(itr *FileWALSegmentIterator) error {
db.mu.Lock()
defer db.mu.Unlock()
delete(db.itrs, itr)
return nil
return NewFileWALSegmentIterator(db.ShadowWALDir(generation), generation, indexes), nil
}
// SQLite WAL constants
@@ -1645,15 +1489,8 @@ func (db *DB) execCheckpoint(mode string) (err error) {
return nil
}
func (db *DB) monitor(ctx context.Context) error {
if db.StreamClient != nil {
return db.monitorUpstream(ctx)
}
return db.monitorLocal(ctx)
}
// monitor runs in a separate goroutine and monitors the local database & WAL.
func (db *DB) monitorLocal(ctx context.Context) error {
func (db *DB) monitor(ctx context.Context) error {
var timer *time.Timer
if db.MonitorDelayInterval > 0 {
timer = time.NewTimer(db.MonitorDelayInterval)
@@ -1686,189 +1523,6 @@ func (db *DB) monitorLocal(ctx context.Context) error {
}
}
// monitorUpstream runs in a separate goroutine and streams data into the local DB.
func (db *DB) monitorUpstream(ctx context.Context) error {
for {
if err := db.stream(ctx); err != nil {
if ctx.Err() != nil {
return nil
}
db.Logger.Printf("stream error, retrying: %s", err)
}
// Delay before retrying stream.
select {
case <-ctx.Done():
return nil
case <-time.After(1 * time.Second):
}
}
}
// stream initializes the local database and continuously streams new upstream data.
func (db *DB) stream(ctx context.Context) error {
pos, err := db.readPositionFile()
if err != nil {
return fmt.Errorf("read position file: %w", err)
}
// Continuously stream and apply records from client.
sr, err := db.StreamClient.Stream(ctx, pos)
if err != nil {
return fmt.Errorf("stream connect: %w", err)
}
defer sr.Close()
// Initialize the database and create it if it doesn't exist.
if err := db.initReplica(sr.PageSize()); err != nil {
return fmt.Errorf("init replica: %w", err)
}
for {
hdr, err := sr.Next()
if err != nil {
return err
}
switch hdr.Type {
case StreamRecordTypeSnapshot:
if err := db.streamSnapshot(ctx, hdr, sr); err != nil {
return fmt.Errorf("snapshot: %w", err)
}
case StreamRecordTypeWALSegment:
if err := db.streamWALSegment(ctx, hdr, sr); err != nil {
return fmt.Errorf("wal segment: %w", err)
}
default:
return fmt.Errorf("invalid stream record type: 0x%02x", hdr.Type)
}
}
}
// streamSnapshot reads the snapshot into the WAL and applies it to the main database.
func (db *DB) streamSnapshot(ctx context.Context, hdr *StreamRecordHeader, r io.Reader) error {
// Truncate WAL file.
if _, err := db.db.ExecContext(ctx, `PRAGMA wal_checkpoint(TRUNCATE)`); err != nil {
return fmt.Errorf("truncate: %w", err)
}
// Determine total page count.
pageN := int(hdr.Size / int64(db.pageSize))
ww := NewWALWriter(db.WALPath(), db.fileMode, db.pageSize)
if err := ww.Open(); err != nil {
return fmt.Errorf("open wal writer: %w", err)
}
defer func() { _ = ww.Close() }()
if err := ww.WriteHeader(); err != nil {
return fmt.Errorf("write wal header: %w", err)
}
// Iterate over pages
buf := make([]byte, db.pageSize)
for pgno := uint32(1); ; pgno++ {
// Read snapshot page into a buffer.
if _, err := io.ReadFull(r, buf); err == io.EOF {
break
} else if err != nil {
return fmt.Errorf("read snapshot page %d: %w", pgno, err)
}
// Issue a commit flag when the last page is reached.
var commit uint32
if pgno == uint32(pageN) {
commit = uint32(pageN)
}
// Write page into WAL frame.
if err := ww.WriteFrame(pgno, commit, buf); err != nil {
return fmt.Errorf("write wal frame: %w", err)
}
}
// Close WAL file writer.
if err := ww.Close(); err != nil {
return fmt.Errorf("close wal writer: %w", err)
}
// Invalidate WAL index.
if err := invalidateSHMFile(db.path); err != nil {
return fmt.Errorf("invalidate shm file: %w", err)
}
// Write position to file so other processes can read it.
if err := db.writePositionFile(hdr.Pos()); err != nil {
return fmt.Errorf("write position file: %w", err)
}
db.Logger.Printf("snapshot applied")
return nil
}
// streamWALSegment rewrites a WAL segment into the local WAL and applies it to the main database.
func (db *DB) streamWALSegment(ctx context.Context, hdr *StreamRecordHeader, r io.Reader) error {
// Decompress incoming segment
zr := lz4.NewReader(r)
// Drop WAL header if starting from offset zero.
if hdr.Offset == 0 {
if _, err := io.CopyN(io.Discard, zr, WALHeaderSize); err != nil {
return fmt.Errorf("read wal header: %w", err)
}
}
ww := NewWALWriter(db.WALPath(), db.fileMode, db.pageSize)
if err := ww.Open(); err != nil {
return fmt.Errorf("open wal writer: %w", err)
}
defer func() { _ = ww.Close() }()
if err := ww.WriteHeader(); err != nil {
return fmt.Errorf("write wal header: %w", err)
}
// Iterate over incoming WAL pages.
buf := make([]byte, WALFrameHeaderSize+db.pageSize)
for i := 0; ; i++ {
// Read snapshot page into a buffer.
if _, err := io.ReadFull(zr, buf); err == io.EOF {
break
} else if err != nil {
return fmt.Errorf("read wal frame %d: %w", i, err)
}
// Read page number & commit field.
pgno := binary.BigEndian.Uint32(buf[0:])
commit := binary.BigEndian.Uint32(buf[4:])
// Write page into WAL frame.
if err := ww.WriteFrame(pgno, commit, buf[WALFrameHeaderSize:]); err != nil {
return fmt.Errorf("write wal frame: %w", err)
}
}
// Close WAL file writer.
if err := ww.Close(); err != nil {
return fmt.Errorf("close wal writer: %w", err)
}
// Invalidate WAL index.
if err := invalidateSHMFile(db.path); err != nil {
return fmt.Errorf("invalidate shm file: %w", err)
}
// Write position to file so other processes can read it.
if err := db.writePositionFile(hdr.Pos()); err != nil {
return fmt.Errorf("write position file: %w", err)
}
db.Logger.Printf("wal segment applied: %s", hdr.Pos().String())
return nil
}
// ApplyWAL performs a truncating checkpoint on the given database.
func ApplyWAL(ctx context.Context, dbPath, walPath string) error {
// Copy WAL file from it's staging path to the correct "-wal" location.
@@ -2016,51 +1670,6 @@ func logPrefixPath(path string) string {
return path
}
// invalidateSHMFile clears the iVersion field of the -shm file in order that
// the next transaction will rebuild it.
func invalidateSHMFile(dbPath string) error {
db, err := sql.Open("sqlite3", dbPath)
if err != nil {
return fmt.Errorf("reopen db: %w", err)
}
defer func() { _ = db.Close() }()
if _, err := db.Exec(`PRAGMA wal_checkpoint(PASSIVE)`); err != nil {
return fmt.Errorf("passive checkpoint: %w", err)
}
f, err := os.OpenFile(dbPath+"-shm", os.O_RDWR, 0666)
if err != nil {
return fmt.Errorf("open shm index: %w", err)
}
defer f.Close()
buf := make([]byte, WALIndexHeaderSize)
if _, err := io.ReadFull(f, buf); err != nil {
return fmt.Errorf("read shm index: %w", err)
}
// Invalidate "isInit" fields.
buf[12], buf[60] = 0, 0
// Rewrite header.
if _, err := f.Seek(0, io.SeekStart); err != nil {
return fmt.Errorf("seek shm index: %w", err)
} else if _, err := f.Write(buf); err != nil {
return fmt.Errorf("overwrite shm index: %w", err)
} else if err := f.Close(); err != nil {
return fmt.Errorf("close shm index: %w", err)
}
// Truncate WAL file again.
var row [3]int
if err := db.QueryRow(`PRAGMA wal_checkpoint(TRUNCATE)`).Scan(&row[0], &row[1], &row[2]); err != nil {
return fmt.Errorf("truncate: %w", err)
}
return nil
}
// A marker error to indicate that a restart checkpoint could not verify
// continuity between WAL indices and a new generation should be started.
var errRestartGeneration = errors.New("restart generation")

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
@@ -111,7 +110,7 @@ func (c *FileReplicaClient) Generations(ctx context.Context) ([]string, error) {
return nil, fmt.Errorf("cannot determine generations path: %w", err)
}
fis, err := ioutil.ReadDir(root)
fis, err := os.ReadDir(root)
if os.IsNotExist(err) {
return nil, nil
} else if err != nil {
@@ -362,9 +361,8 @@ func (c *FileReplicaClient) DeleteWALSegments(ctx context.Context, a []Pos) erro
}
type FileWALSegmentIterator struct {
mu sync.Mutex
notifyCh chan struct{}
closeFunc func() error
mu sync.Mutex
notifyCh chan struct{}
dir string
generation string
@@ -386,12 +384,6 @@ func NewFileWALSegmentIterator(dir, generation string, indexes []int) *FileWALSe
}
func (itr *FileWALSegmentIterator) Close() (err error) {
if itr.closeFunc != nil {
if e := itr.closeFunc(); e != nil && err == nil {
err = e
}
}
if e := itr.Err(); e != nil && err == nil {
err = e
}

go.mod

@@ -1,19 +1,51 @@
module github.com/benbjohnson/litestream
go 1.16
go 1.19
require (
cloud.google.com/go/storage v1.20.0
github.com/Azure/azure-storage-blob-go v0.14.0
github.com/aws/aws-sdk-go v1.42.53
cloud.google.com/go/storage v1.24.0
github.com/Azure/azure-storage-blob-go v0.15.0
github.com/aws/aws-sdk-go v1.44.71
github.com/fsnotify/fsnotify v1.5.4
github.com/mattn/go-shellwords v1.0.12
github.com/mattn/go-sqlite3 v1.14.12
github.com/pierrec/lz4/v4 v4.1.14
github.com/pkg/sftp v1.13.4
github.com/prometheus/client_golang v1.12.1
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a
google.golang.org/api v0.68.0
github.com/mattn/go-sqlite3 v1.14.14
github.com/pierrec/lz4/v4 v4.1.15
github.com/pkg/sftp v1.13.5
github.com/prometheus/client_golang v1.13.0
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4
google.golang.org/api v0.91.0
gopkg.in/yaml.v2 v2.4.0
)
require (
cloud.google.com/go v0.103.0 // indirect
cloud.google.com/go/compute v1.7.0 // indirect
cloud.google.com/go/iam v0.3.0 // indirect
github.com/Azure/azure-pipeline-go v0.2.3 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.1.0 // indirect
github.com/googleapis/gax-go/v2 v2.5.1 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/mattn/go-ieproxy v0.0.7 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/net v0.0.0-20220805013720-a33c5aa5df48 // indirect
golang.org/x/oauth2 v0.0.0-20220808172628-8227340efae7 // indirect
golang.org/x/sys v0.0.0-20220808155132-1c4a2a72c664 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220808204814-fd01256a5276 // indirect
google.golang.org/grpc v1.48.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
)

go.sum

@@ -26,9 +26,11 @@ cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+Y
cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
cloud.google.com/go v0.100.1/go.mod h1:fs4QogzfH5n2pBXBP9vRiU+eCny7lD2vmFZy79Iuw1U=
cloud.google.com/go v0.100.2 h1:t9Iw5QH5v4XtlEQaCtUY7x6sCABps8sW0acw7e2WQ6Y=
cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU=
cloud.google.com/go v0.103.0 h1:YXtxp9ymmZjlGzxV7VrYQ8aaQuAgcqxSy6YhDX4I458=
cloud.google.com/go v0.103.0/go.mod h1:vwLx1nqLrzLX/fpwSMOXmFIqBOyHsvHbnAdbGSJ+mKk=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@@ -36,12 +38,16 @@ cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUM
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
cloud.google.com/go/compute v1.2.0 h1:EKki8sSdvDU0OO9mAXGwPXOTOgPz2l08R0/IutDH11I=
cloud.google.com/go/compute v1.2.0/go.mod h1:xlogom/6gr8RJGBe7nT2eGsQYAFUbbv8dbC29qE3Xmw=
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU=
cloud.google.com/go/compute v1.7.0 h1:v/k9Eueb8aAJ0vZuxKMrgm6kPhCLZU9HxFU+AFDs9Uk=
cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/iam v0.1.1 h1:4CapQyNFjiksks1/x7jsvsygFPhihslYk5GptIrlX68=
cloud.google.com/go/iam v0.1.1/go.mod h1:CKqrcnI/suGpybEHxZ7BMehL0oA4LpdyJdUlTl9jVMw=
cloud.google.com/go/iam v0.3.0 h1:exkAomrVUuzx9kWFI1wm3KI0uoDeUFPB4kKGzx6x+Gc=
cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
@@ -51,13 +57,15 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.20.0 h1:kv3rQ3clEQdxqokkCCgQo+bxPqcuXiROjxvnKb8Oqdk=
cloud.google.com/go/storage v1.20.0/go.mod h1:TiC1o6FxNCG8y5gB7rqCsFZCIYPMPZCO81ppOoEPLGI=
cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc=
cloud.google.com/go/storage v1.24.0 h1:a4N0gIkx83uoVFGz8B2eAV3OhN90QoWF5OZWLKl39ig=
cloud.google.com/go/storage v1.24.0/go.mod h1:3xrJEFMXBsQLgxwThyjuD3aYlroL0TMRec1ypGUQ0KE=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-pipeline-go v0.2.3 h1:7U9HBg1JFK3jHl5qmo4CTZKFTVgMwdFHMVtCdfBE21U=
github.com/Azure/azure-pipeline-go v0.2.3/go.mod h1:x841ezTBIMG6O3lAcl8ATHnsOPVl2bqk7S3ta6S6u4k=
github.com/Azure/azure-storage-blob-go v0.14.0 h1:1BCg74AmVdYwO3dlKwtFU1V0wU2PZdREkXvAmZJRUlM=
github.com/Azure/azure-storage-blob-go v0.14.0/go.mod h1:SMqIBi+SuiQH32bvyjngEewEeXoPfKMgWlBDaYf6fck=
github.com/Azure/azure-storage-blob-go v0.15.0 h1:rXtgp8tN1p29GvpGgfJetavIG0V7OgcSXPpwp3tx6qk=
github.com/Azure/azure-storage-blob-go v0.15.0/go.mod h1:vbjsVbX0dlxnRc4FFMPsS9BsJWPcne7GB7onqlPvz58=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest/adal v0.9.13 h1:Mp5hbtOePIzM8pJVRa3YLrWWmZtoxRXqUEzCfJt3+/Q=
@@ -78,14 +86,13 @@ github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/aws/aws-sdk-go v1.42.53 h1:56T04NWcmc0ZVYFbUc6HdewDQ9iHQFlmS6hj96dRjJs=
github.com/aws/aws-sdk-go v1.42.53/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc=
github.com/aws/aws-sdk-go v1.44.71 h1:e5ZbeFAdDB9i7NcQWdmIiA/NOC4aWec3syOUtUE0dBA=
github.com/aws/aws-sdk-go v1.44.71/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
@@ -97,7 +104,12 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -108,9 +120,13 @@ github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5y
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible h1:TcekIExNqud5crz4xD2pavyTgWiPvpYe4Xau31I0PRk=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -118,16 +134,19 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
@@ -170,8 +189,9 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
@@ -195,13 +215,22 @@ github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.2.0 h1:qJYtXnJRWmpe7m/3XlyhrsLrEURqHRM2kxzoxXqyUDs=
github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/enterprise-certificate-proxy v0.1.0 h1:zO8WHNx/MYiAKJ3d5spxZXZE6KHmIQGQcAzwUzV7qQw=
github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
github.com/googleapis/gax-go/v2 v2.1.1 h1:dp3bWCh+PPO1zjRRiCSczJav13sBvG4UhNyVTa1KqdU=
github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM=
github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM=
github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c=
github.com/googleapis/gax-go/v2 v2.5.1 h1:kBRZU0PSuI7PspsSb/ChWoVResUcwNVIdpB049pKTiw=
github.com/googleapis/gax-go/v2 v2.5.1/go.mod h1:h6B0KMMFNtI2ddbGJn3T3ZbwkeT6yqEF02fYlzkUCyo=
github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
@@ -232,14 +261,13 @@ github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfn
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/mattn/go-ieproxy v0.0.1 h1:qiyop7gCflfhwCzGyeT0gro3sF9AIg9HU98JORTkqfI=
github.com/mattn/go-ieproxy v0.0.1/go.mod h1:pYabZ6IHcRpFh7vIaLfK7rdcWgFEb3SFJ6/gNWuh88E=
github.com/mattn/go-ieproxy v0.0.7 h1:d2hBmNUJOAf2aGgzMQtz1wBByJQvRk72/1TXBiCVHXU=
github.com/mattn/go-ieproxy v0.0.7/go.mod h1:6ZpRmhBaYuBX1U2za+9rC9iCGLsSp2tftelZne7CPko=
github.com/mattn/go-shellwords v1.0.12 h1:M2zGm7EW6UQJvDeQxo4T51eKPurbeFbe8WtebGE2xrk=
github.com/mattn/go-shellwords v1.0.12/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/mattn/go-sqlite3 v1.14.11 h1:gt+cp9c0XGqe9S/wAHTL3n/7MqY+siPWgWJgqdsFrzQ=
github.com/mattn/go-sqlite3 v1.14.11/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/mattn/go-sqlite3 v1.14.12 h1:TJ1bhYJPV44phC+IMu1u2K/i5RriLTPe+yc68XDJ1Z0=
github.com/mattn/go-sqlite3 v1.14.12/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/mattn/go-sqlite3 v1.14.14 h1:qZgc/Rwetq+MtyE18WhzjokPD93dNqLGNT3QJuLvBGw=
github.com/mattn/go-sqlite3 v1.14.14/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -249,22 +277,23 @@ github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3Rllmb
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pierrec/lz4/v4 v4.1.14 h1:+fL8AQEZtz/ijeNnpduH0bROTu0O3NZAlPjQxGn8LwE=
github.com/pierrec/lz4/v4 v4.1.14/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pierrec/lz4/v4 v4.1.15 h1:MO0/ucJhngq7299dKLwIMtgTfbkoSPF6AoMYDd8Q4q0=
github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.13.4 h1:Lb0RYJCmgUcBgZosfoi9Y9sbl6+LJgOIgk/2Y4YjMFg=
github.com/pkg/sftp v1.13.4/go.mod h1:LzqnAvaD5TWeNBsZpfKxSYn1MbjWwOsCIAFFJbpIsK8=
github.com/pkg/sftp v1.13.5 h1:a3RLUqkyjYRtBTZJZ1VRrKbN3zhuPLlUc3sphVz81go=
github.com/pkg/sftp v1.13.5/go.mod h1:wHDZ0IZX6JcBYRK1TH9bcVq8G7TLpVHYIGJRFnmPfxg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVDGF+06gjBzk=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -273,14 +302,16 @@ github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6T
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1 h1:hWIdL3N2HoUx3B8j3YN9mWor0qhY/NlEKZEaXxuIRh4=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -317,9 +348,10 @@ golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce h1:Roh6XWxHFKrPgC/EQhVubSAGQ6Ozk6IdxHSzt1mR0EI=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa h1:zuSxTR4o9y82ebqCUJYNGJbGPo6sKVl54f/TVDObg1c=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -394,9 +426,19 @@ golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLd
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210610132358-84b48f89b13b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f h1:hEYJvxw1lSnWIl8X9ofsYMklzaDs90JI2az5YMd4fPM=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220107192237-5cfca573fb4d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220617184016-355a448f1bc9/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220805013720-a33c5aa5df48 h1:N9Vc/rorQUDes6B9CNdIxAn5jODGj2wzfrei2x4wNj4=
golang.org/x/net v0.0.0-20220805013720-a33c5aa5df48/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -412,8 +454,14 @@ golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 h1:RerP+noqYHUQ8CMRcPlC2nvTa4dcBIjegkuWdcUDuqg=
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220608161450-d0670ef3b1eb/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/oauth2 v0.0.0-20220622183110-fd043fe589d2/go.mod h1:jaDAt6Dkxork7LmZnYtzbRWj0W47D86a3TGe0YHBvmE=
golang.org/x/oauth2 v0.0.0-20220808172628-8227340efae7 h1:dtndE8FcEta75/4kHF3AbpuWzV6f1LjnLrM4pe2SZrw=
golang.org/x/oauth2 v0.0.0-20220808172628-8227340efae7/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -424,8 +472,10 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4 h1:uVc8UZUe6tr40fFVnUP5Oj+veunVezqYl9z7DYw9xzw=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -458,7 +508,6 @@ golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -472,7 +521,6 @@ golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -486,11 +534,24 @@ golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220110181412-a018aaa089fe/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a h1:ppl5mZgokTT8uPkmYOyEUmPTr3ypaKkg5eFOGrAmxxE=
golang.org/x/sys v0.0.0-20220204135822-1c1b9b1eba6a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220624220833-87e55d714810/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220808155132-1c4a2a72c664 h1:v1W7bwXHsnLLloWYTVEdvGvA7BHMeBYsPcF0GLDxIRs=
golang.org/x/sys v0.0.0-20220808155132-1c4a2a72c664/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -498,8 +559,9 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -557,8 +619,11 @@ golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f h1:uF6paiQQebLeSXkrTqHqz0MXhXXS1KgF41eUdBNvxK0=
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -590,10 +655,18 @@ google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqiv
google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
google.golang.org/api v0.64.0/go.mod h1:931CdxA8Rm4t6zqTFGSsgwbAEZ2+GMYurbndwSimebM=
google.golang.org/api v0.66.0/go.mod h1:I1dmXYpX7HGwz/ejRxwQp2qj5bFAz93HiCU1C1oYd9M=
google.golang.org/api v0.68.0 h1:9eJiHhwJKIYX6sX2fUZxQLi7pDRA/MYu8c12q6WbJik=
google.golang.org/api v0.68.0/go.mod h1:sOM8pTpwgflXRhz+oC8H2Dr+UcbMqkPPWNJo88Q7TH8=
google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/SkfA=
google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8=
google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs=
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
google.golang.org/api v0.85.0/go.mod h1:AqZf8Ep9uZ2pyTvgL+x0D3Zt0eoT9b5E8fmzfu6FO2g=
google.golang.org/api v0.86.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
google.golang.org/api v0.91.0 h1:731+JzuwaJoZXRQGmPoBiV+SrsAfUaIkdMCWTcQNPyA=
google.golang.org/api v0.91.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -641,6 +714,7 @@ google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210329143202-679c6ae281ee/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
@@ -662,12 +736,28 @@ google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ6
google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211223182754-3ac035c7e7cb/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220111164026-67b88f271998/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220114231437-d2e6a121cae0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220201184016-50beb8ab5c44/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220204002441-d6cc3cc0770e h1:hXl9hnyOkeznztYpYxVPAVZfPzcbO6Q0C+nLXodza8k=
google.golang.org/genproto v0.0.0-20220204002441-d6cc3cc0770e/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220421151946-72621c1f0bd3/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220429170224-98d788798c3e/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220505152158-f39f71e6c8f3/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220624142145-8cd45d7dbd1f/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220628213854-d9e0b6570c03/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220808204814-fd01256a5276 h1:7PEE9xCtufpGJzrqweakEEnTh7YFELmnKm/ee+5jmfQ=
google.golang.org/genproto v0.0.0-20220808204814-fd01256a5276/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -693,8 +783,14 @@ google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.40.1 h1:pnP7OclFFFgFi4VHQDQDaoXUVauOFyktqTsqqgzFKbc=
google.golang.org/grpc v1.40.1/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.48.0 h1:rQOsyJ/8+ufEDJd/Gdsz7HG220Mh9HAhFHRGnIjda0w=
google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
@@ -708,8 +804,10 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -1,140 +0,0 @@
package http
import (
"context"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"github.com/benbjohnson/litestream"
)
// Client represents a client for a streaming Litestream HTTP server.
type Client struct {
// Upstream endpoint
URL string
// Path of database on upstream server.
Path string
// Underlying HTTP client
HTTPClient *http.Client
}
// NewClient returns an instance of Client.
func NewClient(rawurl, path string) *Client {
return &Client{
URL: rawurl,
Path: path,
HTTPClient: http.DefaultClient,
}
}
// Stream returns a snapshot and continuous stream of WAL updates.
func (c *Client) Stream(ctx context.Context, pos litestream.Pos) (litestream.StreamReader, error) {
u, err := url.Parse(c.URL)
if err != nil {
return nil, fmt.Errorf("invalid client URL: %w", err)
} else if u.Scheme != "http" && u.Scheme != "https" {
return nil, fmt.Errorf("invalid URL scheme")
} else if u.Host == "" {
return nil, fmt.Errorf("URL host required")
}
// Add path & position to the query string.
q := url.Values{"path": []string{c.Path}}
if !pos.IsZero() {
q.Set("generation", pos.Generation)
q.Set("index", litestream.FormatIndex(pos.Index))
}
// Strip off everything but the scheme & host.
*u = url.URL{
Scheme: u.Scheme,
Host: u.Host,
Path: "/stream",
RawQuery: q.Encode(),
}
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
req = req.WithContext(ctx)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
} else if resp.StatusCode != http.StatusOK {
resp.Body.Close()
return nil, fmt.Errorf("invalid response: code=%d", resp.StatusCode)
}
pageSize, _ := strconv.Atoi(resp.Header.Get("Litestream-page-size"))
if pageSize <= 0 {
resp.Body.Close()
return nil, fmt.Errorf("stream page size unavailable")
}
return &StreamReader{
pageSize: pageSize,
rc: resp.Body,
lr: io.LimitedReader{R: resp.Body},
}, nil
}
// StreamReader represents an optional snapshot followed by a continuous stream
// of WAL updates. It is used to implement live read replication from a single
// primary Litestream server to one or more remote Litestream replicas.
type StreamReader struct {
pageSize int
rc io.ReadCloser
lr io.LimitedReader
}
// Close closes the underlying reader.
func (r *StreamReader) Close() error {
return r.rc.Close()
}
// PageSize returns the page size on the remote database.
func (r *StreamReader) PageSize() int { return r.pageSize }
// Read reads bytes of the current payload into p. Only valid after a successful
// call to Next(). On io.EOF, call Next() again to begin reading the next record.
func (r *StreamReader) Read(p []byte) (n int, err error) {
return r.lr.Read(p)
}
// Next returns the next available record. This call will block until a record
// is available. After calling Next(), read the payload from the reader using
// Read() until io.EOF is reached.
func (r *StreamReader) Next() (*litestream.StreamRecordHeader, error) {
// If bytes remain on the current file, discard.
if r.lr.N > 0 {
if _, err := io.Copy(io.Discard, &r.lr); err != nil {
return nil, err
}
}
// Read record header.
buf := make([]byte, litestream.StreamRecordHeaderSize)
if _, err := io.ReadFull(r.rc, buf); err != nil {
return nil, fmt.Errorf("http.StreamReader.Next(): %w", err)
}
var hdr litestream.StreamRecordHeader
if err := hdr.UnmarshalBinary(buf); err != nil {
return nil, err
}
// Update remaining bytes on file reader.
r.lr.N = hdr.Size
return &hdr, nil
}
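
Taken together, a consumer drives this client by alternating Next() and Read(): Next() frames each record, and the payload is drained before Next() is called again. The sketch below shows such a consumer; the endpoint, database path, and import alias are hypothetical, and it assumes the litestream.StreamReader interface exposes the Next/Read/Close methods implemented here.

package main

import (
    "context"
    "io"
    "log"

    "github.com/benbjohnson/litestream"
    lshttp "github.com/benbjohnson/litestream/http"
)

func main() {
    // Hypothetical upstream endpoint and database path.
    c := lshttp.NewClient("http://localhost:9090", "/var/lib/app/db")

    // A zero position asks the server to begin with a full snapshot.
    sr, err := c.Stream(context.Background(), litestream.Pos{})
    if err != nil {
        log.Fatal(err)
    }
    defer sr.Close()

    for {
        hdr, err := sr.Next() // blocks until a record is available
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("record: type=%d size=%d", hdr.Type, hdr.Size)

        // Drain the payload; a real replica would apply it locally.
        if _, err := io.Copy(io.Discard, sr); err != nil {
            log.Fatal(err)
        }
    }
}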


@@ -2,13 +2,11 @@ package http
import (
"fmt"
"io"
"log"
"net"
"net/http"
httppprof "net/http/pprof"
"os"
"strconv"
"strings"
"github.com/benbjohnson/litestream"
@@ -113,190 +111,7 @@ func (s *Server) serveHTTP(w http.ResponseWriter, r *http.Request) {
switch r.URL.Path {
case "/metrics":
s.promHandler.ServeHTTP(w, r)
case "/stream":
switch r.Method {
case http.MethodGet:
s.handleGetStream(w, r)
default:
s.writeError(w, r, "Method not allowed", http.StatusMethodNotAllowed)
}
default:
http.NotFound(w, r)
}
}
func (s *Server) handleGetStream(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
path := q.Get("path")
if path == "" {
s.writeError(w, r, "Database name required", http.StatusBadRequest)
return
}
// Parse current client position, if available.
var pos litestream.Pos
if generation, index := q.Get("generation"), q.Get("index"); generation != "" && index != "" {
pos.Generation = generation
var err error
if pos.Index, err = litestream.ParseIndex(index); err != nil {
s.writeError(w, r, "Invalid index query parameter", http.StatusBadRequest)
return
}
}
// Fetch database instance from the primary server.
db := s.server.DB(path)
if db == nil {
s.writeError(w, r, "Database not found", http.StatusNotFound)
return
}
// Set the page size in the header.
w.Header().Set("Litestream-page-size", strconv.Itoa(db.PageSize()))
// Determine starting position.
dbPos := db.Pos()
if dbPos.Generation == "" {
s.writeError(w, r, "No generation available", http.StatusServiceUnavailable)
return
}
dbPos.Offset = 0
// Use database position if generation has changed.
var snapshotRequired bool
if pos.Generation != dbPos.Generation {
s.Logger.Printf("stream generation mismatch, using primary position: client.pos=%s", pos)
pos, snapshotRequired = dbPos, true
}
// Obtain iterator before snapshot so we don't miss any WAL segments.
fitr, err := db.WALSegments(r.Context(), pos.Generation)
if err != nil {
s.writeError(w, r, fmt.Sprintf("Cannot obtain WAL iterator: %s", err), http.StatusInternalServerError)
return
}
defer fitr.Close()
bitr := litestream.NewBufferedWALSegmentIterator(fitr)
// Peek at first position to see if client is too old.
if info, ok := bitr.Peek(); !ok {
s.writeError(w, r, "cannot peek WAL iterator, no segments available", http.StatusInternalServerError)
return
} else if cmp, err := litestream.ComparePos(pos, info.Pos()); err != nil {
s.writeError(w, r, fmt.Sprintf("cannot compare pos: %s", err), http.StatusInternalServerError)
return
} else if cmp == -1 {
s.Logger.Printf("stream position no longer available, using using primary position: client.pos=%s", pos)
pos, snapshotRequired = dbPos, true
}
s.Logger.Printf("stream connected: pos=%s snapshot=%v", pos, snapshotRequired)
defer s.Logger.Printf("stream disconnected")
// Write snapshot to response body.
if snapshotRequired {
if err := db.WithFile(func(f *os.File) error {
fi, err := f.Stat()
if err != nil {
return err
}
// Write snapshot header with current position & size.
hdr := litestream.StreamRecordHeader{
Type: litestream.StreamRecordTypeSnapshot,
Generation: pos.Generation,
Index: pos.Index,
Size: fi.Size(),
}
if buf, err := hdr.MarshalBinary(); err != nil {
return fmt.Errorf("marshal snapshot stream record header: %w", err)
} else if _, err := w.Write(buf); err != nil {
return fmt.Errorf("write snapshot stream record header: %w", err)
}
if _, err := io.CopyN(w, f, fi.Size()); err != nil {
return fmt.Errorf("copy snapshot: %w", err)
}
return nil
}); err != nil {
s.writeError(w, r, err.Error(), http.StatusInternalServerError)
return
}
// Flush after snapshot has been written.
w.(http.Flusher).Flush()
}
for {
// Wait for notification of new entries.
select {
case <-r.Context().Done():
return
case <-fitr.NotifyCh():
}
for bitr.Next() {
info := bitr.WALSegment()
// Skip any segments before our initial position.
if cmp, err := litestream.ComparePos(info.Pos(), pos); err != nil {
s.Logger.Printf("pos compare: %s", err)
return
} else if cmp == -1 {
continue
}
hdr := litestream.StreamRecordHeader{
Type: litestream.StreamRecordTypeWALSegment,
Flags: 0,
Generation: info.Generation,
Index: info.Index,
Offset: info.Offset,
Size: info.Size,
}
// Write record header.
data, err := hdr.MarshalBinary()
if err != nil {
s.Logger.Printf("marshal WAL segment stream record header: %s", err)
return
} else if _, err := w.Write(data); err != nil {
s.Logger.Printf("write WAL segment stream record header: %s", err)
return
}
// Copy WAL segment data to writer.
if err := func() error {
rd, err := db.WALSegmentReader(r.Context(), info.Pos())
if err != nil {
return fmt.Errorf("cannot fetch wal segment reader: %w", err)
}
defer rd.Close()
if _, err := io.CopyN(w, rd, hdr.Size); err != nil {
return fmt.Errorf("cannot copy wal segment: %w", err)
}
return nil
}(); err != nil {
log.Print(err)
return
}
// Flush after WAL segment has been written.
w.(http.Flusher).Flush()
}
if bitr.Err() != nil {
s.Logger.Printf("wal iterator error: %s", err)
return
}
}
}
func (s *Server) writeError(w http.ResponseWriter, r *http.Request, err string, code int) {
s.Logger.Printf("error: %s", err)
http.Error(w, err, code)
}


@@ -391,254 +391,6 @@ LOOP:
restoreAndVerify(t, ctx, env, filepath.Join(testDir, "litestream.yml"), filepath.Join(tempDir, "db"))
}
// Ensure a database can be replicated over HTTP.
func TestCmd_Replicate_HTTP(t *testing.T) {
ctx := context.Background()
testDir, tempDir := filepath.Join("testdata", "replicate", "http"), t.TempDir()
if err := os.Mkdir(filepath.Join(tempDir, "0"), 0777); err != nil {
t.Fatal(err)
} else if err := os.Mkdir(filepath.Join(tempDir, "1"), 0777); err != nil {
t.Fatal(err)
}
env0 := []string{"LITESTREAM_TEMPDIR=" + tempDir}
env1 := []string{"LITESTREAM_TEMPDIR=" + tempDir, "LITESTREAM_UPSTREAM_URL=http://localhost:10001"}
cmd0, stdout0, _ := commandContext(ctx, env0, "replicate", "-config", filepath.Join(testDir, "litestream.0.yml"))
if err := cmd0.Start(); err != nil {
t.Fatal(err)
}
cmd1, stdout1, _ := commandContext(ctx, env1, "replicate", "-config", filepath.Join(testDir, "litestream.1.yml"))
if err := cmd1.Start(); err != nil {
t.Fatal(err)
}
db0, err := sql.Open("sqlite3", filepath.Join(tempDir, "0", "db"))
if err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `CREATE TABLE t (id INTEGER PRIMARY KEY)`); err != nil {
t.Fatal(err)
}
defer db0.Close()
// Execute writes periodically.
for i := 0; i < 100; i++ {
t.Logf("[exec] INSERT INTO t (id) VALUES (%d)", i)
if _, err := db0.ExecContext(ctx, `INSERT INTO t (id) VALUES (?)`, i); err != nil {
t.Fatal(err)
}
time.Sleep(100 * time.Millisecond)
}
// Wait for replica to catch up.
time.Sleep(1 * time.Second)
// Verify count in replica table.
db1, err := sql.Open("sqlite3", filepath.Join(tempDir, "1", "db"))
if err != nil {
t.Fatal(err)
}
defer db1.Close()
var n int
if err := db1.QueryRowContext(ctx, `SELECT COUNT(*) FROM t`).Scan(&n); err != nil {
t.Fatal(err)
} else if got, want := n, 100; got != want {
t.Fatalf("replica count=%d, want %d", got, want)
}
// Stop & wait for Litestream command.
killLitestreamCmd(t, cmd1, stdout1) // kill
killLitestreamCmd(t, cmd0, stdout0)
}
// Ensure a database can recover when disconnected from HTTP.
func TestCmd_Replicate_HTTP_PartialRecovery(t *testing.T) {
ctx := context.Background()
testDir, tempDir := filepath.Join("testdata", "replicate", "http-partial-recovery"), t.TempDir()
if err := os.Mkdir(filepath.Join(tempDir, "0"), 0777); err != nil {
t.Fatal(err)
} else if err := os.Mkdir(filepath.Join(tempDir, "1"), 0777); err != nil {
t.Fatal(err)
}
env0 := []string{"LITESTREAM_TEMPDIR=" + tempDir}
env1 := []string{"LITESTREAM_TEMPDIR=" + tempDir, "LITESTREAM_UPSTREAM_URL=http://localhost:10002"}
cmd0, stdout0, _ := commandContext(ctx, env0, "replicate", "-config", filepath.Join(testDir, "litestream.0.yml"))
if err := cmd0.Start(); err != nil {
t.Fatal(err)
}
cmd1, stdout1, _ := commandContext(ctx, env1, "replicate", "-config", filepath.Join(testDir, "litestream.1.yml"))
if err := cmd1.Start(); err != nil {
t.Fatal(err)
}
db0, err := sql.Open("sqlite3", filepath.Join(tempDir, "0", "db"))
if err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `CREATE TABLE t (id INTEGER PRIMARY KEY)`); err != nil {
t.Fatal(err)
}
defer db0.Close()
var index int
insertAndWait := func() {
index++
t.Logf("[exec] INSERT INTO t (id) VALUES (%d)", index)
if _, err := db0.ExecContext(ctx, `INSERT INTO t (id) VALUES (?)`, index); err != nil {
t.Fatal(err)
}
time.Sleep(100 * time.Millisecond)
}
// Execute writes periodically.
for i := 0; i < 50; i++ {
insertAndWait()
}
// Kill the replica.
t.Logf("Killing replica...")
killLitestreamCmd(t, cmd1, stdout1)
t.Logf("Replica killed")
// Keep writing.
for i := 0; i < 25; i++ {
insertAndWait()
}
// Restart replica.
t.Logf("Restarting replica...")
cmd1, stdout1, _ = commandContext(ctx, env1, "replicate", "-config", filepath.Join(testDir, "litestream.1.yml"))
if err := cmd1.Start(); err != nil {
t.Fatal(err)
}
t.Logf("Replica restarted")
// Continue writing...
for i := 0; i < 25; i++ {
insertAndWait()
}
// Wait for replica to catch up.
time.Sleep(1 * time.Second)
// Verify count in replica table.
db1, err := sql.Open("sqlite3", filepath.Join(tempDir, "1", "db"))
if err != nil {
t.Fatal(err)
}
defer db1.Close()
var n int
if err := db1.QueryRowContext(ctx, `SELECT COUNT(*) FROM t`).Scan(&n); err != nil {
t.Fatal(err)
} else if got, want := n, 100; got != want {
t.Fatalf("replica count=%d, want %d", got, want)
}
// Stop & wait for Litestream command.
killLitestreamCmd(t, cmd1, stdout1) // kill
killLitestreamCmd(t, cmd0, stdout0)
}
// Ensure a database can recover when disconnected from HTTP even when the last
// index is no longer available.
func TestCmd_Replicate_HTTP_FullRecovery(t *testing.T) {
ctx := context.Background()
testDir, tempDir := filepath.Join("testdata", "replicate", "http-full-recovery"), t.TempDir()
if err := os.Mkdir(filepath.Join(tempDir, "0"), 0777); err != nil {
t.Fatal(err)
} else if err := os.Mkdir(filepath.Join(tempDir, "1"), 0777); err != nil {
t.Fatal(err)
}
env0 := []string{"LITESTREAM_TEMPDIR=" + tempDir}
env1 := []string{"LITESTREAM_TEMPDIR=" + tempDir, "LITESTREAM_UPSTREAM_URL=http://localhost:10002"}
cmd0, stdout0, _ := commandContext(ctx, env0, "replicate", "-config", filepath.Join(testDir, "litestream.0.yml"))
if err := cmd0.Start(); err != nil {
t.Fatal(err)
}
cmd1, stdout1, _ := commandContext(ctx, env1, "replicate", "-config", filepath.Join(testDir, "litestream.1.yml"))
if err := cmd1.Start(); err != nil {
t.Fatal(err)
}
db0, err := sql.Open("sqlite3", filepath.Join(tempDir, "0", "db"))
if err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db0.ExecContext(ctx, `CREATE TABLE t (id INTEGER PRIMARY KEY)`); err != nil {
t.Fatal(err)
}
defer db0.Close()
var index int
insertAndWait := func() {
index++
t.Logf("[exec] INSERT INTO t (id) VALUES (%d)", index)
if _, err := db0.ExecContext(ctx, `INSERT INTO t (id) VALUES (?)`, index); err != nil {
t.Fatal(err)
}
time.Sleep(100 * time.Millisecond)
}
// Execute writes periodically.
for i := 0; i < 50; i++ {
insertAndWait()
}
// Kill the replica.
t.Logf("Killing replica...")
killLitestreamCmd(t, cmd1, stdout1)
t.Logf("Replica killed")
// Keep writing.
for i := 0; i < 25; i++ {
insertAndWait()
}
// Restart replica.
t.Logf("Restarting replica...")
cmd1, stdout1, _ = commandContext(ctx, env1, "replicate", "-config", filepath.Join(testDir, "litestream.1.yml"))
if err := cmd1.Start(); err != nil {
t.Fatal(err)
}
t.Logf("Replica restarted")
// Continue writing...
for i := 0; i < 25; i++ {
insertAndWait()
}
// Wait for replica to catch up.
time.Sleep(1 * time.Second)
// Verify count in replica table.
db1, err := sql.Open("sqlite3", filepath.Join(tempDir, "1", "db"))
if err != nil {
t.Fatal(err)
}
defer db1.Close()
var n int
if err := db1.QueryRowContext(ctx, `SELECT COUNT(*) FROM t`).Scan(&n); err != nil {
t.Fatal(err)
} else if got, want := n, 100; got != want {
t.Fatalf("replica count=%d, want %d", got, want)
}
// Stop & wait for Litestream command.
killLitestreamCmd(t, cmd1, stdout1) // kill
killLitestreamCmd(t, cmd0, stdout0)
}
// commandContext returns a "litestream" command with stdout/stderr buffers.
func commandContext(ctx context.Context, env []string, arg ...string) (cmd *exec.Cmd, stdout, stderr *internal.LockingBuffer) {
cmd = exec.CommandContext(ctx, "litestream", arg...)


@@ -4,7 +4,7 @@ import (
"context"
"flag"
"fmt"
"io/ioutil"
"io"
"math/rand"
"os"
"path"
@@ -22,12 +22,13 @@ import (
)
func init() {
rand.Seed(time.Now().UnixNano())
localRand = rand.New(rand.NewSource(time.Now().UnixNano()))
}
var (
// Enables integration tests.
replicaType = flag.String("replica-type", "file", "")
localRand *rand.Rand
)
// S3 settings
@@ -193,7 +194,7 @@ func TestReplicaClient_WriteSnapshot(t *testing.T) {
if r, err := c.SnapshotReader(context.Background(), "b16ddcf5c697540f", 1000); err != nil {
t.Fatal(err)
} else if buf, err := ioutil.ReadAll(r); err != nil {
} else if buf, err := io.ReadAll(r); err != nil {
t.Fatal(err)
} else if err := r.Close(); err != nil {
t.Fatal(err)
@@ -224,7 +225,7 @@ func TestReplicaClient_SnapshotReader(t *testing.T) {
}
defer r.Close()
if buf, err := ioutil.ReadAll(r); err != nil {
if buf, err := io.ReadAll(r); err != nil {
t.Fatal(err)
} else if got, want := string(buf), "foo"; got != want {
t.Fatalf("ReadAll=%v, want %v", got, want)
@@ -378,7 +379,7 @@ func TestReplicaClient_WriteWALSegment(t *testing.T) {
if r, err := c.WALSegmentReader(context.Background(), litestream.Pos{Generation: "b16ddcf5c697540f", Index: 1000, Offset: 2000}); err != nil {
t.Fatal(err)
} else if buf, err := ioutil.ReadAll(r); err != nil {
} else if buf, err := io.ReadAll(r); err != nil {
t.Fatal(err)
} else if err := r.Close(); err != nil {
t.Fatal(err)
@@ -409,7 +410,7 @@ func TestReplicaClient_WALSegmentReader(t *testing.T) {
}
defer r.Close()
if buf, err := ioutil.ReadAll(r); err != nil {
if buf, err := io.ReadAll(r); err != nil {
t.Fatal(err)
} else if got, want := string(buf), "foobar"; got != want {
t.Fatalf("ReadAll=%v, want %v", got, want)


@@ -1,36 +0,0 @@
package internal
import (
"errors"
)
// File event mask constants.
const (
FileEventCreated = 1 << iota
FileEventModified
FileEventDeleted
)
// FileEvent represents an event on a watched file.
type FileEvent struct {
Name string
Mask int
}
// ErrFileEventQueueOverflow is returned when the file event queue has overflowed.
var ErrFileEventQueueOverflow = errors.New("file event queue overflow")
// FileWatcher represents a watcher of file events.
type FileWatcher interface {
Open() error
Close() error
// Returns a channel of events for watched files.
Events() <-chan FileEvent
// Adds a specific file to be watched.
Watch(filename string) error
// Removes a specific file from being watched.
Unwatch(filename string) error
}
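
The watcher removed in this diff was consumed by opening it, registering paths, and ranging over Events(). A small sketch of that usage follows; the paths are hypothetical, and the import assumes code living inside the litestream module, since an internal package is not importable from outside it.

package main

import (
    "log"

    "github.com/benbjohnson/litestream/internal"
)

func main() {
    w := internal.NewFileWatcher() // kqueue on BSD/macOS, inotify on Linux
    if err := w.Open(); err != nil {
        log.Fatal(err)
    }
    defer w.Close()

    // Watching a path that does not exist yet is allowed: it is polled
    // until creation and then surfaces as a FileEventCreated event.
    if err := w.Watch("/var/lib/app/db-wal"); err != nil {
        log.Fatal(err)
    }

    for event := range w.Events() {
        if event.Mask&internal.FileEventCreated != 0 {
            log.Printf("created: %s", event.Name)
        }
        if event.Mask&internal.FileEventModified != 0 {
            log.Printf("modified: %s", event.Name)
        }
        if event.Mask&internal.FileEventDeleted != 0 {
            log.Printf("deleted: %s", event.Name)
        }
    }
}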


@@ -1,259 +0,0 @@
//go:build freebsd || openbsd || netbsd || dragonfly || darwin
package internal
import (
"context"
"log"
"os"
"path/filepath"
"sync"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/sys/unix"
)
var _ FileWatcher = (*KqueueFileWatcher)(nil)
// KqueueFileWatcher watches files and is notified of events on them.
//
// Watcher code based on https://github.com/fsnotify/fsnotify
type KqueueFileWatcher struct {
fd int
events chan FileEvent
mu sync.Mutex
watches map[string]int
paths map[int]string
notExists map[string]struct{}
g errgroup.Group
ctx context.Context
cancel func()
}
// NewKqueueFileWatcher returns a new instance of KqueueFileWatcher.
func NewKqueueFileWatcher() *KqueueFileWatcher {
return &KqueueFileWatcher{
events: make(chan FileEvent),
watches: make(map[string]int),
paths: make(map[int]string),
notExists: make(map[string]struct{}),
}
}
// NewFileWatcher returns an instance of KqueueFileWatcher on BSD systems.
func NewFileWatcher() FileWatcher {
return NewKqueueFileWatcher()
}
// Events returns a read-only channel of file events.
func (w *KqueueFileWatcher) Events() <-chan FileEvent {
return w.events
}
// Open initializes the watcher and begins listening for file events.
func (w *KqueueFileWatcher) Open() (err error) {
if w.fd, err = unix.Kqueue(); err != nil {
return err
}
w.ctx, w.cancel = context.WithCancel(context.Background())
w.g.Go(func() error {
if err := w.monitor(w.ctx); err != nil && w.ctx.Err() == nil {
return err
}
return nil
})
w.g.Go(func() error {
if err := w.monitorNotExists(w.ctx); err != nil && w.ctx.Err() == nil {
return err
}
return nil
})
return nil
}
// Close stops watching for file events and cleans up resources.
func (w *KqueueFileWatcher) Close() (err error) {
w.cancel()
if w.fd != 0 {
if e := unix.Close(w.fd); e != nil && err == nil {
err = e
}
}
if e := w.g.Wait(); e != nil && err == nil {
err = e
}
return err
}
// Watch begins watching the given file or directory.
func (w *KqueueFileWatcher) Watch(filename string) error {
w.mu.Lock()
defer w.mu.Unlock()
filename = filepath.Clean(filename)
// If file doesn't exist, monitor separately until it does exist as we
// can't watch non-existent files with kqueue.
if _, err := os.Stat(filename); os.IsNotExist(err) {
w.notExists[filename] = struct{}{}
return nil
}
return w.addWatch(filename)
}
func (w *KqueueFileWatcher) addWatch(filename string) error {
wd, err := unix.Open(filename, unix.O_NONBLOCK|unix.O_RDONLY|unix.O_CLOEXEC, 0700)
if err != nil {
return err
}
// TODO: Handle return count different from 1.
kevent := unix.Kevent_t{Fflags: unix.NOTE_DELETE | unix.NOTE_WRITE}
unix.SetKevent(&kevent, wd, unix.EVFILT_VNODE, unix.EV_ADD|unix.EV_CLEAR|unix.EV_ENABLE)
if _, err := unix.Kevent(w.fd, []unix.Kevent_t{kevent}, nil, nil); err != nil {
return err
}
w.watches[filename] = wd
w.paths[wd] = filename
delete(w.notExists, filename)
return err
}
// Unwatch stops watching the given file or directory.
func (w *KqueueFileWatcher) Unwatch(filename string) error {
w.mu.Lock()
defer w.mu.Unlock()
filename = filepath.Clean(filename)
// Look up watch ID by filename.
wd, ok := w.watches[filename]
if !ok {
return nil
}
// TODO: Handle return count different from 1.
var kevent unix.Kevent_t
unix.SetKevent(&kevent, wd, unix.EVFILT_VNODE, unix.EV_DELETE)
if _, err := unix.Kevent(w.fd, []unix.Kevent_t{kevent}, nil, nil); err != nil {
return err
}
unix.Close(wd)
delete(w.paths, wd)
delete(w.watches, filename)
delete(w.notExists, filename)
return nil
}
// monitorNotExists runs in a separate goroutine and monitors for the creation of
// watched files that do not yet exist.
func (w *KqueueFileWatcher) monitorNotExists(ctx context.Context) error {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
w.checkNotExists(ctx)
}
}
}
func (w *KqueueFileWatcher) checkNotExists(ctx context.Context) {
w.mu.Lock()
defer w.mu.Unlock()
for filename := range w.notExists {
if _, err := os.Stat(filename); os.IsNotExist(err) {
continue
}
if err := w.addWatch(filename); err != nil {
log.Printf("non-existent file monitor: cannot add watch: %s", err)
continue
}
// Send event to channel.
select {
case w.events <- FileEvent{
Name: filename,
Mask: FileEventCreated,
}:
default:
}
}
}
// monitor runs in a separate goroutine and monitors the kqueue event queue.
func (w *KqueueFileWatcher) monitor(ctx context.Context) error {
kevents := make([]unix.Kevent_t, 10)
timeout := unix.NsecToTimespec(int64(100 * time.Millisecond))
for {
n, err := unix.Kevent(w.fd, nil, kevents, &timeout)
if err != nil && err != unix.EINTR {
return err
} else if n < 0 {
continue
}
for _, kevent := range kevents[:n] {
if err := w.recv(ctx, &kevent); err != nil {
return err
}
}
}
}
// recv processes a single event from kqueue.
func (w *KqueueFileWatcher) recv(ctx context.Context, kevent *unix.Kevent_t) error {
if err := ctx.Err(); err != nil {
return err
}
// Look up filename & remove from watcher if this is a delete.
w.mu.Lock()
filename, ok := w.paths[int(kevent.Ident)]
if ok && kevent.Fflags&unix.NOTE_DELETE != 0 {
delete(w.paths, int(kevent.Ident))
delete(w.watches, filename)
unix.Close(int(kevent.Ident))
}
w.mu.Unlock()
// Convert to generic file event mask.
var mask int
if kevent.Fflags&unix.NOTE_WRITE != 0 {
mask |= FileEventModified
}
if kevent.Fflags&unix.NOTE_DELETE != 0 {
mask |= FileEventDeleted
}
// Send event to channel or wait for close.
select {
case <-ctx.Done():
return ctx.Err()
case w.events <- FileEvent{
Name: filename,
Mask: mask,
}:
return nil
}
}


@@ -1,369 +0,0 @@
//go:build linux
package internal
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"sync"
"time"
"unsafe"
"golang.org/x/sync/errgroup"
"golang.org/x/sys/unix"
)
var _ FileWatcher = (*InotifyFileWatcher)(nil)
// InotifyFileWatcher watches files and is notified of events on them.
//
// Watcher code based on https://github.com/fsnotify/fsnotify
type InotifyFileWatcher struct {
inotify struct {
fd int
buf []byte
}
epoll struct {
fd int // epoll_create1() file descriptor
events []unix.EpollEvent
}
pipe struct {
r int // read pipe file descriptor
w int // write pipe file descriptor
}
events chan FileEvent
mu sync.Mutex
watches map[string]int
paths map[int]string
notExists map[string]struct{}
g errgroup.Group
ctx context.Context
cancel func()
}
// NewInotifyFileWatcher returns a new instance of InotifyFileWatcher.
func NewInotifyFileWatcher() *InotifyFileWatcher {
w := &InotifyFileWatcher{
events: make(chan FileEvent),
watches: make(map[string]int),
paths: make(map[int]string),
notExists: make(map[string]struct{}),
}
w.inotify.buf = make([]byte, 4096*unix.SizeofInotifyEvent)
w.epoll.events = make([]unix.EpollEvent, 64)
return w
}
// NewFileWatcher returns an instance of InotifyFileWatcher on Linux systems.
func NewFileWatcher() FileWatcher {
return NewInotifyFileWatcher()
}
// Events returns a read-only channel of file events.
func (w *InotifyFileWatcher) Events() <-chan FileEvent {
return w.events
}
// Open initializes the watcher and begins listening for file events.
func (w *InotifyFileWatcher) Open() (err error) {
w.inotify.fd, err = unix.InotifyInit1(unix.IN_CLOEXEC)
if err != nil {
return fmt.Errorf("cannot init inotify: %w", err)
}
// Initialize epoll and create a non-blocking pipe.
if w.epoll.fd, err = unix.EpollCreate1(unix.EPOLL_CLOEXEC); err != nil {
return fmt.Errorf("cannot create epoll: %w", err)
}
pipe := []int{-1, -1}
if err := unix.Pipe2(pipe[:], unix.O_NONBLOCK|unix.O_CLOEXEC); err != nil {
return fmt.Errorf("cannot create epoll pipe: %w", err)
}
w.pipe.r, w.pipe.w = pipe[0], pipe[1]
// Register inotify fd with epoll
if err := unix.EpollCtl(w.epoll.fd, unix.EPOLL_CTL_ADD, w.inotify.fd, &unix.EpollEvent{
Fd: int32(w.inotify.fd),
Events: unix.EPOLLIN,
}); err != nil {
return fmt.Errorf("cannot add inotify to epoll: %w", err)
}
// Register pipe fd with epoll
if err := unix.EpollCtl(w.epoll.fd, unix.EPOLL_CTL_ADD, w.pipe.r, &unix.EpollEvent{
Fd: int32(w.pipe.r),
Events: unix.EPOLLIN,
}); err != nil {
return fmt.Errorf("cannot add pipe to epoll: %w", err)
}
w.ctx, w.cancel = context.WithCancel(context.Background())
w.g.Go(func() error {
if err := w.monitor(w.ctx); err != nil && w.ctx.Err() == nil {
return err
}
return nil
})
w.g.Go(func() error {
if err := w.monitorNotExists(w.ctx); err != nil && w.ctx.Err() == nil {
return err
}
return nil
})
return nil
}
// Close stops watching for file events and cleans up resources.
func (w *InotifyFileWatcher) Close() (err error) {
w.cancel()
if e := w.wake(); e != nil && err == nil {
err = e
}
if e := w.g.Wait(); e != nil && err == nil {
err = e
}
return err
}
// Watch begins watching the given file or directory.
func (w *InotifyFileWatcher) Watch(filename string) error {
w.mu.Lock()
defer w.mu.Unlock()
filename = filepath.Clean(filename)
// If file doesn't exist, monitor separately until it does exist as we
// can't watch non-existent files with inotify.
if _, err := os.Stat(filename); os.IsNotExist(err) {
w.notExists[filename] = struct{}{}
return nil
}
return w.addWatch(filename)
}
func (w *InotifyFileWatcher) addWatch(filename string) error {
wd, err := unix.InotifyAddWatch(w.inotify.fd, filename, unix.IN_MODIFY|unix.IN_DELETE_SELF)
if err != nil {
return err
}
w.watches[filename] = wd
w.paths[wd] = filename
delete(w.notExists, filename)
return err
}
// Unwatch stops watching the given file or directory.
func (w *InotifyFileWatcher) Unwatch(filename string) error {
w.mu.Lock()
defer w.mu.Unlock()
filename = filepath.Clean(filename)
// Look up watch ID by filename.
wd, ok := w.watches[filename]
if !ok {
return nil
}
if _, err := unix.InotifyRmWatch(w.inotify.fd, uint32(wd)); err != nil {
return err
}
delete(w.paths, wd)
delete(w.watches, filename)
delete(w.notExists, filename)
return nil
}
// monitorNotExists runs in a separate goroutine and monitors for the creation of
// watched files that do not yet exist.
func (w *InotifyFileWatcher) monitorNotExists(ctx context.Context) error {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
w.checkNotExists(ctx)
}
}
}
func (w *InotifyFileWatcher) checkNotExists(ctx context.Context) {
w.mu.Lock()
defer w.mu.Unlock()
for filename := range w.notExists {
if _, err := os.Stat(filename); os.IsNotExist(err) {
continue
}
if err := w.addWatch(filename); err != nil {
log.Printf("non-existent file monitor: cannot add watch: %s", err)
continue
}
// Send event to channel.
select {
case w.events <- FileEvent{
Name: filename,
Mask: FileEventCreated,
}:
default:
}
}
}
// monitor runs in a separate goroutine and monitors the inotify event queue.
func (w *InotifyFileWatcher) monitor(ctx context.Context) error {
// Close all file descriptors once monitor exits.
defer func() {
unix.Close(w.inotify.fd)
unix.Close(w.epoll.fd)
unix.Close(w.pipe.w)
unix.Close(w.pipe.r)
}()
for {
if err := w.wait(ctx); err != nil {
return err
} else if err := w.read(ctx); err != nil {
return err
}
}
}
// read reads from the inotify file descriptor. Automatically retries on EINTR.
func (w *InotifyFileWatcher) read(ctx context.Context) error {
for {
n, err := unix.Read(w.inotify.fd, w.inotify.buf)
if err != nil && err != unix.EINTR {
return err
} else if n < 0 {
continue
}
return w.recv(ctx, w.inotify.buf[:n])
}
}
func (w *InotifyFileWatcher) recv(ctx context.Context, b []byte) error {
if err := ctx.Err(); err != nil {
return err
}
for {
if len(b) == 0 {
return nil
} else if len(b) < unix.SizeofInotifyEvent {
return fmt.Errorf("InotifyFileWatcher.recv(): inotify short record: n=%d", len(b))
}
event := (*unix.InotifyEvent)(unsafe.Pointer(&b[0]))
if event.Mask&unix.IN_Q_OVERFLOW != 0 {
// TODO: Change to notify all watches.
return ErrFileEventQueueOverflow
}
// Remove deleted files from the lookups.
w.mu.Lock()
name, ok := w.paths[int(event.Wd)]
if ok && event.Mask&unix.IN_DELETE_SELF != 0 {
delete(w.paths, int(event.Wd))
delete(w.watches, name)
}
w.mu.Unlock()
//if nameLen > 0 {
// // Point "bytes" at the first byte of the filename
// bytes := (*[unix.PathMax]byte)(unsafe.Pointer(&buf[offset+unix.SizeofInotifyEvent]))[:nameLen:nameLen]
// // The filename is padded with NULL bytes. TrimRight() gets rid of those.
// name += "/" + strings.TrimRight(string(bytes[0:nameLen]), "\000")
//}
// Move to next event.
b = b[unix.SizeofInotifyEvent+event.Len:]
// Skip event if ignored.
if event.Mask&unix.IN_IGNORED != 0 {
continue
}
// Convert to generic file event mask.
var mask int
if event.Mask&unix.IN_MODIFY != 0 {
mask |= FileEventModified
}
if event.Mask&unix.IN_DELETE_SELF != 0 {
mask |= FileEventDeleted
}
// Send event to channel or wait for close.
select {
case <-ctx.Done():
return ctx.Err()
case w.events <- FileEvent{
Name: name,
Mask: mask,
}:
}
}
}
func (w *InotifyFileWatcher) wait(ctx context.Context) error {
for {
n, err := unix.EpollWait(w.epoll.fd, w.epoll.events, -1)
if n == 0 || err == unix.EINTR {
continue
} else if err != nil {
return err
}
// Read events to see if we have data available on inotify or if we have been woken.
var hasData bool
for _, event := range w.epoll.events[:n] {
switch event.Fd {
case int32(w.inotify.fd): // inotify file descriptor
hasData = hasData || event.Events&(unix.EPOLLHUP|unix.EPOLLERR|unix.EPOLLIN) != 0
case int32(w.pipe.r): // wake-up pipe file descriptor
if _, err := unix.Read(w.pipe.r, make([]byte, 1024)); err != nil && err != unix.EAGAIN {
return fmt.Errorf("epoll pipe error: %w", err)
}
}
}
// Check if context is closed and then exit if data is available.
if err := ctx.Err(); err != nil {
return err
} else if hasData {
return nil
}
}
}
func (w *InotifyFileWatcher) wake() error {
if _, err := unix.Write(w.pipe.w, []byte{0}); err != nil && err != unix.EAGAIN {
return err
}
return nil
}


@@ -1,211 +0,0 @@
package internal_test
import (
"database/sql"
"path/filepath"
"strings"
"testing"
"time"
"github.com/benbjohnson/litestream/internal"
_ "github.com/mattn/go-sqlite3"
)
func TestFileWatcher(t *testing.T) {
t.Run("WriteAndRemove", func(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "db")
w := internal.NewFileWatcher()
if err := w.Open(); err != nil {
t.Fatal(err)
}
defer w.Close()
db, err := sql.Open("sqlite3", dbPath)
if err != nil {
t.Fatal(err)
}
defer db.Close()
if _, err := db.Exec(`PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db.Exec(`CREATE TABLE t (x)`); err != nil {
t.Fatal(err)
}
if err := w.Watch(dbPath + "-wal"); err != nil {
t.Fatal(err)
}
// Write to the WAL file & ensure a "modified" event occurs.
if _, err := db.Exec(`INSERT INTO t (x) VALUES (1)`); err != nil {
t.Fatal(err)
}
select {
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for event")
case event := <-w.Events():
if got, want := event.Name, dbPath+"-wal"; got != want {
t.Fatalf("name=%s, want %s", got, want)
} else if got, want := event.Mask, internal.FileEventModified; got != want {
t.Fatalf("mask=0x%02x, want 0x%02x", got, want)
}
}
// Flush any duplicate events.
drainFileEventChannel(w.Events())
// Close database and ensure checkpointed WAL creates a "delete" event.
if err := db.Close(); err != nil {
t.Fatal(err)
}
select {
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for event")
case event := <-w.Events():
if got, want := event.Name, dbPath+"-wal"; got != want {
t.Fatalf("name=%s, want %s", got, want)
} else if got, want := event.Mask, internal.FileEventDeleted; got != want {
t.Fatalf("mask=0x%02x, want 0x%02x", got, want)
}
}
})
t.Run("LargeTx", func(t *testing.T) {
w := internal.NewFileWatcher()
if err := w.Open(); err != nil {
t.Fatal(err)
}
defer w.Close()
dbPath := filepath.Join(t.TempDir(), "db")
db, err := sql.Open("sqlite3", dbPath)
if err != nil {
t.Fatal(err)
} else if _, err := db.Exec(`PRAGMA cache_size = 4`); err != nil {
t.Fatal(err)
} else if _, err := db.Exec(`PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db.Exec(`CREATE TABLE t (x)`); err != nil {
t.Fatal(err)
}
defer db.Close()
if err := w.Watch(dbPath + "-wal"); err != nil {
t.Fatal(err)
}
// Start a transaction to ensure writing large data creates multiple write events.
tx, err := db.Begin()
if err != nil {
t.Fatal(err)
}
defer func() { _ = tx.Rollback() }()
// Write enough data to require a spill.
for i := 0; i < 100; i++ {
if _, err := tx.Exec(`INSERT INTO t (x) VALUES (?)`, strings.Repeat("x", 512)); err != nil {
t.Fatal(err)
}
}
// Ensure spill writes to disk.
select {
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for event")
case event := <-w.Events():
if got, want := event.Name, dbPath+"-wal"; got != want {
t.Fatalf("name=%s, want %s", got, want)
} else if got, want := event.Mask, internal.FileEventModified; got != want {
t.Fatalf("mask=0x%02x, want 0x%02x", got, want)
}
}
// Flush any duplicate events.
drainFileEventChannel(w.Events())
if err := tx.Commit(); err != nil {
t.Fatal(err)
}
// Final commit should spill remaining pages and cause another write event.
select {
case <-time.After(10 * time.Second):
t.Fatal("timeout waiting for event")
case event := <-w.Events():
if got, want := event.Name, dbPath+"-wal"; got != want {
t.Fatalf("name=%s, want %s", got, want)
} else if got, want := event.Mask, internal.FileEventModified; got != want {
t.Fatalf("mask=0x%02x, want 0x%02x", got, want)
}
}
})
t.Run("WatchBeforeCreate", func(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "db")
w := internal.NewFileWatcher()
if err := w.Open(); err != nil {
t.Fatal(err)
}
defer w.Close()
if err := w.Watch(dbPath); err != nil {
t.Fatal(err)
} else if err := w.Watch(dbPath + "-wal"); err != nil {
t.Fatal(err)
}
db, err := sql.Open("sqlite3", dbPath)
if err != nil {
t.Fatal(err)
}
defer db.Close()
if _, err := db.Exec(`CREATE TABLE t (x)`); err != nil {
t.Fatal(err)
}
// Wait for main database creation event.
waitForFileEvent(t, w.Events(), internal.FileEvent{Name: dbPath, Mask: internal.FileEventCreated})
// Write to the WAL file & ensure a "modified" event occurs.
if _, err := db.Exec(`PRAGMA journal_mode = wal`); err != nil {
t.Fatal(err)
} else if _, err := db.Exec(`INSERT INTO t (x) VALUES (1)`); err != nil {
t.Fatal(err)
}
// Wait for WAL creation event.
waitForFileEvent(t, w.Events(), internal.FileEvent{Name: dbPath + "-wal", Mask: internal.FileEventCreated})
})
}
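// drainFileEventChannel discards events until the channel is quiet for 100ms.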
func drainFileEventChannel(ch <-chan internal.FileEvent) {
for {
select {
case <-time.After(100 * time.Millisecond):
return
case <-ch:
}
}
}
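// waitForFileEvent blocks until the expected event arrives or fails the test after 10s.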
func waitForFileEvent(tb testing.TB, ch <-chan internal.FileEvent, want internal.FileEvent) {
tb.Helper()
timeout := time.After(10 * time.Second)
for {
select {
case <-timeout:
tb.Fatalf("timeout waiting for event: %#v", want)
case got := <-ch:
if got == want {
return
}
}
}
}


@@ -176,15 +176,6 @@ func MkdirAll(path string, mode os.FileMode, uid, gid int) error {
return nil
}
// Fileinfo returns syscall fields from a FileInfo object.
func Fileinfo(fi os.FileInfo) (uid, gid int) {
if fi == nil {
return -1, -1
}
stat := fi.Sys().(*syscall.Stat_t)
return int(stat.Uid), int(stat.Gid)
}
// ParseSnapshotPath parses the index from a snapshot filename. Used by path-based replicas.
func ParseSnapshotPath(s string) (index int, err error) {
a := snapshotPathRegex.FindStringSubmatch(s)


@@ -102,6 +102,24 @@ func TestTruncateDuration(t *testing.T) {
}
}
func TestMD5Hash(t *testing.T) {
for _, tt := range []struct {
input []byte
output string
}{
{[]byte{}, "d41d8cd98f00b204e9800998ecf8427e"},
{[]byte{0x0}, "93b885adfe0da089cdf634904fd59f71"},
{[]byte{0x0, 0x1, 0x2, 0x3}, "37b59afd592725f9305e484a5d7f5168"},
{[]byte("Hello, world!"), "6cd3556deb0da54bca060b4c39479839"},
} {
t.Run(fmt.Sprintf("%v", tt.input), func(t *testing.T) {
if got, want := internal.MD5Hash(tt.input), tt.output; got != want {
t.Fatalf("hash=%s, want %s", got, want)
}
})
}
}
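The helper under test is, in all likelihood, just a hex-encoded crypto/md5 digest; a minimal sketch consistent with the vectors above (the real implementation lives in the internal package):

package internal

import (
	"crypto/md5"
	"encoding/hex"
)

// MD5Hash returns the lowercase hex encoding of the MD5 digest of b.
func MD5Hash(b []byte) string {
	sum := md5.Sum(b) // md5.Sum returns a [16]byte array
	return hex.EncodeToString(sum[:])
}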
func TestOnceCloser(t *testing.T) {
var closed bool
var rc = &mock.ReadCloser{

internal/internal_unix.go

@@ -0,0 +1,18 @@
//go:build !windows
// +build !windows
package internal
import (
"os"
"syscall"
)
// Fileinfo returns syscall fields from a FileInfo object.
func Fileinfo(fi os.FileInfo) (uid, gid int) {
if fi == nil {
return -1, -1
}
stat := fi.Sys().(*syscall.Stat_t)
return int(stat.Uid), int(stat.Gid)
}


@@ -0,0 +1,13 @@
//go:build windows
// +build windows
package internal
import (
"os"
)
// Fileinfo returns syscall fields from a FileInfo object.
// Ownership is not exposed on Windows, so it always returns -1, -1.
func Fileinfo(fi os.FileInfo) (uid, gid int) {
return -1, -1
}
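Pairing this with the MkdirAll helper shown above, a caller might propagate ownership like so (dbPath and metaDir are placeholders; on Windows both values come back as -1):

// Illustrative only: copy the ownership of an existing database file
// onto a newly created directory.
fi, err := os.Stat(dbPath)
if err != nil {
	return err
}
uid, gid := internal.Fileinfo(fi) // -1, -1 on Windows
if err := internal.MkdirAll(metaDir, 0o755, uid, gid); err != nil {
	return err
}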


@@ -1,7 +1,6 @@
package litestream
import (
"context"
"database/sql"
"encoding/binary"
"errors"
@@ -536,76 +535,6 @@ func ParseOffset(s string) (int64, error) {
return int64(v), nil
}
const (
StreamRecordTypeSnapshot = 1
StreamRecordTypeWALSegment = 2
)
const StreamRecordHeaderSize = 0 +
4 + 4 + // type, flags
8 + 8 + 8 + 8 // generation, index, offset, size
type StreamRecordHeader struct {
Type int
Flags int
Generation string
Index int
Offset int64
Size int64
}
func (hdr *StreamRecordHeader) Pos() Pos {
return Pos{
Generation: hdr.Generation,
Index: hdr.Index,
Offset: hdr.Offset,
}
}
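// MarshalBinary encodes hdr into its fixed 40-byte big-endian representation.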
func (hdr *StreamRecordHeader) MarshalBinary() ([]byte, error) {
generation, err := strconv.ParseUint(hdr.Generation, 16, 64)
if err != nil {
return nil, fmt.Errorf("invalid generation: %q", hdr.Generation)
}
data := make([]byte, StreamRecordHeaderSize)
binary.BigEndian.PutUint32(data[0:4], uint32(hdr.Type))
binary.BigEndian.PutUint32(data[4:8], uint32(hdr.Flags))
binary.BigEndian.PutUint64(data[8:16], generation)
binary.BigEndian.PutUint64(data[16:24], uint64(hdr.Index))
binary.BigEndian.PutUint64(data[24:32], uint64(hdr.Offset))
binary.BigEndian.PutUint64(data[32:40], uint64(hdr.Size))
return data, nil
}
// UnmarshalBinary from data into hdr.
func (hdr *StreamRecordHeader) UnmarshalBinary(data []byte) error {
if len(data) < StreamRecordHeaderSize {
return io.ErrUnexpectedEOF
}
hdr.Type = int(binary.BigEndian.Uint32(data[0:4]))
hdr.Flags = int(binary.BigEndian.Uint32(data[4:8]))
hdr.Generation = fmt.Sprintf("%016x", binary.BigEndian.Uint64(data[8:16]))
hdr.Index = int(binary.BigEndian.Uint64(data[16:24]))
hdr.Offset = int64(binary.BigEndian.Uint64(data[24:32]))
hdr.Size = int64(binary.BigEndian.Uint64(data[32:40]))
return nil
}
// StreamClient represents a client for streaming changes to a replica DB.
type StreamClient interface {
// Stream returns a reader which contains an optional snapshot followed
// by a series of WAL segments. This stream begins from the given position.
Stream(ctx context.Context, pos Pos) (StreamReader, error)
}
// StreamReader represents a reader that streams snapshot and WAL records.
type StreamReader interface {
io.ReadCloser
PageSize() int
Next() (*StreamRecordHeader, error)
}
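Although this streaming implementation is being removed, the header round-trips as a fixed 40-byte big-endian record; a sketch using the definitions above:

hdr := StreamRecordHeader{
	Type:       StreamRecordTypeWALSegment,
	Generation: "0123456789abcdef", // 16 hex digits, matching UnmarshalBinary's formatting
	Index:      7,
	Offset:     4096,
	Size:       512,
}
data, err := hdr.MarshalBinary() // len(data) == StreamRecordHeaderSize (40 bytes)
if err != nil {
	// handle error
}
var out StreamRecordHeader
if err := out.UnmarshalBinary(data); err != nil {
	// handle error
}
// out now equals hdr, including hdr.Pos().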
// removeDBFiles deletes the database and related files (journal, shm, wal).
func removeDBFiles(filename string) error {
if err := os.Remove(filename); err != nil && !os.IsNotExist(err) {


@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"sort"
@@ -105,7 +104,7 @@ func (r *Replica) Client() ReplicaClient { return r.client }
// Start begins replicating in a background goroutine.
func (r *Replica) Start(ctx context.Context) {
// Ignore if replica is being used synchronously.
if !r.MonitorEnabled {
return
}
@@ -375,7 +374,7 @@ func (r *Replica) calcPos(ctx context.Context, generation string) (pos Pos, err
}
defer rd.Close()
n, err := io.Copy(io.Discard, lz4.NewReader(rd))
if err != nil {
return pos, err
}


@@ -234,6 +234,88 @@ func ReplicaClientTimeBounds(ctx context.Context, client ReplicaClient) (min, ma
return min, max, nil
}
// FindIndexByTimestamp returns the highest index before a given point-in-time
// within a generation. Returns ErrNoSnapshots if no index exists on the replica
// for the generation.
func FindIndexByTimestamp(ctx context.Context, client ReplicaClient, generation string, timestamp time.Time) (index int, err error) {
snapshotIndex, err := FindSnapshotIndexByTimestamp(ctx, client, generation, timestamp)
if err == ErrNoSnapshots {
return 0, err
} else if err != nil {
return 0, fmt.Errorf("max snapshot index: %w", err)
}
// Determine the highest available WAL index.
walIndex, err := FindWALIndexByTimestamp(ctx, client, generation, timestamp)
if err != nil && err != ErrNoWALSegments {
return 0, fmt.Errorf("max wal index: %w", err)
}
// Use snapshot index if it's after the last WAL index.
if snapshotIndex > walIndex {
return snapshotIndex, nil
}
return walIndex, nil
}
// FindSnapshotIndexByTimestamp returns the highest snapshot index before timestamp.
// Returns ErrNoSnapshots if no snapshots exist for the generation on the replica.
func FindSnapshotIndexByTimestamp(ctx context.Context, client ReplicaClient, generation string, timestamp time.Time) (index int, err error) {
itr, err := client.Snapshots(ctx, generation)
if err != nil {
return 0, fmt.Errorf("snapshots: %w", err)
}
defer func() { _ = itr.Close() }()
// Iterate over snapshots to find the highest index.
var n int
for ; itr.Next(); n++ {
if info := itr.Snapshot(); info.CreatedAt.After(timestamp) {
continue
} else if info.Index > index {
index = info.Index
}
}
if err := itr.Close(); err != nil {
return 0, fmt.Errorf("snapshot iteration: %w", err)
}
// Return an error if no snapshots were found.
if n == 0 {
return 0, ErrNoSnapshots
}
return index, nil
}
// FindWALIndexByTimestamp returns the highest WAL index before timestamp.
// Returns ErrNoWALSegments if no segments exist for the generation on the replica.
func FindWALIndexByTimestamp(ctx context.Context, client ReplicaClient, generation string, timestamp time.Time) (index int, err error) {
itr, err := client.WALSegments(ctx, generation)
if err != nil {
return 0, fmt.Errorf("wal segments: %w", err)
}
defer func() { _ = itr.Close() }()
// Iterate over WAL segments to find the highest index.
var n int
for ; itr.Next(); n++ {
if info := itr.WALSegment(); info.CreatedAt.After(timestamp) {
continue
} else if info.Index > index {
index = info.Index
}
}
if err := itr.Close(); err != nil {
return 0, fmt.Errorf("wal segment iteration: %w", err)
}
// Return an error if no WAL segments were found.
if n == 0 {
return 0, ErrNoWALSegments
}
return index, nil
}
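As a usage sketch (ctx, client, and generation are placeholders), a restore path can resolve its target index like this:

target := time.Date(2022, 10, 30, 12, 0, 0, 0, time.UTC) // example timestamp
index, err := litestream.FindIndexByTimestamp(ctx, client, generation, target)
if err == litestream.ErrNoSnapshots {
	// Nothing exists on the replica at or before target; restore is impossible.
	return err
} else if err != nil {
	return err
}
// index is the highest snapshot/WAL index created at or before target.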
// FindMaxIndexByGeneration returns the last index within a generation.
// Returns ErrNoSnapshots if no index exists on the replica for the generation.
func FindMaxIndexByGeneration(ctx context.Context, client ReplicaClient, generation string) (index int, err error) {


@@ -489,6 +489,167 @@ func TestFindMaxIndexByGeneration(t *testing.T) {
})
}
func TestFindSnapshotIndexByTimestamp(t *testing.T) {
t.Run("OK", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "snapshot-index-by-timestamp", "ok"))
if index, err := litestream.FindSnapshotIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 3, 0, 0, 0, 0, time.UTC)); err != nil {
t.Fatal(err)
} else if got, want := index, 0x000007d0; got != want {
t.Fatalf("index=%d, want %d", got, want)
}
})
t.Run("ErrNoSnapshots", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "snapshot-index-by-timestamp", "no-snapshots"))
_, err := litestream.FindSnapshotIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err != litestream.ErrNoSnapshots {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrSnapshots", func(t *testing.T) {
var client mock.ReplicaClient
client.SnapshotsFunc = func(ctx context.Context, generation string) (litestream.SnapshotIterator, error) {
return nil, fmt.Errorf("marker")
}
_, err := litestream.FindSnapshotIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `snapshots: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrSnapshotIteration", func(t *testing.T) {
var itr mock.SnapshotIterator
itr.NextFunc = func() bool { return false }
itr.CloseFunc = func() error { return fmt.Errorf("marker") }
var client mock.ReplicaClient
client.SnapshotsFunc = func(ctx context.Context, generation string) (litestream.SnapshotIterator, error) {
return &itr, nil
}
_, err := litestream.FindSnapshotIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `snapshot iteration: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
}
func TestFindWALIndexByTimestamp(t *testing.T) {
t.Run("OK", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "wal-index-by-timestamp", "ok"))
if index, err := litestream.FindWALIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 3, 0, 0, 0, 0, time.UTC)); err != nil {
t.Fatal(err)
} else if got, want := index, 1; got != want {
t.Fatalf("index=%d, want %d", got, want)
}
})
t.Run("ErrNoWALSegments", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "wal-index-by-timestamp", "no-wal"))
_, err := litestream.FindWALIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err != litestream.ErrNoWALSegments {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrWALSegments", func(t *testing.T) {
var client mock.ReplicaClient
client.WALSegmentsFunc = func(ctx context.Context, generation string) (litestream.WALSegmentIterator, error) {
return nil, fmt.Errorf("marker")
}
_, err := litestream.FindWALIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `wal segments: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrWALSegmentIteration", func(t *testing.T) {
var itr mock.WALSegmentIterator
itr.NextFunc = func() bool { return false }
itr.CloseFunc = func() error { return fmt.Errorf("marker") }
var client mock.ReplicaClient
client.WALSegmentsFunc = func(ctx context.Context, generation string) (litestream.WALSegmentIterator, error) {
return &itr, nil
}
_, err := litestream.FindWALIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `wal segment iteration: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
}
func TestFindIndexByTimestamp(t *testing.T) {
t.Run("OK", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "index-by-timestamp", "ok"))
if index, err := litestream.FindIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 4, 0, 0, 0, 0, time.UTC)); err != nil {
t.Fatal(err)
} else if got, want := index, 0x00000002; got != want {
t.Fatalf("index=%d, want %d", got, want)
}
})
t.Run("NoWAL", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "index-by-timestamp", "no-wal"))
if index, err := litestream.FindIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 2, 0, 0, 0, 0, time.UTC)); err != nil {
t.Fatal(err)
} else if got, want := index, 0x00000001; got != want {
t.Fatalf("index=%d, want %d", got, want)
}
})
t.Run("SnapshotLaterThanWAL", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "index-by-timestamp", "snapshot-later-than-wal"))
if index, err := litestream.FindIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 3, 0, 0, 0, 0, time.UTC)); err != nil {
t.Fatal(err)
} else if got, want := index, 0x00000001; got != want {
t.Fatalf("index=%d, want %d", got, want)
}
})
t.Run("ErrNoSnapshots", func(t *testing.T) {
client := litestream.NewFileReplicaClient(filepath.Join("testdata", "index-by-timestamp", "no-snapshots"))
_, err := litestream.FindIndexByTimestamp(context.Background(), client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err != litestream.ErrNoSnapshots {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrSnapshots", func(t *testing.T) {
var client mock.ReplicaClient
client.SnapshotsFunc = func(ctx context.Context, generation string) (litestream.SnapshotIterator, error) {
return nil, fmt.Errorf("marker")
}
_, err := litestream.FindIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `max snapshot index: snapshots: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
t.Run("ErrWALSegments", func(t *testing.T) {
var client mock.ReplicaClient
client.SnapshotsFunc = func(ctx context.Context, generation string) (litestream.SnapshotIterator, error) {
return litestream.NewSnapshotInfoSliceIterator([]litestream.SnapshotInfo{{Index: 0x00000001}}), nil
}
client.WALSegmentsFunc = func(ctx context.Context, generation string) (litestream.WALSegmentIterator, error) {
return nil, fmt.Errorf("marker")
}
_, err := litestream.FindIndexByTimestamp(context.Background(), &client, "0000000000000000", time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC))
if err == nil || err.Error() != `max wal index: wal segments: marker` {
t.Fatalf("unexpected error: %s", err)
}
})
}
func TestRestore(t *testing.T) {
t.Run("OK", func(t *testing.T) {
testDir := filepath.Join("testdata", "restore", "ok")


@@ -10,6 +10,7 @@ import (
"os"
"path"
"regexp"
"strconv"
"strings"
"sync"
"time"
@@ -17,7 +18,6 @@ import (
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
@@ -94,6 +94,7 @@ func (c *ReplicaClient) Init(ctx context.Context) (err error) {
if region != "" {
config.Region = aws.String(region)
}
sess, err := session.NewSession(config)
if err != nil {
return fmt.Errorf("cannot create aws session: %w", err)
@@ -106,7 +107,8 @@ func (c *ReplicaClient) Init(ctx context.Context) (err error) {
// config returns the AWS configuration. Uses the default credential chain
// unless a key/secret pair is explicitly set.
func (c *ReplicaClient) config() *aws.Config {
config := &aws.Config{}
if c.AccessKeyID != "" || c.SecretAccessKey != "" {
config.Credentials = credentials.NewStaticCredentials(c.AccessKeyID, c.SecretAccessKey, "")
}
@@ -206,10 +208,14 @@ func (c *ReplicaClient) DeleteGeneration(ctx context.Context, generation string)
n = len(objIDs)
}
out, err := c.s3.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
Bucket: aws.String(c.Bucket),
Delete: &s3.Delete{Objects: objIDs[:n], Quiet: aws.Bool(true)},
})
if err != nil {
return err
}
if err := deleteOutputError(out); err != nil {
return err
}
internal.OperationTotalCounterVec.WithLabelValues(ReplicaClientType, "DELETE").Inc()
@@ -296,10 +302,14 @@ func (c *ReplicaClient) DeleteSnapshot(ctx context.Context, generation string, i
key := path.Join(c.Path, "generations", generation, "snapshots", litestream.FormatIndex(index)+".snapshot.lz4")
out, err := c.s3.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
Bucket: aws.String(c.Bucket),
Delete: &s3.Delete{Objects: []*s3.ObjectIdentifier{{Key: &key}}, Quiet: aws.Bool(true)},
})
if err != nil {
return err
}
if err := deleteOutputError(out); err != nil {
return err
}
@@ -396,13 +406,16 @@ func (c *ReplicaClient) DeleteWALSegments(ctx context.Context, a []litestream.Po
}
// Delete S3 objects in bulk.
out, err := c.s3.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
Bucket: aws.String(c.Bucket),
Delete: &s3.Delete{Objects: objIDs[:n], Quiet: aws.Bool(true)},
})
if err != nil {
return err
}
if err := deleteOutputError(out); err != nil {
return err
}
internal.OperationTotalCounterVec.WithLabelValues(ReplicaClientType, "DELETE").Inc()
a = a[n:]
@@ -445,10 +458,14 @@ func (c *ReplicaClient) DeleteAll(ctx context.Context) error {
n = len(objIDs)
}
out, err := c.s3.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
Bucket: aws.String(c.Bucket),
Delete: &s3.Delete{Objects: objIDs[:n], Quiet: aws.Bool(true)},
})
if err != nil {
return err
}
if err := deleteOutputError(out); err != nil {
return err
}
internal.OperationTotalCounterVec.WithLabelValues(ReplicaClientType, "DELETE").Inc()
@@ -703,6 +720,27 @@ func ParseHost(s string) (bucket, region, endpoint string, forcePathStyle bool)
endpoint = net.JoinHostPort(endpoint, port)
}
if s := os.Getenv("LITESTREAM_SCHEME"); s != "" {
if s != "https" && s != "http" {
panic(fmt.Sprintf("Unsupported LITESTREAM_SCHEME value: %q", s))
} else {
scheme = s
}
}
if e := os.Getenv("LITESTREAM_ENDPOINT"); e != "" {
endpoint = e
}
if r := os.Getenv("LITESTREAM_REGION"); r != "" {
region = r
}
if s := os.Getenv("LITESTREAM_FORCE_PATH_STYLE"); s != "" {
if b, err := strconv.ParseBool(s); err != nil {
panic(fmt.Sprintf("Invalid LITESTREAM_FORCE_PATH_STYLE value: %q", s))
} else {
forcePathStyle = b
}
}
// Prepend scheme to endpoint.
if endpoint != "" {
endpoint = scheme + "://" + endpoint
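For example (a sketch; the host value and endpoint are illustrative, and the default scheme is assumed to be https), the environment variables override whatever ParseHost derives from the host string:

// Illustrative: point a replica at a MinIO endpoint purely via the environment.
os.Setenv("LITESTREAM_ENDPOINT", "minio.example.com:9000")
os.Setenv("LITESTREAM_FORCE_PATH_STYLE", "true")
bucket, region, endpoint, forcePathStyle := ParseHost("mybucket")
// bucket="mybucket", endpoint="https://minio.example.com:9000" (or "http://..."
// when LITESTREAM_SCHEME=http), forcePathStyle=true; region comes from
// LITESTREAM_REGION when set.
_, _, _, _ = bucket, region, endpoint, forcePathStyle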
@@ -727,3 +765,15 @@ func isNotExists(err error) bool {
return false
}
}
func deleteOutputError(out *s3.DeleteObjectsOutput) error {
switch len(out.Errors) {
case 0:
return nil
case 1:
return fmt.Errorf("deleting object %s: %s - %s", *out.Errors[0].Key, *out.Errors[0].Code, *out.Errors[0].Message)
default:
return fmt.Errorf("%d errors occured deleting objects, %s: %s - (%s (and %d others)",
len(out.Errors), *out.Errors[0].Key, *out.Errors[0].Code, *out.Errors[0].Message, len(out.Errors)-1)
}
}
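A same-package test sketch for the summarization (fixture values are invented):

func TestDeleteOutputError(t *testing.T) {
	if err := deleteOutputError(&s3.DeleteObjectsOutput{}); err != nil {
		t.Fatalf("expected nil for empty Errors, got %v", err)
	}
	out := &s3.DeleteObjectsOutput{Errors: []*s3.Error{{
		Key:     aws.String("generations/0000000000000000/snapshots/0000000000000000.snapshot.lz4"),
		Code:    aws.String("AccessDenied"),
		Message: aws.String("Access Denied"),
	}}}
	if err := deleteOutputError(out); err == nil {
		t.Fatal("expected error when the batch delete reports failures")
	}
}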


@@ -3,10 +3,11 @@ package litestream
import (
"context"
"fmt"
"path/filepath"
"strings"
"sync"
"github.com/benbjohnson/litestream/internal"
"github.com/fsnotify/fsnotify"
"golang.org/x/sync/errgroup"
)
@@ -15,7 +16,7 @@ import (
type Server struct {
mu sync.Mutex
dbs map[string]*DB // databases by path
watcher internal.FileWatcher
watcher *fsnotify.Watcher
ctx context.Context
cancel func()
@@ -31,8 +32,9 @@ func NewServer() *Server {
// Open initializes the server and begins watching for file system events.
func (s *Server) Open() error {
var err error
s.watcher, err = fsnotify.NewWatcher()
if err != nil {
return err
}
@@ -110,10 +112,8 @@ func (s *Server) Watch(path string, fn func(path string) (*DB, error)) error {
s.dbs[path] = db
// Watch for changes on the database file & WAL.
if err := s.watcher.Add(filepath.Dir(path)); err != nil {
return fmt.Errorf("watch db file: %w", err)
}
// Kick off an initial sync.
@@ -137,7 +137,7 @@ func (s *Server) Unwatch(path string) error {
delete(s.dbs, path)
// Stop watching for changes on the database WAL.
if err := s.watcher.Remove(filepath.Dir(path)); err != nil {
return fmt.Errorf("unwatch file: %w", err)
}
@@ -149,13 +149,26 @@ func (s *Server) Unwatch(path string) error {
return nil
}
func (s *Server) isWatched(event fsnotify.Event) bool {
path := event.Name
path = strings.TrimSuffix(path, "-wal")
if _, ok := s.dbs[path]; ok {
return true
}
return false
}
// monitor runs in a separate goroutine and dispatches notifications to managed DBs.
func (s *Server) monitor(ctx context.Context) error {
for {
select {
case <-ctx.Done():
return ctx.Err()
case event := <-s.watcher.Events:
if !s.isWatched(event) {
continue
}
if err := s.dispatchFileEvent(ctx, event); err != nil {
return err
}
@@ -164,7 +177,7 @@ func (s *Server) monitor(ctx context.Context) error {
}
// dispatchFileEvent dispatches a notification to the database which owns the file.
func (s *Server) dispatchFileEvent(ctx context.Context, event fsnotify.Event) error {
path := event.Name
path = strings.TrimSuffix(path, "-wal")

testdata/Makefile

@@ -1,8 +1,13 @@
.PHONY: default
default:
make -C find-latest-generation/ok
make -C index-by-timestamp/no-wal
make -C index-by-timestamp/ok
make -C index-by-timestamp/snapshot-later-than-wal
make -C generation-time-bounds/ok
make -C generation-time-bounds/snapshots-only
make -C replica-client-time-bounds/ok
make -C snapshot-time-bounds/ok
make -C snapshot-index-by-timestamp/ok
make -C wal-time-bounds/ok
make -C wal-index-by-timestamp/ok


@@ -0,0 +1,6 @@
.PHONY: default
default:
TZ=UTC touch -ct 200001010000 generations/0000000000000000/snapshots/0000000000000000.snapshot.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/snapshots/0000000000000001.snapshot.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/snapshots/0000000000000002.snapshot.lz4

testdata/index-by-timestamp/ok/Makefile

@@ -0,0 +1,11 @@
.PHONY: default
default:
TZ=UTC touch -ct 200001010000 generations/0000000000000000/snapshots/0000000000000000.snapshot.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/snapshots/0000000000000001.snapshot.lz4
TZ=UTC touch -ct 200001010000 generations/0000000000000000/wal/0000000000000000/0000000000000000.wal.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/wal/0000000000000000/0000000000001234.wal.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/wal/0000000000000001/0000000000000000.wal.lz4
TZ=UTC touch -ct 200001040000 generations/0000000000000000/wal/0000000000000002/0000000000000000.wal.lz4
TZ=UTC touch -ct 200001050000 generations/0000000000000000/wal/0000000000000003/0000000000000000.wal.lz4


@@ -0,0 +1,7 @@
.PHONY: default
default:
TZ=UTC touch -ct 200001010000 generations/0000000000000000/snapshots/0000000000000000.snapshot.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/snapshots/0000000000000001.snapshot.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/wal/0000000000000000/0000000000000000.wal.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/wal/0000000000000000/0000000000001234.wal.lz4


@@ -0,0 +1,5 @@
.PHONY: default
default:
TZ=UTC touch -ct 200001010000 generations/0000000000000000/snapshots/0000000000000000.snapshot.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/snapshots/00000000000003e8.snapshot.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/snapshots/00000000000007d0.snapshot.lz4


@@ -0,0 +1,6 @@
.PHONY: default
default:
TZ=UTC touch -ct 200001010000 generations/0000000000000000/wal/0000000000000000/0000000000000000.wal.lz4
TZ=UTC touch -ct 200001020000 generations/0000000000000000/wal/0000000000000000/0000000000001234.wal.lz4
TZ=UTC touch -ct 200001030000 generations/0000000000000000/wal/0000000000000001/0000000000000000.wal.lz4


@@ -20,7 +20,7 @@ import (
// with the prefix and suffixed with the WAL index. It is the responsibility of
// the caller to clean up these WAL files.
//
// The purpose of the parallelization is that RTT & WAL apply time can consume
// much of the restore time so it's useful to download multiple WAL files in
// the background to minimize the latency. While some WAL indexes may be
// downloaded out of order, the WALDownloader ensures that Next() always


@@ -383,7 +383,7 @@ func testWALDownloader(t *testing.T, parallelism int) {
}
})
// Ensure a gap in indices returns an error.
t.Run("ErrMissingMiddleIndex", func(t *testing.T) {
testDir := filepath.Join("testdata", "wal-downloader", "missing-middle-index")
tempDir := t.TempDir()