Compare commits

77 commits

| SHA1 |
|---|
| `16c50d1d2e` |
| `929a66314c` |
| `2e7a6ae715` |
| `896aef070c` |
| `3598d8b572` |
| `3183cf0e2e` |
| `a59ee6ed63` |
| `e4c1a82eb2` |
| `aa54e4698d` |
| `43e40ce8d3` |
| `0bd1b13b94` |
| `1c16aae550` |
| `49f47ea87f` |
| `8947adc312` |
| `9341863bdb` |
| `998e831c5c` |
| `b2ca113fb5` |
| `b211e82ed2` |
| `e2779169a0` |
| `ec2f9c84d5` |
| `78eb8dcc53` |
| `cafa0f5942` |
| `325482a97c` |
| `9cee1285b9` |
| `a14a74d678` |
| `f652186adf` |
| `afb8731ead` |
| `ce2d54cc20` |
| `d802e15b4f` |
| `d6ece0b826` |
| `cb007762be` |
| `6a90714bbe` |
| `622ba82ebb` |
| `6ca010e9db` |
| `ad9ce43127` |
| `167d333fcd` |
| `c5390dec1d` |
| `e2cbd5fb63` |
| `8d083f7a2d` |
| `37442babfb` |
| `962a2a894b` |
| `0c61c9f7fe` |
| `267b140fab` |
| `1b194535e6` |
| `58a6c765fe` |
| `2604052a9f` |
| `7f81890bae` |
| `2ff073c735` |
| `6fd11ccab5` |
| `6c49fba592` |
| `922fa0798e` |
| `976df182c0` |
| `0e28a650e6` |
| `f17768e830` |
| `2c142d3a0c` |
| `4e469f8b02` |
| `3f268b70f8` |
| `ad7bf7f974` |
| `778451f09f` |
| `8e9a15933b` |
| `da1d7c3183` |
| `a178ef4714` |
| `7ca2e193b9` |
| `39a6fabb9f` |
| `0249b4e4f5` |
| `67eeb49101` |
| `f7213ed35c` |
| `a532a0198e` |
| `16f79e5814` |
| `39aefc2c02` |
| `0b08669bca` |
| `8f5761ee13` |
| `d2eb4fa5ba` |
| `ca489c5e73` |
| `f0ae48af4c` |
| `9eae39e2fa` |
| `42ab293ffb` |
.github/CONTRIBUTING.md (vendored, new file, 17 lines)
@@ -0,0 +1,17 @@

## Open-source, not open-contribution

[Similar to SQLite](https://www.sqlite.org/copyright.html), Litestream is open
source but closed to contributions. This keeps the code base free of proprietary
or licensed code but it also helps me continue to maintain and build Litestream.

As the author of [BoltDB](https://github.com/boltdb/bolt), I found that
accepting and maintaining third party patches contributed to my burn out and
I eventually archived the project. Writing databases & low-level replication
tools involves nuance and simple one line changes can have profound and
unexpected effects on correctness and performance. Small contributions
typically required hours of my time to properly test and validate them.

I am grateful for community involvement, bug reports, & feature requests. I do
not wish to come off as anything but welcoming, however, I've
made the decision to keep this project closed to contributions for my own
mental health and long term viability of the project.
.github/workflows/test.yml (vendored, 2 changes)
```diff
@@ -1,4 +1,4 @@
-on: [push, pull_request]
+on: push
 name: test
 jobs:
   test:
```
Makefile (13 changes)
```diff
@@ -1,19 +1,22 @@
 default:
 
-dist:
+dist-linux:
 	mkdir -p dist
 	cp etc/litestream.yml dist/litestream.yml
 	docker run --rm -v "${PWD}":/usr/src/litestream -w /usr/src/litestream -e GOOS=linux -e GOARCH=amd64 golang:1.15 go build -v -o dist/litestream ./cmd/litestream
 	tar -cz -f dist/litestream-linux-amd64.tar.gz -C dist litestream
 
-deb: dist
+dist-macos:
 ifndef LITESTREAM_VERSION
 	$(error LITESTREAM_VERSION is undefined)
 endif
-	cat etc/nfpm.yml | envsubst > dist/nfpm.yml
-	nfpm pkg --config dist/nfpm.yml --packager deb --target dist/litestream.deb
+	mkdir -p dist
+	go build -v -ldflags "-X 'main.Version=${LITESTREAM_VERSION}'" -o dist/litestream ./cmd/litestream
+	gon etc/gon.hcl
+	mv dist/litestream.zip dist/litestream-${LITESTREAM_VERSION}-darwin-amd64.zip
+	openssl dgst -sha256 dist/litestream-${LITESTREAM_VERSION}-darwin-amd64.zip
 
 clean:
 	rm -rf dist
 
-.PHONY: deb dist clean
+.PHONY: default dist-linux dist-macos clean
```
README.md (193 changes)
Litestream
==========
Litestream is a standalone streaming replication tool for SQLite. It runs as a
background process and safely replicates changes incrementally to another file
or S3. Litestream only communicates with SQLite through the SQLite API so it
will not corrupt your database.
If you need support or have ideas for improving Litestream, please join the
[Litestream Slack][slack] or visit the [GitHub Discussions](https://github.com/benbjohnson/litestream/discussions).
Please visit the [Litestream web site](https://litestream.io) for installation
instructions and documentation.

If you find this project interesting, please consider starring the project on
GitHub.

## Installation

### Homebrew

TODO

[slack]: https://join.slack.com/t/litestream/shared_invite/zt-n0j4s3ci-lx1JziR3bV6L2NMF723H3Q
### Linux (Debian)

You can download the `.deb` file from the [Releases page][releases] and then
run the following:

```sh
$ sudo dpkg -i litestream-v0.3.0-linux-amd64.deb
```

Once installed, you'll need to enable & start the service:

```sh
$ sudo systemctl enable litestream
$ sudo systemctl start litestream
```
### Release binaries

You can also download the release binary for your system from the
[Releases page][releases] and run it as a standalone application.

## Configuration

Once installed locally, you'll need to create a config file. By default, the
config file lives at `/etc/litestream.yml` but you can pass in a different
path to any `litestream` command using the `-config PATH` flag.

The configuration specifies one or more `dbs` and a list of one or more replica
locations for each db. Below are some common configurations:
### Replicate to S3

This will replicate the database at `/path/to/db` to the `/db` path inside
the S3 bucket named `mybkt`.

```yaml
access-key-id: AKIAxxxxxxxxxxxxxxxx
secret-access-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx

dbs:
  - path: /path/to/db
    replicas:
      - path: s3://mybkt/db
```
### Replicate to another file path

This will replicate the database at `/path/to/db` to a directory named
`/path/to/replica`.

```yaml
dbs:
  - path: /path/to/db
    replicas:
      - path: /path/to/replica
```
### Other configuration options

These are some additional configuration options available on replicas:

- `type`—Specify the type of replica (`"file"` or `"s3"`). Derived from `path`.
- `name`—Specify an optional name for the replica if you are using multiple replicas.
- `path`—File path or URL to the replica location.
- `retention`—Length of time to keep replicated WAL files. Defaults to `24h`.
- `retention-check-interval`—Time between retention enforcement checks. Defaults to `1h`.
- `validation-interval`—Interval between periodic checks to ensure the restored backup matches the current database. Disabled by default.

These replica options are only available for S3 replicas:

- `bucket`—S3 bucket name. Derived from `path`.
- `region`—S3 bucket region. Looked up on startup if unspecified.
- `sync-interval`—Replication sync frequency.
## Usage

### Replication

Once your configuration is saved, you'll need to begin replication. If you
installed the `.deb` file then run:

```sh
$ sudo systemctl restart litestream
```

To run litestream on its own, run:

```sh
# Replicate using the /etc/litestream.yml configuration.
$ litestream replicate

# Replicate using a different configuration path.
$ litestream replicate -config /path/to/litestream.yml
```
The `litestream` command will initialize and then wait indefinitely for changes.
You should see that your destination replica path is now populated with a
`generations` directory. Inside it there should be a 16-character hex generation
directory containing snapshots & WAL files. As you make changes to your source
database, those changes will be copied over to your replica incrementally.
### Restoring a backup

Litestream can restore a previous snapshot and replay all replicated WAL files.
By default, it will restore up to the latest WAL file but you can also perform
point-in-time restores.

A database can only be restored to a path that does not exist, so you don't need
to worry about accidentally overwriting your current database.

```sh
# Restore database to original path.
$ litestream restore /path/to/db

# Restore database to a new location.
$ litestream restore -o /tmp/mynewdb /path/to/db

# Restore database to a specific point-in-time.
$ litestream restore -timestamp 2020-01-01T00:00:00Z /path/to/db
```
Point-in-time restores are only as precise as the timestamp of the WAL file
itself. By default, litestream starts a new WAL file every minute, so
point-in-time restores are only accurate to the minute.
## How it works

SQLite provides a WAL (write-ahead log) journaling mode which writes pages to
a `-wal` file before they are eventually copied over to the original database
file. This copying process is known as checkpointing. The WAL file works as a
circular buffer: when the WAL reaches a certain size, it restarts from the
beginning.
Litestream works by taking over the checkpointing process and controlling when
it is restarted to ensure that it copies every new page. Checkpointing is only
allowed when there are no read transactions, so Litestream maintains a
long-running read transaction against each database until it is ready to
checkpoint.
The SQLite WAL file is copied to a separate location called the shadow WAL,
which ensures that it will not be overwritten by SQLite. This shadow WAL acts
as a temporary buffer so that replicas can replicate to their destination (e.g.
another file path or S3). The shadow WAL files are removed once they have
been fully replicated. You can find the shadow directory as a hidden directory
next to your database file. If your database file is named `/var/lib/my.db` then
the shadow directory will be `/var/lib/.my.db-litestream`.
Litestream groups a snapshot and all subsequent WAL changes into "generations".
A generation is started on the initial replication of a database, and a new
generation will be started if Litestream detects that WAL replication is
no longer contiguous. This can occur if the `litestream` process is stopped and
another process is allowed to checkpoint the WAL.
## Acknowledgements

While the Litestream project does not accept external code patches, many
of the most valuable contributions come in the form of testing, feedback, and
documentation. These help harden the software and streamline usage for other users.

I want to give special thanks to individuals who invest much of their time and
energy into the project to help make it better. Shout out to [Michael
Lynch](https://github.com/mtlynch) for digging into issues and contributing to
the documentation.
## Open-source, not open-contribution

[Similar to SQLite](https://www.sqlite.org/copyright.html), Litestream is open
source but closed to code contributions. This keeps the code base free of
proprietary or licensed code but it also helps me continue to maintain and build
Litestream.

As the author of [BoltDB](https://github.com/boltdb/bolt), I found that
accepting and maintaining third party patches contributed to my burn out and
I eventually archived the project. Writing databases & low-level replication
tools involves nuance, and simple one line changes can have profound and
unexpected effects on correctness and performance. Small contributions
typically required hours of my time to properly test and validate them.

I am grateful for community involvement, bug reports, & feature requests. I do
not wish to come off as anything but welcoming; however, I've made the decision
to keep this project closed to contributions for my own mental health and the
long term viability of the project.
The [documentation repository][docs] is MIT licensed and pull requests are welcome there.

[releases]: https://github.com/benbjohnson/litestream/releases
[docs]: https://github.com/benbjohnson/litestream.io
Changes to the `databases` command:

```diff
@@ -2,7 +2,6 @@ package main
 
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"os"
@@ -15,30 +14,31 @@ type DatabasesCommand struct{}
 
 // Run executes the command.
 func (c *DatabasesCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-databases", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
 		return err
 	} else if fs.NArg() != 0 {
-		return fmt.Errorf("too many argument")
+		return fmt.Errorf("too many arguments")
 	}
 
 	// Load configuration.
-	if configPath == "" {
-		return errors.New("-config required")
+	if *configPath == "" {
+		*configPath = DefaultConfigPath()
 	}
-	config, err := ReadConfigFile(configPath)
+	config, err := ReadConfigFile(*configPath)
 	if err != nil {
 		return err
 	}
 
 	// List all databases.
-	w := tabwriter.NewWriter(os.Stdout, 0, 8, 1, '\t', 0)
+	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
+	defer w.Flush()
 
 	fmt.Fprintln(w, "path\treplicas")
 	for _, dbConfig := range config.DBs {
-		db, err := newDBFromConfig(&config, dbConfig)
+		db, err := NewDBFromConfig(dbConfig)
 		if err != nil {
 			return err
 		}
@@ -53,7 +53,6 @@ func (c *DatabasesCommand) Run(ctx context.Context, args []string) (err error) {
 			strings.Join(replicaNames, ","),
 		)
 	}
-	w.Flush()
 
 	return nil
 }
```
Changes to the `generations` command:

```diff
@@ -2,14 +2,14 @@ package main
 
 import (
 	"context"
 	"errors"
 	"flag"
 	"fmt"
 	"log"
 	"os"
 	"path/filepath"
 	"text/tabwriter"
 	"time"
 
 	"github.com/benbjohnson/litestream"
 )
@@ -17,58 +17,74 @@ type GenerationsCommand struct{}
 
 // Run executes the command.
 func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-generations", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	replicaName := fs.String("replica", "", "replica name")
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
 		return err
 	} else if fs.NArg() == 0 || fs.Arg(0) == "" {
-		return fmt.Errorf("database path required")
+		return fmt.Errorf("database path or replica URL required")
 	} else if fs.NArg() > 1 {
 		return fmt.Errorf("too many arguments")
 	}
 
+	var db *litestream.DB
+	var r litestream.Replica
+	updatedAt := time.Now()
+	if isURL(fs.Arg(0)) {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
+		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
+			return err
+		}
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
 
 	// Load configuration.
-	if configPath == "" {
-		return errors.New("-config required")
-	}
-	config, err := ReadConfigFile(configPath)
+	config, err := ReadConfigFile(*configPath)
 	if err != nil {
 		return err
 	}
 
-	// Determine absolute path for database.
-	dbPath, err := filepath.Abs(fs.Arg(0))
-	if err != nil {
+	// Lookup database from configuration file by path.
+	if path, err := expand(fs.Arg(0)); err != nil {
 		return err
+	} else if dbc := config.DBConfig(path); dbc == nil {
+		return fmt.Errorf("database not found in config: %s", path)
+	} else if db, err = NewDBFromConfig(dbc); err != nil {
+		return err
 	}
 
-	// Instantiate DB from from configuration.
-	dbConfig := config.DBConfig(dbPath)
-	if dbConfig == nil {
-		return fmt.Errorf("database not found in config: %s", dbPath)
+	// Filter by replica, if specified.
+	if *replicaName != "" {
+		if r = db.Replica(*replicaName); r == nil {
+			return fmt.Errorf("replica %q not found for database %q", *replicaName, db.Path())
+		}
 	}
-	db, err := newDBFromConfig(&config, dbConfig)
-	if err != nil {
-		return err
-	}
 
 	// Determine last time database or WAL was updated.
-	updatedAt, err := db.UpdatedAt()
-	if err != nil {
+	if updatedAt, err = db.UpdatedAt(); err != nil {
 		return err
 	}
 	}
 
+	var replicas []litestream.Replica
+	if r != nil {
+		replicas = []litestream.Replica{r}
+	} else {
+		replicas = db.Replicas
+	}
+
 	// List each generation.
-	w := tabwriter.NewWriter(os.Stdout, 0, 8, 1, '\t', 0)
-	fmt.Fprintln(w, "name\tgeneration\tlag\tstart\tend")
-	for _, r := range db.Replicas {
-		if *replicaName != "" && r.Name() != *replicaName {
-			continue
-		}
+	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
+	defer w.Flush()
+
+	fmt.Fprintln(w, "name\tgeneration\tlag\tstart\tend")
+	for _, r := range replicas {
 		generations, err := r.Generations(ctx)
 		if err != nil {
 			log.Printf("%s: cannot list generations: %s", r.Name(), err)
@@ -90,10 +106,8 @@ func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error)
 			stats.CreatedAt.Format(time.RFC3339),
 			stats.UpdatedAt.Format(time.RFC3339),
 		)
-		w.Flush()
 	}
 	}
-	w.Flush()
 
 	return nil
 }
@@ -101,17 +115,21 @@ func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error)
 
 // Usage prints the help message to STDOUT.
 func (c *GenerationsCommand) Usage() {
 	fmt.Printf(`
-The generations command lists all generations for a database. It also lists
-stats about their lag behind the primary database and the time range they cover.
+The generations command lists all generations for a database or replica. It also
+lists stats about their lag behind the primary database and the time range they
+cover.
 
 Usage:
 
-	litestream generations [arguments] DB
+	litestream generations [arguments] DB_PATH
+
+	litestream generations [arguments] REPLICA_URL
 
 Arguments:
 
 	-config PATH
-		Specifies the configuration file. Defaults to %s
+		Specifies the configuration file.
+		Defaults to %s
 
 	-replica NAME
 		Optional, filters by replica.
```
Changes to the `litestream` entrypoint:

```diff
@@ -2,15 +2,18 @@ package main
 
 import (
 	"context"
 	"errors"
 	"flag"
 	"fmt"
 	"io/ioutil"
 	"log"
 	"net/url"
 	"os"
 	"os/signal"
 	"os/user"
 	"path"
 	"path/filepath"
 	"regexp"
 	"strings"
 	"time"
@@ -25,14 +28,17 @@ var (
 	Version = "(development build)"
 )
 
+// errStop is a terminal error for indicating program should quit.
+var errStop = errors.New("stop")
+
 func main() {
+	log.SetFlags(0)
+
 	m := NewMain()
-	if err := m.Run(context.Background(), os.Args[1:]); err == flag.ErrHelp {
+	if err := m.Run(context.Background(), os.Args[1:]); err == flag.ErrHelp || err == errStop {
 		os.Exit(1)
 	} else if err != nil {
-		fmt.Fprintln(os.Stderr, err)
+		log.Println(err)
 		os.Exit(1)
 	}
 }
@@ -47,6 +53,14 @@ func NewMain() *Main {
 
 // Run executes the program.
 func (m *Main) Run(ctx context.Context, args []string) (err error) {
+	// Execute replication command if running as a Windows service.
+	if isService, err := isWindowsService(); err != nil {
+		return err
+	} else if isService {
+		return runWindowsService(ctx)
+	}
+
 	// Extract command name.
 	var cmd string
 	if len(args) > 0 {
 		cmd, args = args[0], args[1:]
@@ -58,7 +72,28 @@ func (m *Main) Run(ctx context.Context, args []string) (err error) {
 	case "generations":
 		return (&GenerationsCommand{}).Run(ctx, args)
 	case "replicate":
-		return (&ReplicateCommand{}).Run(ctx, args)
+		c := NewReplicateCommand()
+		if err := c.ParseFlags(ctx, args); err != nil {
+			return err
+		}
+
+		// Setup signal handler.
+		ctx, cancel := context.WithCancel(ctx)
+		ch := make(chan os.Signal, 1)
+		signal.Notify(ch, os.Interrupt)
+		go func() { <-ch; cancel() }()
+
+		if err := c.Run(ctx); err != nil {
+			return err
+		}
+
+		// Wait for signal to stop program.
+		<-ctx.Done()
+		signal.Reset()
+
+		// Gracefully close.
+		return c.Close()
```
```diff
 
 	case "restore":
 		return (&RestoreCommand{}).Run(ctx, args)
 	case "snapshots":
@@ -87,21 +122,16 @@ Usage:
 
 The commands are:
 
 	databases    list databases specified in config file
 	generations  list available generations for a database
 	replicate    runs a server to replicate databases
 	restore      recovers database backup from a replica
 	snapshots    list available snapshots for a database
-	validate     checks replica to ensure a consistent state with primary
-	version      prints the version
+	version      prints the binary version
 	wal          list available WAL files for a database
 `[1:])
 }
 
-// Default configuration settings.
-const (
-	DefaultAddr = ":9090"
-)
```
```diff
 
 // Config represents a configuration file for the litestream daemon.
 type Config struct {
 	// Bind address for serving metrics.
@@ -113,25 +143,25 @@ type Config struct {
 
 	// Global S3 settings
 	AccessKeyID     string `yaml:"access-key-id"`
 	SecretAccessKey string `yaml:"secret-access-key"`
 	Region          string `yaml:"region"`
 	Bucket          string `yaml:"bucket"`
 }
 
-// Normalize expands paths and parses URL-specified replicas.
-func (c *Config) Normalize() error {
-	for i := range c.DBs {
-		if err := c.DBs[i].Normalize(); err != nil {
-			return err
-		}
-	}
-	return nil
-}
+// propagateGlobalSettings copies global S3 settings to replica configs.
+func (c *Config) propagateGlobalSettings() {
+	for _, dbc := range c.DBs {
+		for _, rc := range dbc.Replicas {
+			if rc.AccessKeyID == "" {
+				rc.AccessKeyID = c.AccessKeyID
+			}
+			if rc.SecretAccessKey == "" {
+				rc.SecretAccessKey = c.SecretAccessKey
+			}
+		}
+	}
+}
 
 // DefaultConfig returns a new instance of Config with defaults set.
 func DefaultConfig() Config {
-	return Config{
-		Addr: DefaultAddr,
-	}
+	return Config{}
 }
```
```diff
 
 // DBConfig returns database configuration by path.
@@ -145,18 +175,13 @@ func (c *Config) DBConfig(path string) *DBConfig {
 }
 
 // ReadConfigFile unmarshals config from filename. Expands path if needed.
-func ReadConfigFile(filename string) (Config, error) {
+func ReadConfigFile(filename string) (_ Config, err error) {
 	config := DefaultConfig()
 
 	// Expand filename, if necessary.
-	if prefix := "~" + string(os.PathSeparator); strings.HasPrefix(filename, prefix) {
-		u, err := user.Current()
-		if err != nil {
-			return config, err
-		} else if u.HomeDir == "" {
-			return config, fmt.Errorf("home directory unset")
-		}
-		filename = filepath.Join(u.HomeDir, strings.TrimPrefix(filename, prefix))
-	}
+	filename, err = expand(filename)
+	if err != nil {
+		return config, err
+	}
 
 	// Read & deserialize configuration.
@@ -168,106 +193,57 @@ func ReadConfigFile(filename string) (Config, error) {
 		return config, err
 	}
 
-	if err := config.Normalize(); err != nil {
-		return config, err
-	}
+	// Normalize paths.
+	for _, dbConfig := range config.DBs {
+		if dbConfig.Path, err = expand(dbConfig.Path); err != nil {
+			return config, err
+		}
+	}
+
+	// Propagate settings from global config to replica configs.
+	config.propagateGlobalSettings()
 
 	return config, nil
 }
```
```diff
 
 // DBConfig represents the configuration for a single database.
 type DBConfig struct {
 	Path               string         `yaml:"path"`
 	MonitorInterval    *time.Duration `yaml:"monitor-interval"`
 	CheckpointInterval *time.Duration `yaml:"checkpoint-interval"`
 	MinCheckpointPageN *int           `yaml:"min-checkpoint-page-count"`
 	MaxCheckpointPageN *int           `yaml:"max-checkpoint-page-count"`
 
 	Replicas []*ReplicaConfig `yaml:"replicas"`
 }
 
-// Normalize expands paths and parses URL-specified replicas.
-func (c *DBConfig) Normalize() error {
-	for i := range c.Replicas {
-		if err := c.Replicas[i].Normalize(); err != nil {
-			return err
-		}
-	}
-	return nil
-}
```
```diff
 
-// ReplicaConfig represents the configuration for a single replica in a database.
-type ReplicaConfig struct {
-	Type                   string        `yaml:"type"` // "file", "s3"
-	Name                   string        `yaml:"name"` // name of replica, optional.
-	Path                   string        `yaml:"path"`
-	Retention              time.Duration `yaml:"retention"`
-	RetentionCheckInterval time.Duration `yaml:"retention-check-interval"`
-	SyncInterval           time.Duration `yaml:"sync-interval"` // s3 only
-	ValidationInterval     time.Duration `yaml:"validation-interval"`
-
-	// S3 settings
-	AccessKeyID     string `yaml:"access-key-id"`
-	SecretAccessKey string `yaml:"secret-access-key"`
-	Region          string `yaml:"region"`
-	Bucket          string `yaml:"bucket"`
-}
-
-// Normalize expands paths and parses URL-specified replicas.
-func (c *ReplicaConfig) Normalize() error {
-	// Expand path filename, if necessary.
-	if prefix := "~" + string(os.PathSeparator); strings.HasPrefix(c.Path, prefix) {
-		u, err := user.Current()
-		if err != nil {
-			return err
-		} else if u.HomeDir == "" {
-			return fmt.Errorf("cannot expand replica path, no home directory available")
-		}
-		c.Path = filepath.Join(u.HomeDir, strings.TrimPrefix(c.Path, prefix))
-	}
+// NewDBFromConfig instantiates a DB based on a configuration.
+func NewDBFromConfig(dbc *DBConfig) (*litestream.DB, error) {
+	path, err := expand(dbc.Path)
+	if err != nil {
+		return nil, err
+	}
 
-	// Attempt to parse as URL. Ignore if it is not a URL or if there is no scheme.
-	u, err := url.Parse(c.Path)
-	if err != nil || u.Scheme == "" {
-		return nil
-	}
-
-	switch u.Scheme {
-	case "file":
-		u.Scheme = ""
-		c.Type = u.Scheme
-		c.Path = path.Clean(u.String())
-		return nil
-
-	case "s3":
-		c.Type = u.Scheme
-		c.Path = strings.TrimPrefix(path.Clean(u.Path), "/")
-		c.Bucket = u.Host
-		if u := u.User; u != nil {
-			c.AccessKeyID = u.Username()
-			c.SecretAccessKey, _ = u.Password()
-		}
-		return nil
-
-	default:
-		return fmt.Errorf("unrecognized replica type in path scheme: %s", c.Path)
-	}
-}
```
```diff
 
 // DefaultConfigPath returns the default config path.
 func DefaultConfigPath() string {
 	if v := os.Getenv("LITESTREAM_CONFIG"); v != "" {
 		return v
 	}
 	return "/etc/litestream.yml"
 }
 
 func registerConfigFlag(fs *flag.FlagSet, p *string) {
 	fs.StringVar(p, "config", DefaultConfigPath(), "config path")
 }
```
```diff
 
-// newDBFromConfig instantiates a DB based on a configuration.
-func newDBFromConfig(c *Config, dbc *DBConfig) (*litestream.DB, error) {
 	// Initialize database with given path.
-	db := litestream.NewDB(dbc.Path)
+	db := litestream.NewDB(path)
 
 	// Override default database settings if specified in configuration.
 	if dbc.MonitorInterval != nil {
 		db.MonitorInterval = *dbc.MonitorInterval
 	}
 	if dbc.CheckpointInterval != nil {
 		db.CheckpointInterval = *dbc.CheckpointInterval
 	}
 	if dbc.MinCheckpointPageN != nil {
 		db.MinCheckpointPageN = *dbc.MinCheckpointPageN
 	}
 	if dbc.MaxCheckpointPageN != nil {
 		db.MaxCheckpointPageN = *dbc.MaxCheckpointPageN
 	}
 
 	// Instantiate and attach replicas.
 	for _, rc := range dbc.Replicas {
-		r, err := newReplicaFromConfig(db, c, dbc, rc)
+		r, err := NewReplicaFromConfig(rc, db)
 		if err != nil {
 			return nil, err
 		}
@@ -277,85 +253,231 @@ func newDBFromConfig(c *Config, dbc *DBConfig) (*litestream.DB, error) {
 	return db, nil
 }
```
|
||||
// newReplicaFromConfig instantiates a replica for a DB based on a config.
|
||||
func newReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (litestream.Replica, error) {
|
||||
switch rc.Type {
|
||||
case "", "file":
|
||||
return newFileReplicaFromConfig(db, c, dbc, rc)
|
||||
// ReplicaConfig represents the configuration for a single replica in a database.
|
||||
type ReplicaConfig struct {
|
||||
Type string `yaml:"type"` // "file", "s3"
|
||||
Name string `yaml:"name"` // name of replica, optional.
|
||||
Path string `yaml:"path"`
|
||||
URL string `yaml:"url"`
|
||||
Retention time.Duration `yaml:"retention"`
|
||||
RetentionCheckInterval time.Duration `yaml:"retention-check-interval"`
|
||||
SyncInterval time.Duration `yaml:"sync-interval"` // s3 only
|
||||
SnapshotInterval time.Duration `yaml:"snapshot-interval"`
|
||||
ValidationInterval time.Duration `yaml:"validation-interval"`
|
||||
|
||||
// S3 settings
|
||||
AccessKeyID string `yaml:"access-key-id"`
|
||||
SecretAccessKey string `yaml:"secret-access-key"`
|
||||
Region string `yaml:"region"`
|
||||
Bucket string `yaml:"bucket"`
|
||||
Endpoint string `yaml:"endpoint"`
|
||||
ForcePathStyle *bool `yaml:"force-path-style"`
|
||||
}
|
||||
|
||||
// NewReplicaFromConfig instantiates a replica for a DB based on a config.
|
||||
func NewReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (litestream.Replica, error) {
|
||||
// Ensure user did not specify URL in path.
|
||||
if isURL(c.Path) {
|
||||
return nil, fmt.Errorf("replica path cannot be a url, please use the 'url' field instead: %s", c.Path)
|
||||
}
|
||||
|
||||
switch c.ReplicaType() {
|
||||
case "file":
|
||||
return newFileReplicaFromConfig(c, db)
|
||||
case "s3":
|
||||
return newS3ReplicaFromConfig(db, c, dbc, rc)
|
||||
return newS3ReplicaFromConfig(c, db)
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown replica type in config: %q", rc.Type)
|
||||
return nil, fmt.Errorf("unknown replica type in config: %q", c.Type)
|
||||
}
|
||||
}

// newFileReplicaFromConfig returns a new instance of FileReplica build from config.
func newFileReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (*litestream.FileReplica, error) {
    if rc.Path == "" {
        return nil, fmt.Errorf("%s: file replica path required", db.Path())
func newFileReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (_ *litestream.FileReplica, err error) {
    // Ensure URL & path are not both specified.
    if c.URL != "" && c.Path != "" {
        return nil, fmt.Errorf("cannot specify url & path for file replica")
    }

    r := litestream.NewFileReplica(db, rc.Name, rc.Path)
    if v := rc.Retention; v > 0 {
    // Parse path from URL, if specified.
    path := c.Path
    if c.URL != "" {
        if _, _, path, err = ParseReplicaURL(c.URL); err != nil {
            return nil, err
        }
    }

    // Ensure path is set explicitly or derived from URL field.
    if path == "" {
        return nil, fmt.Errorf("file replica path required")
    }

    // Expand home prefix and return absolute path.
    if path, err = expand(path); err != nil {
        return nil, err
    }

    // Instantiate replica and apply time fields, if set.
    r := litestream.NewFileReplica(db, c.Name, path)
    if v := c.Retention; v > 0 {
        r.Retention = v
    }
    if v := rc.RetentionCheckInterval; v > 0 {
    if v := c.RetentionCheckInterval; v > 0 {
        r.RetentionCheckInterval = v
    }
    if v := rc.ValidationInterval; v > 0 {
    if v := c.SnapshotInterval; v > 0 {
        r.SnapshotInterval = v
    }
    if v := c.ValidationInterval; v > 0 {
        r.ValidationInterval = v
    }
    return r, nil
}

// newS3ReplicaFromConfig returns a new instance of S3Replica build from config.
func newS3ReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (*s3.Replica, error) {
    // Use global or replica-specific S3 settings.
    accessKeyID := c.AccessKeyID
    if v := rc.AccessKeyID; v != "" {
        accessKeyID = v
func newS3ReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (_ *s3.Replica, err error) {
    // Ensure URL & constituent parts are not both specified.
    if c.URL != "" && c.Path != "" {
        return nil, fmt.Errorf("cannot specify url & path for s3 replica")
    } else if c.URL != "" && c.Bucket != "" {
        return nil, fmt.Errorf("cannot specify url & bucket for s3 replica")
    }
    secretAccessKey := c.SecretAccessKey
    if v := rc.SecretAccessKey; v != "" {
        secretAccessKey = v

    bucket, path := c.Bucket, c.Path
    region, endpoint := c.Region, c.Endpoint

    // Use path style if an endpoint is explicitly set. This works because the
    // only service to not use path style is AWS which does not use an endpoint.
    forcePathStyle := (endpoint != "")
    if v := c.ForcePathStyle; v != nil {
        forcePathStyle = *v
    }
    bucket := c.Bucket
    if v := rc.Bucket; v != "" {
        bucket = v

    // Apply settings from URL, if specified.
    if c.URL != "" {
        _, host, upath, err := ParseReplicaURL(c.URL)
        if err != nil {
            return nil, err
        }
        ubucket, uregion, uendpoint, uforcePathStyle := s3.ParseHost(host)

        // Only apply URL parts to field that have not been overridden.
        if path == "" {
            path = upath
        }
        if bucket == "" {
            bucket = ubucket
        }
        if region == "" {
            region = uregion
        }
        if endpoint == "" {
            endpoint = uendpoint
        }
        if !forcePathStyle {
            forcePathStyle = uforcePathStyle
        }
    region := c.Region
    if v := rc.Region; v != "" {
        region = v
    }

    // Ensure required settings are set.
    if accessKeyID == "" {
        return nil, fmt.Errorf("%s: s3 access key id required", db.Path())
    } else if secretAccessKey == "" {
        return nil, fmt.Errorf("%s: s3 secret access key required", db.Path())
    } else if bucket == "" {
        return nil, fmt.Errorf("%s: s3 bucket required", db.Path())
    if bucket == "" {
        return nil, fmt.Errorf("bucket required for s3 replica")
    }

    // Build replica.
    r := s3.NewReplica(db, rc.Name)
    r.AccessKeyID = accessKeyID
    r.SecretAccessKey = secretAccessKey
    r.Region = region
    r := s3.NewReplica(db, c.Name)
    r.AccessKeyID = c.AccessKeyID
    r.SecretAccessKey = c.SecretAccessKey
    r.Bucket = bucket
    r.Path = rc.Path
    r.Path = path
    r.Region = region
    r.Endpoint = endpoint
    r.ForcePathStyle = forcePathStyle

    if v := rc.Retention; v > 0 {
    if v := c.Retention; v > 0 {
        r.Retention = v
    }
    if v := rc.RetentionCheckInterval; v > 0 {
    if v := c.RetentionCheckInterval; v > 0 {
        r.RetentionCheckInterval = v
    }
    if v := rc.SyncInterval; v > 0 {
    if v := c.SyncInterval; v > 0 {
        r.SyncInterval = v
    }
    if v := rc.ValidationInterval; v > 0 {
    if v := c.SnapshotInterval; v > 0 {
        r.SnapshotInterval = v
    }
    if v := c.ValidationInterval; v > 0 {
        r.ValidationInterval = v
    }
    return r, nil
}

// ParseReplicaURL parses a replica URL.
func ParseReplicaURL(s string) (scheme, host, urlpath string, err error) {
    u, err := url.Parse(s)
    if err != nil {
        return "", "", "", err
    }

    switch u.Scheme {
    case "file":
        scheme, u.Scheme = u.Scheme, ""
        return scheme, "", path.Clean(u.String()), nil

    case "":
        return u.Scheme, u.Host, u.Path, fmt.Errorf("replica url scheme required: %s", s)

    default:
        return u.Scheme, u.Host, strings.TrimPrefix(path.Clean(u.Path), "/"), nil
    }
}
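As a quick check of the parsing rules above — `file` URLs keep their full cleaned path, other schemes split into a host plus a path with the leading slash trimmed, and a missing scheme is an error — here is a standalone copy of the function with a small driver. This is a sketch for illustration; only `ParseReplicaURL` itself comes from the diff.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
	"strings"
)

// ParseReplicaURL mirrors the version in config.go above.
func ParseReplicaURL(s string) (scheme, host, urlpath string, err error) {
	u, err := url.Parse(s)
	if err != nil {
		return "", "", "", err
	}
	switch u.Scheme {
	case "file":
		// Clear the scheme so u.String() yields only the path portion.
		scheme, u.Scheme = u.Scheme, ""
		return scheme, "", path.Clean(u.String()), nil
	case "":
		return u.Scheme, u.Host, u.Path, fmt.Errorf("replica url scheme required: %s", s)
	default:
		// Host becomes the bucket-ish component; path loses its leading slash.
		return u.Scheme, u.Host, strings.TrimPrefix(path.Clean(u.Path), "/"), nil
	}
}

func main() {
	scheme, host, p, _ := ParseReplicaURL("s3://foo/bar")
	fmt.Println(scheme, host, p) // s3 foo bar

	scheme, _, p, _ = ParseReplicaURL("file:///var/lib/db")
	fmt.Println(scheme, p) // file /var/lib/db
}
```

Note how the `file` case explains `newFileReplicaFromConfig`: the host is always empty and the entire URL body is treated as a filesystem path.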

// isURL returns true if s can be parsed and has a scheme.
func isURL(s string) bool {
    return regexp.MustCompile(`^\w+:\/\/`).MatchString(s)
}
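The helper above is what lets the CLI commands accept either a plain database path or a replica URL as the same positional argument. A minimal sketch (the `\/` escapes are dropped here since `/` needs no escaping in Go's RE2 syntax; behavior is identical):

```go
package main

import (
	"fmt"
	"regexp"
)

// isURL reports whether s starts with a URL scheme like "s3://" or "file://".
func isURL(s string) bool {
	return regexp.MustCompile(`^\w+://`).MatchString(s)
}

func main() {
	fmt.Println(isURL("s3://bucket/db")) // true
	fmt.Println(isURL("/var/lib/db"))    // false
	fmt.Println(isURL("~/db.sqlite"))    // false
}
```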

// ReplicaType returns the type based on the type field or extracted from the URL.
func (c *ReplicaConfig) ReplicaType() string {
    scheme, _, _, _ := ParseReplicaURL(c.URL)
    if scheme != "" {
        return scheme
    } else if c.Type != "" {
        return c.Type
    }
    return "file"
}

// DefaultConfigPath returns the default config path.
func DefaultConfigPath() string {
    if v := os.Getenv("LITESTREAM_CONFIG"); v != "" {
        return v
    }
    return defaultConfigPath
}

func registerConfigFlag(fs *flag.FlagSet) *string {
    return fs.String("config", "", "config path")
}

// expand returns an absolute path for s.
func expand(s string) (string, error) {
    // Just expand to absolute path if there is no home directory prefix.
    prefix := "~" + string(os.PathSeparator)
    if s != "~" && !strings.HasPrefix(s, prefix) {
        return filepath.Abs(s)
    }

    // Look up home directory.
    u, err := user.Current()
    if err != nil {
        return "", err
    } else if u.HomeDir == "" {
        return "", fmt.Errorf("cannot expand path %s, no home directory available", s)
    }

    // Return path with tilde replaced by the home directory.
    if s == "~" {
        return u.HomeDir, nil
    }
    return filepath.Join(u.HomeDir, strings.TrimPrefix(s, prefix)), nil
}
17
cmd/litestream/main_notwindows.go
Normal file
@@ -0,0 +1,17 @@
// +build !windows

package main

import (
    "context"
)

const defaultConfigPath = "/etc/litestream.yml"

func isWindowsService() (bool, error) {
    return false, nil
}

func runWindowsService(ctx context.Context) error {
    panic("cannot run windows service as unix process")
}
131
cmd/litestream/main_test.go
Normal file
@@ -0,0 +1,131 @@
package main_test

import (
    "io/ioutil"
    "path/filepath"
    "testing"

    "github.com/benbjohnson/litestream"
    main "github.com/benbjohnson/litestream/cmd/litestream"
    "github.com/benbjohnson/litestream/s3"
)

func TestReadConfigFile(t *testing.T) {
    // Ensure global AWS settings are propagated down to replica configurations.
    t.Run("PropagateGlobalSettings", func(t *testing.T) {
        filename := filepath.Join(t.TempDir(), "litestream.yml")
        if err := ioutil.WriteFile(filename, []byte(`
access-key-id: XXX
secret-access-key: YYY

dbs:
  - path: /path/to/db
    replicas:
      - url: s3://foo/bar
`[1:]), 0666); err != nil {
            t.Fatal(err)
        }

        config, err := main.ReadConfigFile(filename)
        if err != nil {
            t.Fatal(err)
        } else if got, want := config.AccessKeyID, `XXX`; got != want {
            t.Fatalf("AccessKeyID=%v, want %v", got, want)
        } else if got, want := config.SecretAccessKey, `YYY`; got != want {
            t.Fatalf("SecretAccessKey=%v, want %v", got, want)
        } else if got, want := config.DBs[0].Replicas[0].AccessKeyID, `XXX`; got != want {
            t.Fatalf("Replica.AccessKeyID=%v, want %v", got, want)
        } else if got, want := config.DBs[0].Replicas[0].SecretAccessKey, `YYY`; got != want {
            t.Fatalf("Replica.SecretAccessKey=%v, want %v", got, want)
        }
    })
}

func TestNewFileReplicaFromConfig(t *testing.T) {
    r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{Path: "/foo"}, nil)
    if err != nil {
        t.Fatal(err)
    } else if r, ok := r.(*litestream.FileReplica); !ok {
        t.Fatal("unexpected replica type")
    } else if got, want := r.Path(), "/foo"; got != want {
        t.Fatalf("Path=%s, want %s", got, want)
    }
}

func TestNewS3ReplicaFromConfig(t *testing.T) {
    t.Run("URL", func(t *testing.T) {
        r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo/bar"}, nil)
        if err != nil {
            t.Fatal(err)
        } else if r, ok := r.(*s3.Replica); !ok {
            t.Fatal("unexpected replica type")
        } else if got, want := r.Bucket, "foo"; got != want {
            t.Fatalf("Bucket=%s, want %s", got, want)
        } else if got, want := r.Path, "bar"; got != want {
            t.Fatalf("Path=%s, want %s", got, want)
        } else if got, want := r.Region, ""; got != want {
            t.Fatalf("Region=%s, want %s", got, want)
        } else if got, want := r.Endpoint, ""; got != want {
            t.Fatalf("Endpoint=%s, want %s", got, want)
        } else if got, want := r.ForcePathStyle, false; got != want {
            t.Fatalf("ForcePathStyle=%v, want %v", got, want)
        }
    })

    t.Run("MinIO", func(t *testing.T) {
        r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.localhost:9000/bar"}, nil)
        if err != nil {
            t.Fatal(err)
        } else if r, ok := r.(*s3.Replica); !ok {
            t.Fatal("unexpected replica type")
        } else if got, want := r.Bucket, "foo"; got != want {
            t.Fatalf("Bucket=%s, want %s", got, want)
        } else if got, want := r.Path, "bar"; got != want {
            t.Fatalf("Path=%s, want %s", got, want)
        } else if got, want := r.Region, "us-east-1"; got != want {
            t.Fatalf("Region=%s, want %s", got, want)
        } else if got, want := r.Endpoint, "http://localhost:9000"; got != want {
            t.Fatalf("Endpoint=%s, want %s", got, want)
        } else if got, want := r.ForcePathStyle, true; got != want {
            t.Fatalf("ForcePathStyle=%v, want %v", got, want)
        }
    })

    t.Run("Backblaze", func(t *testing.T) {
        r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.s3.us-west-000.backblazeb2.com/bar"}, nil)
        if err != nil {
            t.Fatal(err)
        } else if r, ok := r.(*s3.Replica); !ok {
            t.Fatal("unexpected replica type")
        } else if got, want := r.Bucket, "foo"; got != want {
            t.Fatalf("Bucket=%s, want %s", got, want)
        } else if got, want := r.Path, "bar"; got != want {
            t.Fatalf("Path=%s, want %s", got, want)
        } else if got, want := r.Region, "us-west-000"; got != want {
            t.Fatalf("Region=%s, want %s", got, want)
        } else if got, want := r.Endpoint, "https://s3.us-west-000.backblazeb2.com"; got != want {
            t.Fatalf("Endpoint=%s, want %s", got, want)
        } else if got, want := r.ForcePathStyle, true; got != want {
            t.Fatalf("ForcePathStyle=%v, want %v", got, want)
        }
    })

    t.Run("GCS", func(t *testing.T) {
        r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.storage.googleapis.com/bar"}, nil)
        if err != nil {
            t.Fatal(err)
        } else if r, ok := r.(*s3.Replica); !ok {
            t.Fatal("unexpected replica type")
        } else if got, want := r.Bucket, "foo"; got != want {
            t.Fatalf("Bucket=%s, want %s", got, want)
        } else if got, want := r.Path, "bar"; got != want {
            t.Fatalf("Path=%s, want %s", got, want)
        } else if got, want := r.Region, "us-east-1"; got != want {
            t.Fatalf("Region=%s, want %s", got, want)
        } else if got, want := r.Endpoint, "https://storage.googleapis.com"; got != want {
            t.Fatalf("Endpoint=%s, want %s", got, want)
        } else if got, want := r.ForcePathStyle, true; got != want {
            t.Fatalf("ForcePathStyle=%v, want %v", got, want)
        }
    })
}
105
cmd/litestream/main_windows.go
Normal file
@@ -0,0 +1,105 @@
// +build windows

package main

import (
    "context"
    "io"
    "log"
    "os"

    "golang.org/x/sys/windows"
    "golang.org/x/sys/windows/svc"
    "golang.org/x/sys/windows/svc/eventlog"
)

const defaultConfigPath = `C:\Litestream\litestream.yml`

// serviceName is the Windows Service name.
const serviceName = "Litestream"

// isWindowsService returns true if currently executing within a Windows service.
func isWindowsService() (bool, error) {
    return svc.IsWindowsService()
}

func runWindowsService(ctx context.Context) error {
    // Attempt to install new log service. This will fail if already installed.
    // We don't log the error because we don't have anywhere to log until we open the log.
    _ = eventlog.InstallAsEventCreate(serviceName, eventlog.Error|eventlog.Warning|eventlog.Info)

    elog, err := eventlog.Open(serviceName)
    if err != nil {
        return err
    }
    defer elog.Close()

    // Set eventlog as log writer while running.
    log.SetOutput((*eventlogWriter)(elog))
    defer log.SetOutput(os.Stderr)

    log.Print("Litestream service starting")

    if err := svc.Run(serviceName, &windowsService{ctx: ctx}); err != nil {
        return errStop
    }

    log.Print("Litestream service stopped")
    return nil
}

// windowsService is an interface adapter for svc.Handler.
type windowsService struct {
    ctx context.Context
}

func (s *windowsService) Execute(args []string, r <-chan svc.ChangeRequest, statusCh chan<- svc.Status) (svcSpecificEC bool, exitCode uint32) {
    var err error

    // Notify Windows that the service is starting up.
    statusCh <- svc.Status{State: svc.StartPending}

    // Instantiate replication command and load configuration.
    c := NewReplicateCommand()
    if c.Config, err = ReadConfigFile(DefaultConfigPath()); err != nil {
        log.Printf("cannot load configuration: %s", err)
        return true, 1
    }

    // Execute replication command.
    if err := c.Run(s.ctx); err != nil {
        log.Printf("cannot replicate: %s", err)
        statusCh <- svc.Status{State: svc.StopPending}
        return true, 2
    }

    // Notify Windows that the service is now running.
    statusCh <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop}

    for {
        select {
        case req := <-r:
            switch req.Cmd {
            case svc.Stop:
                c.Close()
                statusCh <- svc.Status{State: svc.StopPending}
                return false, windows.NO_ERROR
            case svc.Interrogate:
                statusCh <- req.CurrentStatus
            default:
                log.Printf("Litestream service received unexpected change request cmd: %d", req.Cmd)
            }
        }
    }
}

// Ensure implementation implements io.Writer interface.
var _ io.Writer = (*eventlogWriter)(nil)

// eventlogWriter is an adapter for using eventlog.Log as an io.Writer.
type eventlogWriter eventlog.Log

func (w *eventlogWriter) Write(p []byte) (n int, err error) {
    elog := (*eventlog.Log)(w)
    return 0, elog.Info(1, string(p))
}
@@ -2,7 +2,6 @@ package main

import (
    "context"
    "errors"
    "flag"
    "fmt"
    "log"
@@ -10,7 +9,7 @@ import (
    "net/http"
    _ "net/http/pprof"
    "os"
    "os/signal"
    "time"

    "github.com/benbjohnson/litestream"
    "github.com/benbjohnson/litestream/s3"
@@ -19,52 +18,75 @@ import (

// ReplicateCommand represents a command that continuously replicates SQLite databases.
type ReplicateCommand struct {
    ConfigPath string
    Config Config

    // List of managed databases specified in the config.
    DBs []*litestream.DB
}

// Run loads all databases specified in the configuration.
func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
func NewReplicateCommand() *ReplicateCommand {
    return &ReplicateCommand{}
}

// ParseFlags parses the CLI flags and loads the configuration file.
func (c *ReplicateCommand) ParseFlags(ctx context.Context, args []string) (err error) {
    fs := flag.NewFlagSet("litestream-replicate", flag.ContinueOnError)
    verbose := fs.Bool("v", false, "verbose logging")
    registerConfigFlag(fs, &c.ConfigPath)
    tracePath := fs.String("trace", "", "trace path")
    configPath := registerConfigFlag(fs)
    fs.Usage = c.Usage
    if err := fs.Parse(args); err != nil {
        return err
    }

    // Load configuration.
    if c.ConfigPath == "" {
        return errors.New("-config required")
    // Load configuration or use CLI args to build db/replica.
    if fs.NArg() == 1 {
        return fmt.Errorf("must specify at least one replica URL for %s", fs.Arg(0))
    } else if fs.NArg() > 1 {
        if *configPath != "" {
            return fmt.Errorf("cannot specify a replica URL and the -config flag")
        }
    config, err := ReadConfigFile(c.ConfigPath)
    if err != nil {

        dbConfig := &DBConfig{Path: fs.Arg(0)}
        for _, u := range fs.Args()[1:] {
            dbConfig.Replicas = append(dbConfig.Replicas, &ReplicaConfig{
                URL:          u,
                SyncInterval: 1 * time.Second,
            })
        }
        c.Config.DBs = []*DBConfig{dbConfig}
    } else {
        if *configPath == "" {
            *configPath = DefaultConfigPath()
        }
        if c.Config, err = ReadConfigFile(*configPath); err != nil {
            return err
        }
    }

    // Enable trace logging.
    if *verbose {
        litestream.Tracef = log.Printf
    if *tracePath != "" {
        f, err := os.Create(*tracePath)
        if err != nil {
            return err
        }
        defer f.Close()
        litestream.Tracef = log.New(f, "", log.LstdFlags|log.LUTC|log.Lshortfile).Printf
    }

    // Setup signal handler.
    ctx, cancel := context.WithCancel(ctx)
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, os.Interrupt)
    go func() { <-ch; cancel() }()
    return nil
}

// Run loads all databases specified in the configuration.
func (c *ReplicateCommand) Run(ctx context.Context) (err error) {
    // Display version information.
    fmt.Printf("litestream %s\n", Version)
    log.Printf("litestream %s", Version)

    if len(config.DBs) == 0 {
        fmt.Println("no databases specified in configuration")
    if len(c.Config.DBs) == 0 {
        log.Println("no databases specified in configuration")
    }

    for _, dbConfig := range config.DBs {
        db, err := newDBFromConfig(&config, dbConfig)
    for _, dbConfig := range c.Config.DBs {
        db, err := NewDBFromConfig(dbConfig)
        if err != nil {
            return err
        }
@@ -78,41 +100,37 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {

    // Notify user that initialization is done.
    for _, db := range c.DBs {
        fmt.Printf("initialized db: %s\n", db.Path())
        log.Printf("initialized db: %s", db.Path())
        for _, r := range db.Replicas {
            switch r := r.(type) {
            case *litestream.FileReplica:
                fmt.Printf("replicating to: name=%q type=%q path=%q\n", r.Name(), r.Type(), r.Path())
                log.Printf("replicating to: name=%q type=%q path=%q", r.Name(), r.Type(), r.Path())
            case *s3.Replica:
                fmt.Printf("replicating to: name=%q type=%q bucket=%q path=%q region=%q\n", r.Name(), r.Type(), r.Bucket, r.Path, r.Region)
                log.Printf("replicating to: name=%q type=%q bucket=%q path=%q region=%q endpoint=%q sync-interval=%s", r.Name(), r.Type(), r.Bucket, r.Path, r.Region, r.Endpoint, r.SyncInterval)
            default:
                fmt.Printf("replicating to: name=%q type=%q\n", r.Name(), r.Type())
                log.Printf("replicating to: name=%q type=%q", r.Name(), r.Type())
            }
        }
    }

    // Serve metrics over HTTP if enabled.
    if config.Addr != "" {
        _, port, _ := net.SplitHostPort(config.Addr)
        fmt.Printf("serving metrics on http://localhost:%s/metrics\n", port)
    if c.Config.Addr != "" {
        hostport := c.Config.Addr
        if host, port, _ := net.SplitHostPort(c.Config.Addr); port == "" {
            return fmt.Errorf("must specify port for bind address: %q", c.Config.Addr)
        } else if host == "" {
            hostport = net.JoinHostPort("localhost", port)
        }

        log.Printf("serving metrics on http://%s/metrics", hostport)
        go func() {
            http.Handle("/metrics", promhttp.Handler())
            if err := http.ListenAndServe(config.Addr, nil); err != nil {
            if err := http.ListenAndServe(c.Config.Addr, nil); err != nil {
                log.Printf("cannot start metrics server: %s", err)
            }
        }()
    }

    // Wait for signal to stop program.
    <-ctx.Done()
    signal.Reset()

    // Gracefully close
    if err := c.Close(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }

    return nil
}

@@ -120,32 +138,38 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
func (c *ReplicateCommand) Close() (err error) {
    for _, db := range c.DBs {
        if e := db.SoftClose(); e != nil {
            fmt.Printf("error closing db: path=%s err=%s\n", db.Path(), e)
            log.Printf("error closing db: path=%s err=%s", db.Path(), e)
            if err == nil {
                err = e
            }
        }
    }
    // TODO(windows): Clear DBs
    return err
}

// Usage prints the help screen to STDOUT.
func (c *ReplicateCommand) Usage() {
    fmt.Printf(`
The replicate command starts a server to monitor & replicate databases
specified in your configuration file.
The replicate command starts a server to monitor & replicate databases.
You can specify your database & replicas in a configuration file or you can
replicate a single database file by specifying its path and its replicas in the
command line arguments.

Usage:

    litestream replicate [arguments]

    litestream replicate [arguments] DB_PATH REPLICA_URL [REPLICA_URL...]

Arguments:

    -config PATH
        Specifies the configuration file. Defaults to %s
        Specifies the configuration file.
        Defaults to %s

    -v
        Enable verbose logging output.
    -trace PATH
        Write verbose trace logging to PATH.

`[1:], DefaultConfigPath())
}
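The new metrics setup in `Run` replaces the old "assume localhost" behavior: a bind address without a port is now rejected, and a missing host defaults to `localhost` via `net.JoinHostPort`. A minimal standalone sketch of those rules (`normalizeAddr` is a hypothetical helper name, not part of the diff):

```go
package main

import (
	"fmt"
	"net"
)

// normalizeAddr applies the same rules as the metrics setup in the replicate
// command: a port is required, and an empty host defaults to "localhost".
func normalizeAddr(addr string) (string, error) {
	// SplitHostPort's error is ignored, matching the original code; a bad
	// address simply yields an empty port and is rejected below.
	host, port, _ := net.SplitHostPort(addr)
	if port == "" {
		return "", fmt.Errorf("must specify port for bind address: %q", addr)
	}
	if host == "" {
		return net.JoinHostPort("localhost", port), nil
	}
	return addr, nil
}

func main() {
	hp, _ := normalizeAddr(":9090")
	fmt.Println(hp) // localhost:9090

	hp, _ = normalizeAddr("0.0.0.0:9090")
	fmt.Println(hp) // 0.0.0.0:9090
}
```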

@@ -7,7 +7,6 @@ import (
    "fmt"
    "log"
    "os"
    "path/filepath"
    "time"

    "github.com/benbjohnson/litestream"
@@ -18,10 +17,11 @@ type RestoreCommand struct{}

// Run executes the command.
func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
    var configPath string
    opt := litestream.NewRestoreOptions()
    opt.Verbose = true

    fs := flag.NewFlagSet("litestream-restore", flag.ContinueOnError)
    registerConfigFlag(fs, &configPath)
    configPath := registerConfigFlag(fs)
    fs.StringVar(&opt.OutputPath, "o", "", "output path")
    fs.StringVar(&opt.ReplicaName, "replica", "", "replica name")
    fs.StringVar(&opt.Generation, "generation", "", "generation name")
@@ -33,20 +33,11 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
    if err := fs.Parse(args); err != nil {
        return err
    } else if fs.NArg() == 0 || fs.Arg(0) == "" {
        return fmt.Errorf("database path required")
        return fmt.Errorf("database path or replica URL required")
    } else if fs.NArg() > 1 {
        return fmt.Errorf("too many arguments")
    }

    // Load configuration.
    if configPath == "" {
        return errors.New("-config required")
    }
    config, err := ReadConfigFile(configPath)
    if err != nil {
        return err
    }

    // Parse timestamp, if specified.
    if *timestampStr != "" {
        if opt.Timestamp, err = time.Parse(time.RFC3339, *timestampStr); err != nil {
@@ -64,23 +55,76 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
        opt.Logger = log.New(os.Stderr, "", log.LstdFlags)
    }

    // Determine absolute path for database.
    dbPath, err := filepath.Abs(fs.Arg(0))
    if err != nil {
    // Determine replica & generation to restore from.
    var r litestream.Replica
    if isURL(fs.Arg(0)) {
        if *configPath != "" {
            return fmt.Errorf("cannot specify a replica URL and the -config flag")
        }
        if r, err = c.loadFromURL(ctx, fs.Arg(0), &opt); err != nil {
            return err
        }
    } else {
        if *configPath == "" {
            *configPath = DefaultConfigPath()
        }
        if r, err = c.loadFromConfig(ctx, fs.Arg(0), *configPath, &opt); err != nil {
            return err
        }
    }

    // Instantiate DB.
    // Return an error if no matching targets found.
    if opt.Generation == "" {
        return fmt.Errorf("no matching backups found")
    }

    return litestream.RestoreReplica(ctx, r, opt)
}

// loadFromURL creates a replica & updates the restore options from a replica URL.
func (c *RestoreCommand) loadFromURL(ctx context.Context, replicaURL string, opt *litestream.RestoreOptions) (litestream.Replica, error) {
    r, err := NewReplicaFromConfig(&ReplicaConfig{URL: replicaURL}, nil)
    if err != nil {
        return nil, err
    }
    opt.Generation, _, err = litestream.CalcReplicaRestoreTarget(ctx, r, *opt)
    return r, err
}

// loadFromConfig returns a replica & updates the restore options from a DB reference.
func (c *RestoreCommand) loadFromConfig(ctx context.Context, dbPath, configPath string, opt *litestream.RestoreOptions) (litestream.Replica, error) {
    // Load configuration.
    config, err := ReadConfigFile(configPath)
    if err != nil {
        return nil, err
    }

    // Lookup database from configuration file by path.
    if dbPath, err = expand(dbPath); err != nil {
        return nil, err
    }
    dbConfig := config.DBConfig(dbPath)
    if dbConfig == nil {
        return fmt.Errorf("database not found in config: %s", dbPath)
        return nil, fmt.Errorf("database not found in config: %s", dbPath)
    }
    db, err := newDBFromConfig(&config, dbConfig)
    db, err := NewDBFromConfig(dbConfig)
    if err != nil {
        return err
        return nil, err
    }

    return db.Restore(ctx, opt)
    // Restore into original database path if not specified.
    if opt.OutputPath == "" {
        opt.OutputPath = dbPath
    }

    // Determine the appropriate replica & generation to restore from,
    r, generation, err := db.CalcRestoreTarget(ctx, *opt)
    if err != nil {
        return nil, err
    }
    opt.Generation = generation

    return r, nil
}

// Usage prints the help screen to STDOUT.
@@ -90,7 +134,9 @@ The restore command recovers a database from a previous snapshot and WAL.

Usage:

    litestream restore [arguments] DB
    litestream restore [arguments] DB_PATH

    litestream restore [arguments] REPLICA_URL

Arguments:

@@ -2,11 +2,9 @@ package main

import (
	"context"
	"errors"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"text/tabwriter"
	"time"

@@ -18,9 +16,8 @@ type SnapshotsCommand struct{}

// Run executes the command.
func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
	var configPath string
	fs := flag.NewFlagSet("litestream-snapshots", flag.ContinueOnError)
	registerConfigFlag(fs, &configPath)
	configPath := registerConfigFlag(fs)
	replicaName := fs.String("replica", "", "replica name")
	fs.Usage = c.Usage
	if err := fs.Parse(args); err != nil {
@@ -31,37 +28,47 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
		return fmt.Errorf("too many arguments")
	}

	var db *litestream.DB
	var r litestream.Replica
	if isURL(fs.Arg(0)) {
		if *configPath != "" {
			return fmt.Errorf("cannot specify a replica URL and the -config flag")
		}
		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
			return err
		}
	} else {
		if *configPath == "" {
			*configPath = DefaultConfigPath()
		}

	// Load configuration.
	if configPath == "" {
		return errors.New("-config required")
	}
	config, err := ReadConfigFile(configPath)
		config, err := ReadConfigFile(*configPath)
	if err != nil {
		return err
	}

	// Determine absolute path for database.
	dbPath, err := filepath.Abs(fs.Arg(0))
	if err != nil {
		// Lookup database from configuration file by path.
		if path, err := expand(fs.Arg(0)); err != nil {
		return err
		} else if dbc := config.DBConfig(path); dbc == nil {
			return fmt.Errorf("database not found in config: %s", path)
		} else if db, err = NewDBFromConfig(dbc); err != nil {
			return err
	}

	// Instantiate DB.
	dbConfig := config.DBConfig(dbPath)
	if dbConfig == nil {
		return fmt.Errorf("database not found in config: %s", dbPath)
		// Filter by replica, if specified.
		if *replicaName != "" {
			if r = db.Replica(*replicaName); r == nil {
				return fmt.Errorf("replica %q not found for database %q", *replicaName, db.Path())
			}
		}
	}
	db, err := newDBFromConfig(&config, dbConfig)
	if err != nil {
		return err
	}

	// Find snapshots by db or replica.
	var infos []*litestream.SnapshotInfo
	if *replicaName != "" {
		if r := db.Replica(*replicaName); r == nil {
			return fmt.Errorf("replica %q not found for database %q", *replicaName, dbPath)
		} else if infos, err = r.Snapshots(ctx); err != nil {
	if r != nil {
		if infos, err = r.Snapshots(ctx); err != nil {
			return err
		}
	} else {
@@ -71,7 +78,9 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
	}

	// List all snapshots.
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 1, '\t', 0)
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	defer w.Flush()

	fmt.Fprintln(w, "replica\tgeneration\tindex\tsize\tcreated")
	for _, info := range infos {
		fmt.Fprintf(w, "%s\t%s\t%d\t%d\t%s\n",
@@ -82,7 +91,6 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
			info.CreatedAt.Format(time.RFC3339),
		)
	}
	w.Flush()

	return nil
}
@@ -90,11 +98,13 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {

// Usage prints the help screen to STDOUT.
func (c *SnapshotsCommand) Usage() {
	fmt.Printf(`
The snapshots command lists all snapshots available for a database.
The snapshots command lists all snapshots available for a database or replica.

Usage:

	litestream snapshots [arguments] DB
	litestream snapshots [arguments] DB_PATH

	litestream snapshots [arguments] REPLICA_URL

Arguments:

@@ -105,7 +115,6 @@ Arguments:
	-replica NAME
	    Optional, filter by a specific replica.


Examples:

	# List all snapshots for a database.
@@ -114,6 +123,9 @@ Examples:
	# List all snapshots on S3.
	$ litestream snapshots -replica s3 /path/to/db

	# List all snapshots by replica URL.
	$ litestream snapshots s3://mybkt/db

`[1:],
		DefaultConfigPath(),
	)
@@ -2,11 +2,9 @@ package main

import (
	"context"
	"errors"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"text/tabwriter"
	"time"

@@ -18,9 +16,8 @@ type WALCommand struct{}

// Run executes the command.
func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
	var configPath string
	fs := flag.NewFlagSet("litestream-wal", flag.ContinueOnError)
	registerConfigFlag(fs, &configPath)
	configPath := registerConfigFlag(fs)
	replicaName := fs.String("replica", "", "replica name")
	generation := fs.String("generation", "", "generation name")
	fs.Usage = c.Usage
@@ -32,37 +29,47 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
		return fmt.Errorf("too many arguments")
	}

	var db *litestream.DB
	var r litestream.Replica
	if isURL(fs.Arg(0)) {
		if *configPath != "" {
			return fmt.Errorf("cannot specify a replica URL and the -config flag")
		}
		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
			return err
		}
	} else {
		if *configPath == "" {
			*configPath = DefaultConfigPath()
		}

	// Load configuration.
	if configPath == "" {
		return errors.New("-config required")
	}
	config, err := ReadConfigFile(configPath)
		config, err := ReadConfigFile(*configPath)
	if err != nil {
		return err
	}

	// Determine absolute path for database.
	dbPath, err := filepath.Abs(fs.Arg(0))
	if err != nil {
		// Lookup database from configuration file by path.
		if path, err := expand(fs.Arg(0)); err != nil {
		return err
		} else if dbc := config.DBConfig(path); dbc == nil {
			return fmt.Errorf("database not found in config: %s", path)
		} else if db, err = NewDBFromConfig(dbc); err != nil {
			return err
	}

	// Instantiate DB.
	dbConfig := config.DBConfig(dbPath)
	if dbConfig == nil {
		return fmt.Errorf("database not found in config: %s", dbPath)
	}
	db, err := newDBFromConfig(&config, dbConfig)
	if err != nil {
		return err
	}

	// Find snapshots by db or replica.
	var infos []*litestream.WALInfo
		// Filter by replica, if specified.
	if *replicaName != "" {
		if r := db.Replica(*replicaName); r == nil {
			return fmt.Errorf("replica %q not found for database %q", *replicaName, dbPath)
		} else if infos, err = r.WALs(ctx); err != nil {
			if r = db.Replica(*replicaName); r == nil {
				return fmt.Errorf("replica %q not found for database %q", *replicaName, db.Path())
			}
		}
	}

	// Find WAL files by db or replica.
	var infos []*litestream.WALInfo
	if r != nil {
		if infos, err = r.WALs(ctx); err != nil {
			return err
		}
	} else {
@@ -72,7 +79,9 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
	}

	// List all WAL files.
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 1, '\t', 0)
	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	defer w.Flush()

	fmt.Fprintln(w, "replica\tgeneration\tindex\toffset\tsize\tcreated")
	for _, info := range infos {
		if *generation != "" && info.Generation != *generation {
@@ -88,7 +97,6 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
			info.CreatedAt.Format(time.RFC3339),
		)
	}
	w.Flush()

	return nil
}
@@ -100,7 +108,9 @@ The wal command lists all wal files available for a database.

Usage:

	litestream wal [arguments] DB
	litestream wal [arguments] DB_PATH

	litestream wal [arguments] REPLICA_URL

Arguments:

@@ -114,14 +124,16 @@ Arguments:
	-generation NAME
	    Optional, filter by a specific generation.


Examples:

	# List all WAL files for a database.
	$ litestream wal /path/to/db

	# List all WAL files on S3 for a specific generation.
	$ litestream snapshots -replica s3 -generation xxxxxxxx /path/to/db
	$ litestream wal -replica s3 -generation xxxxxxxx /path/to/db

	# List all WAL files for replica URL.
	$ litestream wal s3://mybkt/db

`[1:],
		DefaultConfigPath(),
314	db.go
@@ -33,6 +33,10 @@ const (
	DefaultMaxCheckpointPageN = 10000
)

// MaxIndex is the maximum possible WAL index.
// If this index is reached then a new generation will be started.
const MaxIndex = 0x7FFFFFFF

// BusyTimeout is the timeout to wait for EBUSY from SQLite.
const BusyTimeout = 1 * time.Second

@@ -41,6 +45,7 @@ type DB struct {
	mu       sync.RWMutex
	path     string        // path to database
	db       *sql.DB       // target database
	f        *os.File      // long-running db file descriptor
	rtx      *sql.Tx       // long running read transaction
	pageSize int           // page size, in bytes
	notify   chan struct{} // closes on WAL change
@@ -255,6 +260,11 @@ func (db *DB) PageSize() int {

// Open initializes the background monitoring goroutine.
func (db *DB) Open() (err error) {
	// Validate fields on database.
	if db.MinCheckpointPageN <= 0 {
		return fmt.Errorf("minimum checkpoint page count required")
	}

	// Validate that all replica names are unique.
	m := make(map[string]struct{})
	for _, r := range db.Replicas {
@@ -281,15 +291,23 @@ func (db *DB) Open() (err error) {
// Close releases the read lock & closes the database. This method should only
// be called by tests as it causes the underlying database to be checkpointed.
func (db *DB) Close() (err error) {
	if e := db.SoftClose(); e != nil && err == nil {
	// Ensure replicas all stop replicating.
	for _, r := range db.Replicas {
		r.Stop(true)
	}

	if db.rtx != nil {
		if e := db.releaseReadLock(); e != nil && err == nil {
			err = e
		}
	}

	if db.db != nil {
		if e := db.db.Close(); e != nil && err == nil {
			err = e
		}
	}

	return err
}

@@ -377,11 +395,34 @@ func (db *DB) init() (err error) {
	dsn := db.path
	dsn += fmt.Sprintf("?_busy_timeout=%d", BusyTimeout.Milliseconds())

	// Connect to SQLite database & enable WAL.
	// Connect to SQLite database.
	if db.db, err = sql.Open("sqlite3", dsn); err != nil {
		return err
	} else if _, err := db.db.Exec(`PRAGMA journal_mode = wal;`); err != nil {
	}

	// Open long-running database file descriptor. Required for non-OFD locks.
	if db.f, err = os.Open(db.path); err != nil {
		return fmt.Errorf("open db file descriptor: %w", err)
	}

	// Ensure database is closed if init fails.
	// Initialization can retry on next sync.
	defer func() {
		if err != nil {
			_ = db.releaseReadLock()
			db.db.Close()
			db.f.Close()
			db.db, db.f = nil, nil
		}
	}()

	// Enable WAL and ensure it is set. New mode should be returned on success:
	// https://www.sqlite.org/pragma.html#pragma_journal_mode
	var mode string
	if err := db.db.QueryRow(`PRAGMA journal_mode = wal;`).Scan(&mode); err != nil {
		return fmt.Errorf("enable wal: %w", err)
	} else if mode != "wal" {
		return fmt.Errorf("enable wal failed, mode=%q", mode)
	}

	// Disable autocheckpoint for litestream's connection.
@@ -421,7 +462,7 @@ func (db *DB) init() (err error) {

	// If we have an existing shadow WAL, ensure the headers match.
	if err := db.verifyHeadersMatch(); err != nil {
		log.Printf("%s: cannot determine last wal position, clearing generation (%s)", db.path, err)
		log.Printf("%s: init: cannot determine last wal position, clearing generation; %s", db.path, err)
		if err := os.Remove(db.GenerationNamePath()); err != nil && !os.IsNotExist(err) {
			return fmt.Errorf("remove generation name: %w", err)
		}
@@ -471,7 +512,7 @@ func (db *DB) verifyHeadersMatch() error {
	}

	if !bytes.Equal(hdr0, hdr1) {
		return fmt.Errorf("wal header mismatch")
		return fmt.Errorf("wal header mismatch %x <> %x on %s", hdr0, hdr1, shadowWALPath)
	}
	return nil
}
@@ -565,7 +606,7 @@ func (db *DB) SoftClose() (err error) {

	// Ensure replicas all stop replicating.
	for _, r := range db.Replicas {
		r.Stop()
		r.Stop(false)
	}

	if db.rtx != nil {
@@ -622,7 +663,7 @@ func (db *DB) CurrentGeneration() (string, error) {
		return "", err
	}

	// TODO: Verify if generation directory exists. If not, delete.
	// TODO: Verify if generation directory exists. If not, delete name file.

	generation := strings.TrimSpace(string(buf))
	if len(generation) != GenerationNameLen {
@@ -736,7 +777,7 @@ func (db *DB) Sync() (err error) {
		if info.generation, err = db.createGeneration(); err != nil {
			return fmt.Errorf("create generation: %w", err)
		}
		log.Printf("%s: new generation %q, %s", db.path, info.generation, info.reason)
		log.Printf("%s: sync: new generation %q, %s", db.path, info.generation, info.reason)

		// Clear shadow wal info.
		info.shadowWALPath = db.ShadowWALPath(info.generation, 0)
@@ -839,15 +880,19 @@ func (db *DB) verify() (info syncInfo, err error) {
	if err != nil {
		return info, err
	}
	info.walSize = fi.Size()
	info.walSize = frameAlign(fi.Size(), db.pageSize)
	info.walModTime = fi.ModTime()
	db.walSizeGauge.Set(float64(fi.Size()))

	// Open shadow WAL to copy append to.
	info.shadowWALPath, err = db.CurrentShadowWALPath(info.generation)
	index, _, err := db.CurrentShadowWALIndex(info.generation)
	if err != nil {
		return info, fmt.Errorf("cannot determine shadow WAL: %w", err)
		return info, fmt.Errorf("cannot determine shadow WAL index: %w", err)
	} else if index >= MaxIndex {
		info.reason = "max index exceeded"
		return info, nil
	}
	info.shadowWALPath = db.ShadowWALPath(generation, index)

	// Determine shadow WAL current size.
	fi, err = os.Stat(info.shadowWALPath)
@@ -859,7 +904,6 @@ func (db *DB) verify() (info syncInfo, err error) {
	}
	info.shadowWALSize = frameAlign(fi.Size(), db.pageSize)

	// Truncate shadow WAL if there is a partial page.
	// Exit if shadow WAL does not contain a full header.
	if info.shadowWALSize < WALHeaderSize {
		info.reason = "short shadow wal"
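`frameAlign`, used above to clamp both the real and shadow WAL sizes, rounds a byte offset down to the end of the last complete WAL frame. Its body is not part of this diff; the following is a plausible sketch of the arithmetic, with the standard SQLite WAL layout (32-byte file header, 24-byte per-frame header) stated as assumptions:

```go
package main

import "fmt"

const (
	walHeaderSize      = 32 // SQLite WAL file header (assumed, per SQLite file format)
	walFrameHeaderSize = 24 // per-frame header preceding each page (assumed)
)

// frameAlign rounds offset down to the last complete frame boundary.
// Offsets shorter than a full WAL header align to zero.
func frameAlign(offset int64, pageSize int) int64 {
	if offset < walHeaderSize {
		return 0
	}
	frameSize := int64(walFrameHeaderSize + pageSize)
	frameN := (offset - walHeaderSize) / frameSize
	return frameN*frameSize + walHeaderSize
}

func main() {
	// A WAL with two full 4KB frames plus 100 trailing bytes of a partial frame:
	// the partial frame is dropped, leaving 32 + 2*(24+4096) = 8272 bytes.
	fmt.Println(frameAlign(32+2*(24+4096)+100, 4096))
	fmt.Println(frameAlign(10, 4096)) // shorter than the header
}
```

Aligning both sizes this way means a torn write at the WAL tail never shifts subsequent frame offsets between the real and shadow copies.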
@@ -892,9 +936,9 @@ func (db *DB) verify() (info syncInfo, err error) {
	// Verify last page synced still matches.
	if info.shadowWALSize > WALHeaderSize {
		offset := info.shadowWALSize - int64(db.pageSize+WALFrameHeaderSize)
		if buf0, err := readFileAt(db.WALPath(), offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
		if buf0, err := readWALFileAt(db.WALPath(), offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
			return info, fmt.Errorf("cannot read last synced wal page: %w", err)
		} else if buf1, err := readFileAt(info.shadowWALPath, offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
		} else if buf1, err := readWALFileAt(info.shadowWALPath, offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
			return info, fmt.Errorf("cannot read last synced shadow wal page: %w", err)
		} else if !bytes.Equal(buf0, buf1) {
			info.reason = "wal overwritten by another process"
@@ -1017,65 +1061,65 @@ func (db *DB) copyToShadowWAL(filename string) (newSize int64, err error) {
		return 0, fmt.Errorf("last checksum: %w", err)
	}

	// Seek to correct position on both files.
	// Seek to correct position on real wal.
	if _, err := r.Seek(origSize, io.SeekStart); err != nil {
		return 0, fmt.Errorf("wal seek: %w", err)
		return 0, fmt.Errorf("real wal seek: %w", err)
	} else if _, err := w.Seek(origSize, io.SeekStart); err != nil {
		return 0, fmt.Errorf("shadow wal seek: %w", err)
	}

	// Read through WAL from last position to find the page of the last
	// committed transaction.
	tmpSz := origSize
	frame := make([]byte, db.pageSize+WALFrameHeaderSize)
	var buf bytes.Buffer
	offset := origSize
	lastCommitSize := origSize
	buf := make([]byte, db.pageSize+WALFrameHeaderSize)
	for {
		Tracef("%s: copy-shadow: %s @ %d", db.path, filename, tmpSz)

		// Read next page from WAL file.
		if _, err := io.ReadFull(r, buf); err == io.EOF || err == io.ErrUnexpectedEOF {
			Tracef("%s: copy-shadow: break %s", db.path, err)
		if _, err := io.ReadFull(r, frame); err == io.EOF || err == io.ErrUnexpectedEOF {
			Tracef("%s: copy-shadow: break %s @ %d; err=%s", db.path, filename, offset, err)
			break // end of file or partial page
		} else if err != nil {
			return 0, fmt.Errorf("read wal: %w", err)
		}

		// Read frame salt & compare to header salt. Stop reading on mismatch.
		salt0 := binary.BigEndian.Uint32(buf[8:])
		salt1 := binary.BigEndian.Uint32(buf[12:])
		salt0 := binary.BigEndian.Uint32(frame[8:])
		salt1 := binary.BigEndian.Uint32(frame[12:])
		if salt0 != hsalt0 || salt1 != hsalt1 {
			Tracef("%s: copy-shadow: break: salt mismatch", db.path)
			break
		}

		// Verify checksum of page is valid.
		fchksum0 := binary.BigEndian.Uint32(buf[16:])
		fchksum1 := binary.BigEndian.Uint32(buf[20:])
		chksum0, chksum1 = Checksum(bo, chksum0, chksum1, buf[:8])  // frame header
		chksum0, chksum1 = Checksum(bo, chksum0, chksum1, buf[24:]) // frame data
		fchksum0 := binary.BigEndian.Uint32(frame[16:])
		fchksum1 := binary.BigEndian.Uint32(frame[20:])
		chksum0, chksum1 = Checksum(bo, chksum0, chksum1, frame[:8])  // frame header
		chksum0, chksum1 = Checksum(bo, chksum0, chksum1, frame[24:]) // frame data
		if chksum0 != fchksum0 || chksum1 != fchksum1 {
			return 0, fmt.Errorf("checksum mismatch: offset=%d (%x,%x) != (%x,%x)", tmpSz, chksum0, chksum1, fchksum0, fchksum1)
			log.Printf("copy shadow: checksum mismatch, skipping: offset=%d (%x,%x) != (%x,%x)", offset, chksum0, chksum1, fchksum0, fchksum1)
			break
		}

		// Add page to the new size of the shadow WAL.
		tmpSz += int64(len(buf))
		buf.Write(frame)

		// Mark commit record.
		newDBSize := binary.BigEndian.Uint32(buf[4:])
		Tracef("%s: copy-shadow: ok %s offset=%d salt=%x %x", db.path, filename, offset, salt0, salt1)
		offset += int64(len(frame))

		// Flush to shadow WAL if commit record.
		newDBSize := binary.BigEndian.Uint32(frame[4:])
		if newDBSize != 0 {
			lastCommitSize = tmpSz
			if _, err := buf.WriteTo(w); err != nil {
				return 0, fmt.Errorf("write shadow wal: %w", err)
			}
			buf.Reset()
			lastCommitSize = offset
		}
	}

	// Seek to correct position on both files.
	if _, err := r.Seek(origSize, io.SeekStart); err != nil {
		return 0, fmt.Errorf("wal seek: %w", err)
	} else if _, err := w.Seek(origSize, io.SeekStart); err != nil {
		return 0, fmt.Errorf("shadow wal seek: %w", err)
	}

	// Copy bytes, sync & close.
	if _, err := io.CopyN(w, r, lastCommitSize-origSize); err != nil {
		return 0, err
	} else if err := w.Sync(); err != nil {
	// Sync & close.
	if err := w.Sync(); err != nil {
		return 0, err
	} else if err := w.Close(); err != nil {
		return 0, err
@@ -1098,6 +1142,8 @@ func (db *DB) ShadowWALReader(pos Pos) (r *ShadowWALReader, err error) {
		return nil, err
	} else if r.N() > 0 {
		return r, nil
	} else if err := r.Close(); err != nil { // no data, close, try next
		return nil, err
	}

	// Otherwise attempt to read the start of the next WAL file.
@@ -1171,6 +1217,9 @@ type ShadowWALReader struct {
	pos Pos
}

// Name returns the filename of the underlying file.
func (r *ShadowWALReader) Name() string { return r.f.Name() }

// Close closes the underlying WAL file handle.
func (r *ShadowWALReader) Close() error { return r.f.Close() }

@@ -1264,7 +1313,7 @@ func (db *DB) checkpoint(mode string) (err error) {
	if err := db.db.QueryRow(rawsql).Scan(&row[0], &row[1], &row[2]); err != nil {
		return err
	}
	log.Printf("%s: checkpoint: mode=%v (%d,%d,%d)", db.path, mode, row[0], row[1], row[2])
	Tracef("%s: checkpoint: mode=%v (%d,%d,%d)", db.path, mode, row[0], row[1], row[2])

	// Reacquire the read lock immediately after the checkpoint.
	if err := db.acquireReadLock(); err != nil {
@@ -1288,6 +1337,11 @@ func (db *DB) checkpointAndInit(generation, mode string) error {
		return err
	}

	// Copy shadow WAL before checkpoint to copy as much as possible.
	if _, err := db.copyToShadowWAL(shadowWALPath); err != nil {
		return fmt.Errorf("cannot copy to end of shadow wal before checkpoint: %w", err)
	}

	// Execute checkpoint and immediately issue a write to the WAL to ensure
	// a new page is written.
	if err := db.checkpoint(mode); err != nil {
@@ -1303,6 +1357,21 @@ func (db *DB) checkpointAndInit(generation, mode string) error {
		return nil
	}

	// Start a transaction. This will be promoted immediately after.
	tx, err := db.db.Begin()
	if err != nil {
		return fmt.Errorf("begin: %w", err)
	}
	defer func() { _ = rollback(tx) }()

	// Insert into the lock table to promote to a write tx. The lock table
	// insert will never actually occur because our tx will be rolled back,
	// however, it will ensure our tx grabs the write lock. Unfortunately,
	// we can't call "BEGIN IMMEDIATE" as we are already in a transaction.
	if _, err := tx.ExecContext(db.ctx, `INSERT INTO _litestream_lock (id) VALUES (1);`); err != nil {
		return fmt.Errorf("_litestream_lock: %w", err)
	}

	// Copy the end of the previous WAL before starting a new shadow WAL.
	if _, err := db.copyToShadowWAL(shadowWALPath); err != nil {
		return fmt.Errorf("cannot copy to end of shadow wal: %w", err)
@@ -1320,6 +1389,10 @@ func (db *DB) checkpointAndInit(generation, mode string) error {
		return fmt.Errorf("cannot init shadow wal file: name=%s err=%w", newShadowWALPath, err)
	}

	// Release write lock before checkpointing & exiting.
	if err := tx.Rollback(); err != nil {
		return fmt.Errorf("rollback post-checkpoint tx: %w", err)
	}
	return nil
}

@@ -1343,15 +1416,17 @@ func (db *DB) monitor() {
	}
}

// Restore restores the database from a replica based on the options given.
// RestoreReplica restores the database from a replica based on the options given.
// This method will restore into opt.OutputPath, if specified, or into the
// DB's original database path. It can optionally restore from a specific
// replica or generation or it will automatically choose the best one. Finally,
// a timestamp can be specified to restore the database to a specific
// point-in-time.
func (db *DB) Restore(ctx context.Context, opt RestoreOptions) error {
func RestoreReplica(ctx context.Context, r Replica, opt RestoreOptions) error {
	// Validate options.
	if opt.Generation == "" && opt.Index != math.MaxInt64 {
	if opt.OutputPath == "" {
		return fmt.Errorf("output path required")
	} else if opt.Generation == "" && opt.Index != math.MaxInt64 {
		return fmt.Errorf("must specify generation when restoring to index")
	} else if opt.Index != math.MaxInt64 && !opt.Timestamp.IsZero() {
		return fmt.Errorf("cannot specify index & timestamp to restore")
@@ -1363,69 +1438,65 @@ func (db *DB) Restore(ctx context.Context, opt RestoreOptions) error {
		logger = log.New(ioutil.Discard, "", 0)
	}

	// Determine the correct output path.
	outputPath := opt.OutputPath
	if outputPath == "" {
		outputPath = db.Path()
	logPrefix := r.Name()
	if db := r.DB(); db != nil {
		logPrefix = fmt.Sprintf("%s(%s)", db.Path(), r.Name())
	}

	// Ensure output path does not already exist (unless this is a dry run).
	if !opt.DryRun {
		if _, err := os.Stat(outputPath); err == nil {
			return fmt.Errorf("cannot restore, output path already exists: %s", outputPath)
		if _, err := os.Stat(opt.OutputPath); err == nil {
			return fmt.Errorf("cannot restore, output path already exists: %s", opt.OutputPath)
		} else if err != nil && !os.IsNotExist(err) {
			return err
		}
	}

	// Determine target replica & generation to restore from.
	r, generation, err := db.restoreTarget(ctx, opt, logger)
	if err != nil {
		return err
	}

	// Find latest snapshot that occurs before timestamp.
	minWALIndex, err := SnapshotIndexAt(ctx, r, generation, opt.Timestamp)
	minWALIndex, err := SnapshotIndexAt(ctx, r, opt.Generation, opt.Timestamp)
	if err != nil {
		return fmt.Errorf("cannot find snapshot index for restore: %w", err)
	}

	// Find the maximum WAL index that occurs before timestamp.
	maxWALIndex, err := WALIndexAt(ctx, r, generation, opt.Index, opt.Timestamp)
	maxWALIndex, err := WALIndexAt(ctx, r, opt.Generation, opt.Index, opt.Timestamp)
	if err != nil {
		return fmt.Errorf("cannot find max wal index for restore: %w", err)
	}
	log.Printf("%s(%s): starting restore: generation %08x, index %08x-%08x", db.path, r.Name(), generation, minWALIndex, maxWALIndex)
	logger.Printf("%s: starting restore: generation %s, index %08x-%08x", logPrefix, opt.Generation, minWALIndex, maxWALIndex)

	// Initialize starting position.
	pos := Pos{Generation: generation, Index: minWALIndex}
	tmpPath := outputPath + ".tmp"
	pos := Pos{Generation: opt.Generation, Index: minWALIndex}
	tmpPath := opt.OutputPath + ".tmp"

	// Copy snapshot to output path.
	logger.Printf("%s: restoring snapshot %s/%08x to %s", logPrefix, opt.Generation, minWALIndex, tmpPath)
	if !opt.DryRun {
		if err := db.restoreSnapshot(ctx, r, pos.Generation, pos.Index, tmpPath); err != nil {
		if err := restoreSnapshot(ctx, r, pos.Generation, pos.Index, tmpPath); err != nil {
			return fmt.Errorf("cannot restore snapshot: %w", err)
		}
	}
	log.Printf("%s(%s): restoring snapshot %s/%08x to %s", db.path, r.Name(), generation, minWALIndex, tmpPath)

	// Restore each WAL file until we reach our maximum index.
	for index := minWALIndex; index <= maxWALIndex; index++ {
		if !opt.DryRun {
			if err = db.restoreWAL(ctx, r, generation, index, tmpPath); os.IsNotExist(err) && index == minWALIndex && index == maxWALIndex {
				log.Printf("%s(%s): no wal available, snapshot only", db.path, r.Name())
			if err = restoreWAL(ctx, r, opt.Generation, index, tmpPath); os.IsNotExist(err) && index == minWALIndex && index == maxWALIndex {
				logger.Printf("%s: no wal available, snapshot only", logPrefix)
				break // snapshot file only, ignore error
			} else if err != nil {
				return fmt.Errorf("cannot restore wal: %w", err)
			}
		}
		log.Printf("%s(%s): restored wal %s/%08x", db.path, r.Name(), generation, index)

		if opt.Verbose {
			logger.Printf("%s: restored wal %s/%08x", logPrefix, opt.Generation, index)
		}
	}

	// Copy file to final location.
	log.Printf("%s(%s): renaming database from temporary location", db.path, r.Name())
	logger.Printf("%s: renaming database from temporary location", logPrefix)
	if !opt.DryRun {
		if err := os.Rename(tmpPath, outputPath); err != nil {
		if err := os.Rename(tmpPath, opt.OutputPath); err != nil {
			return err
		}
	}
@@ -1433,21 +1504,8 @@ func (db *DB) Restore(ctx context.Context, opt RestoreOptions) error {
	return nil
}

func checksumFile(filename string) (uint64, error) {
	f, err := os.Open(filename)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	h := crc64.New(crc64.MakeTable(crc64.ISO))
	if _, err := io.Copy(h, f); err != nil {
		return 0, err
	}
	return h.Sum64(), nil
}

func (db *DB) restoreTarget(ctx context.Context, opt RestoreOptions, logger *log.Logger) (Replica, string, error) {
// CalcRestoreTarget returns a replica & generation to restore from based on opt criteria.
func (db *DB) CalcRestoreTarget(ctx context.Context, opt RestoreOptions) (Replica, string, error) {
	var target struct {
		replica    Replica
		generation string
@@ -1460,9 +1518,31 @@ func (db *DB) restoreTarget(ctx context.Context, opt RestoreOptions, logger *log
			continue
		}

		generation, stats, err := CalcReplicaRestoreTarget(ctx, r, opt)
		if err != nil {
			return nil, "", err
		}

		// Use the latest replica if we have multiple candidates.
		if !stats.UpdatedAt.After(target.stats.UpdatedAt) {
			continue
		}

		target.replica, target.generation, target.stats = r, generation, stats
	}
	return target.replica, target.generation, nil
}
|
||||
|
||||
// CalcReplicaRestoreTarget returns a generation to restore from.
|
||||
func CalcReplicaRestoreTarget(ctx context.Context, r Replica, opt RestoreOptions) (generation string, stats GenerationStats, err error) {
|
||||
var target struct {
|
||||
generation string
|
||||
stats GenerationStats
|
||||
}
|
||||
|
||||
generations, err := r.Generations(ctx)
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("cannot fetch generations: %w", err)
|
||||
return "", stats, fmt.Errorf("cannot fetch generations: %w", err)
|
||||
}
|
||||
|
||||
// Search generations for one that contains the requested timestamp.
|
||||
@@ -1475,7 +1555,7 @@ func (db *DB) restoreTarget(ctx context.Context, opt RestoreOptions, logger *log
|
||||
// Fetch stats for generation.
|
||||
stats, err := r.GenerationStats(ctx, generation)
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("cannot determine stats for generation (%s/%s): %s", r.Name(), generation, err)
|
||||
return "", stats, fmt.Errorf("cannot determine stats for generation (%s/%s): %s", r.Name(), generation, err)
|
||||
}
|
||||
|
||||
// Skip if it does not contain timestamp.
|
||||
@@ -1490,27 +1570,28 @@ func (db *DB) restoreTarget(ctx context.Context, opt RestoreOptions, logger *log
|
||||
continue
|
||||
}
|
||||
|
||||
target.replica = r
|
||||
target.generation = generation
|
||||
target.stats = stats
|
||||
}
|
||||
}
|
||||
|
||||
// Return an error if no matching targets found.
|
||||
if target.generation == "" {
|
||||
return nil, "", fmt.Errorf("no matching backups found")
|
||||
}
|
||||
|
||||
return target.replica, target.generation, nil
|
||||
return target.generation, target.stats, nil
|
||||
}

// restoreSnapshot copies a snapshot from the replica to a file.
-func (db *DB) restoreSnapshot(ctx context.Context, r Replica, generation string, index int, filename string) error {
-	if err := mkdirAll(filepath.Dir(filename), db.dirmode, db.diruid, db.dirgid); err != nil {
+func restoreSnapshot(ctx context.Context, r Replica, generation string, index int, filename string) error {
+	// Determine the user/group & mode based on the DB, if available.
+	uid, gid, mode := -1, -1, os.FileMode(0600)
+	diruid, dirgid, dirmode := -1, -1, os.FileMode(0700)
+	if db := r.DB(); db != nil {
+		uid, gid, mode = db.uid, db.gid, db.mode
+		diruid, dirgid, dirmode = db.diruid, db.dirgid, db.dirmode
+	}
+
+	if err := mkdirAll(filepath.Dir(filename), dirmode, diruid, dirgid); err != nil {
		return err
	}

-	f, err := createFile(filename, db.uid, db.gid)
+	f, err := createFile(filename, mode, uid, gid)
	if err != nil {
		return err
	}
@@ -1533,7 +1614,13 @@ func (db *DB) restoreSnapshot(ctx context.Context, r Replica, generation string,
}

// restoreWAL copies a WAL file from the replica to the local WAL and forces checkpoint.
-func (db *DB) restoreWAL(ctx context.Context, r Replica, generation string, index int, dbPath string) error {
+func restoreWAL(ctx context.Context, r Replica, generation string, index int, dbPath string) error {
+	// Determine the user/group & mode based on the DB, if available.
+	uid, gid, mode := -1, -1, os.FileMode(0600)
+	if db := r.DB(); db != nil {
+		uid, gid, mode = db.uid, db.gid, db.mode
+	}
+
	// Open WAL file from replica.
	rd, err := r.WALReader(ctx, generation, index)
	if err != nil {
@@ -1542,7 +1629,7 @@ func (db *DB) restoreWAL(ctx context.Context, r Replica, generation string, inde
	defer rd.Close()

	// Open handle to destination WAL path.
-	f, err := createFile(dbPath+"-wal", db.uid, db.gid)
+	f, err := createFile(dbPath+"-wal", mode, uid, gid)
	if err != nil {
		return err
	}
@@ -1562,13 +1649,14 @@ func (db *DB) restoreWAL(ctx context.Context, r Replica, generation string, inde
	}
	defer d.Close()

-	if _, err := d.Exec(`PRAGMA wal_checkpoint(TRUNCATE);`); err != nil {
		return err
-	} else if err := d.Close(); err != nil {
+	var row [3]int
+	if err := d.QueryRow(`PRAGMA wal_checkpoint(TRUNCATE);`).Scan(&row[0], &row[1], &row[2]); err != nil {
		return err
+	} else if row[0] != 0 {
+		return fmt.Errorf("truncation checkpoint failed during restore (%d,%d,%d)", row[0], row[1], row[2])
	}

-	return nil
+	return d.Close()
}

// CRC64 returns a CRC-64 ISO checksum of the database and its current position.
@@ -1576,6 +1664,8 @@ func (db *DB) restoreWAL(ctx context.Context, r Replica, generation string, inde
// This function obtains a read lock so it prevents syncs from occurring until
// the operation is complete. The database will still be usable but it will be
// unable to checkpoint during this time.
//
// If dst is set, the database file is copied to that location before checksum.
func (db *DB) CRC64() (uint64, Pos, error) {
	db.mu.Lock()
	defer db.mu.Unlock()
@@ -1606,11 +1696,14 @@ func (db *DB) CRC64() (uint64, Pos, error) {
	}
	pos.Offset = 0

-	chksum, err := checksumFile(db.Path())
-	if err != nil {
+	// Seek to the beginning of the db file descriptor and checksum whole file.
+	h := crc64.New(crc64.MakeTable(crc64.ISO))
+	if _, err := db.f.Seek(0, io.SeekStart); err != nil {
+		return 0, pos, err
+	} else if _, err := io.Copy(h, db.f); err != nil {
		return 0, pos, err
	}

-	return chksum, pos, nil
+	return h.Sum64(), pos, nil
}
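The rewritten `CRC64` above seeks the long-lived file descriptor back to the start and streams the whole database through a `hash/crc64` digest instead of calling `checksumFile`, which avoids re-opening the file and disturbing POSIX locks. A minimal standalone sketch of the same digest (the `crc64ISO` helper name is illustrative, not part of Litestream):

```go
package main

import (
	"fmt"
	"hash/crc64"
)

// crc64ISO computes the same CRC-64 ISO digest that DB.CRC64 streams
// the database file through via io.Copy.
func crc64ISO(b []byte) uint64 {
	h := crc64.New(crc64.MakeTable(crc64.ISO))
	h.Write(b) // hash.Hash.Write never returns an error
	return h.Sum64()
}

func main() {
	fmt.Printf("%016x\n", crc64ISO([]byte("hello, litestream")))
}
```

The same table and polynomial are used on both the primary and the restored replica, so the two sums are directly comparable.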

// RestoreOptions represents options for DB.Restore().
@@ -1639,8 +1732,9 @@ type RestoreOptions struct {
	// Only equivalent log output for a regular restore.
	DryRun bool

-	// Logger used to print status to.
+	// Logging settings.
	Logger *log.Logger
+	Verbose bool
}

// NewRestoreOptions returns a new instance of RestoreOptions with defaults.
17	etc/build.ps1	Normal file
@@ -0,0 +1,17 @@
[CmdletBinding()]
Param (
    [Parameter(Mandatory = $true)]
    [String] $Version
)
$ErrorActionPreference = "Stop"

# Update working directory.
Push-Location $PSScriptRoot
Trap {
    Pop-Location
}

Invoke-Expression "candle.exe -nologo -arch x64 -ext WixUtilExtension -out litestream.wixobj -dVersion=`"$Version`" litestream.wxs"
Invoke-Expression "light.exe -nologo -spdb -ext WixUtilExtension -out `"litestream-${Version}.msi`" litestream.wixobj"

Pop-Location

15	etc/gon.hcl	Normal file
@@ -0,0 +1,15 @@
source = ["./dist/litestream"]
bundle_id = "com.middlemost.litestream"

apple_id {
  username = "benbjohnson@yahoo.com"
  password = "@env:AC_PASSWORD"
}

sign {
  application_identity = "Developer ID Application: Middlemost Systems, LLC"
}

zip {
  output_path = "dist/litestream.zip"
}
89	etc/litestream.wxs	Normal file
@@ -0,0 +1,89 @@
<?xml version="1.0" encoding="utf-8"?>
<Wix
  xmlns="http://schemas.microsoft.com/wix/2006/wi"
  xmlns:util="http://schemas.microsoft.com/wix/UtilExtension"
>
  <?if $(sys.BUILDARCH)=x64 ?>
    <?define PlatformProgramFiles = "ProgramFiles64Folder" ?>
  <?else ?>
    <?define PlatformProgramFiles = "ProgramFilesFolder" ?>
  <?endif ?>

  <Product
    Id="*"
    UpgradeCode="5371367e-58b3-4e52-be0d-46945eb71ce6"
    Name="Litestream"
    Version="$(var.Version)"
    Manufacturer="Litestream"
    Language="1033"
    Codepage="1252"
  >
    <Package
      Id="*"
      Manufacturer="Litestream"
      InstallScope="perMachine"
      InstallerVersion="500"
      Description="Litestream $(var.Version) installer"
      Compressed="yes"
    />

    <Media Id="1" Cabinet="litestream.cab" EmbedCab="yes"/>

    <MajorUpgrade
      Schedule="afterInstallInitialize"
      DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit."
    />

    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="$(var.PlatformProgramFiles)">
        <Directory Id="APPLICATIONROOTDIRECTORY" Name="Litestream"/>
      </Directory>
    </Directory>

    <ComponentGroup Id="Files">
      <Component Directory="APPLICATIONROOTDIRECTORY">
        <File
          Id="litestream.exe"
          Name="litestream.exe"
          Source="litestream.exe"
          KeyPath="yes"
        />

        <ServiceInstall
          Id="InstallService"
          Name="Litestream"
          DisplayName="Litestream"
          Description="Replicates SQLite databases"
          ErrorControl="normal"
          Start="auto"
          Type="ownProcess"
        >
          <util:ServiceConfig
            FirstFailureActionType="restart"
            SecondFailureActionType="restart"
            ThirdFailureActionType="restart"
            RestartServiceDelayInSeconds="60"
          />
          <ServiceDependency Id="wmiApSrv" />
        </ServiceInstall>

        <ServiceControl
          Id="ServiceStateControl"
          Name="Litestream"
          Remove="uninstall"
          Start="install"
          Stop="both"
        />
        <util:EventSource
          Log="Application"
          Name="Litestream"
          EventMessageFile="%SystemRoot%\System32\EventCreate.exe"
        />
      </Component>
    </ComponentGroup>

    <Feature Id="DefaultFeature" Level="1">
      <ComponentGroupRef Id="Files" />
    </Feature>
  </Product>
</Wix>
@@ -6,5 +6,5 @@
# - path: /path/to/primary/db # Database to replicate from
#   replicas:
#     - path: /path/to/replica # File-based replication
-#     - path: s3://my.bucket.com/db # S3-based replication
+#     - url: s3://my.bucket.com/db # S3-based replication

1	go.mod
@@ -8,5 +8,6 @@ require (
	github.com/mattn/go-sqlite3 v1.14.5
	github.com/pierrec/lz4/v4 v4.1.3
	github.com/prometheus/client_golang v1.9.0
+	golang.org/x/sys v0.0.0-20201214210602-f9fddec55a1e
	gopkg.in/yaml.v2 v2.4.0
)
@@ -36,6 +36,7 @@ const (

// Litestream errors.
var (
	ErrNoGeneration     = errors.New("no generation available")
+	ErrNoSnapshots      = errors.New("no snapshots available")
	ErrChecksumMismatch = errors.New("invalid replica, checksum mismatch")
)

@@ -95,9 +96,9 @@ type Pos struct {
// String returns a string representation.
func (p Pos) String() string {
	if p.IsZero() {
-		return "<>"
+		return ""
	}
-	return fmt.Sprintf("<%s,%08x,%d>", p.Generation, p.Index, p.Offset)
+	return fmt.Sprintf("%s/%08x:%d", p.Generation, p.Index, p.Offset)
}
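The new `Pos.String` drops the angle-bracket form for a `generation/index:offset` layout and renders the zero value as an empty string instead of `<>`. A tiny sketch of the formatting (the `formatPos` helper name is illustrative):

```go
package main

import "fmt"

// formatPos mirrors the new Pos.String() layout: generation, then the WAL
// index as zero-padded hex, then the byte offset ("generation/%08x:%d").
func formatPos(generation string, index int, offset int64) string {
	if generation == "" {
		return "" // the zero position now renders as an empty string, not "<>"
	}
	return fmt.Sprintf("%s/%08x:%d", generation, index, offset)
}

func main() {
	fmt.Println(formatPos("b87synjcsbhbk0dz", 1, 4152))
}
```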

// IsZero returns true if p is the zero value.
@@ -151,8 +152,9 @@ func readWALHeader(filename string) ([]byte, error) {
	return buf[:n], err
}

-// readFileAt reads a slice from a file.
-func readFileAt(filename string, offset, n int64) ([]byte, error) {
+// readWALFileAt reads a slice from a file. Do not use this with database files
+// as it causes problems with non-OFD locks.
+func readWALFileAt(filename string, offset, n int64) ([]byte, error) {
	f, err := os.Open(filename)
	if err != nil {
		return nil, err
@@ -257,8 +259,8 @@ func isHexChar(ch rune) bool {
}

// createFile creates the file and attempts to set the UID/GID.
-func createFile(filename string, uid, gid int) (*os.File, error) {
-	f, err := os.Create(filename)
+func createFile(filename string, perm os.FileMode, uid, gid int) (*os.File, error) {
+	f, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		return nil, err
	}
@@ -4,7 +4,6 @@ package litestream

import (
	"os"
-	"syscall"
)

// fileinfo returns syscall fields from a FileInfo object.
@@ -15,7 +14,7 @@ func fileinfo(fi os.FileInfo) (uid, gid int) {
// fixRootDirectory is copied from the standard library for use with mkdirAll()
func fixRootDirectory(p string) string {
	if len(p) == len(`\\?\c:`) {
-		if IsPathSeparator(p[0]) && IsPathSeparator(p[1]) && p[2] == '?' && IsPathSeparator(p[3]) && p[5] == ':' {
+		if os.IsPathSeparator(p[0]) && os.IsPathSeparator(p[1]) && p[2] == '?' && os.IsPathSeparator(p[3]) && p[5] == ':' {
			return p + `\`
		}
	}
293	replica.go
@@ -2,7 +2,9 @@ package litestream

import (
	"context"
+	"encoding/binary"
	"fmt"
+	"hash/crc64"
	"io"
	"io/ioutil"
	"log"
@@ -30,10 +32,10 @@ type Replica interface {
	DB() *DB

	// Starts replicating in a background goroutine.
-	Start(ctx context.Context)
+	Start(ctx context.Context) error

	// Stops all replication processing. Blocks until processing stopped.
-	Stop()
+	Stop(hard bool) error

	// Returns the last replication position.
	LastPos() Pos
@@ -89,6 +91,9 @@ type FileReplica struct {
	mu  sync.RWMutex
	pos Pos // last position

+	muf sync.Mutex
+	f   *os.File // long-running file descriptor to avoid non-OFD lock issues

	wg     sync.WaitGroup
	cancel func()

@@ -97,8 +102,11 @@ type FileReplica struct {
	walIndexGauge  prometheus.Gauge
	walOffsetGauge prometheus.Gauge

+	// Frequency to create new snapshots.
+	SnapshotInterval time.Duration

	// Time to keep snapshots and related WAL files.
-	// Database is snapshotted after interval and older WAL files are discarded.
+	// Database is snapshotted after interval, if needed, and older WAL files are discarded.
	Retention time.Duration

	// Time between checks for retention.
@@ -125,10 +133,14 @@ func NewFileReplica(db *DB, name, dst string) *FileReplica {
		MonitorEnabled: true,
	}

-	r.snapshotTotalGauge = internal.ReplicaSnapshotTotalGaugeVec.WithLabelValues(db.path, r.Name())
-	r.walBytesCounter = internal.ReplicaWALBytesCounterVec.WithLabelValues(db.path, r.Name())
-	r.walIndexGauge = internal.ReplicaWALIndexGaugeVec.WithLabelValues(db.path, r.Name())
-	r.walOffsetGauge = internal.ReplicaWALOffsetGaugeVec.WithLabelValues(db.path, r.Name())
+	var dbPath string
+	if db != nil {
+		dbPath = db.Path()
+	}
+	r.snapshotTotalGauge = internal.ReplicaSnapshotTotalGaugeVec.WithLabelValues(dbPath, r.Name())
+	r.walBytesCounter = internal.ReplicaWALBytesCounterVec.WithLabelValues(dbPath, r.Name())
+	r.walIndexGauge = internal.ReplicaWALIndexGaugeVec.WithLabelValues(dbPath, r.Name())
+	r.walOffsetGauge = internal.ReplicaWALOffsetGaugeVec.WithLabelValues(dbPath, r.Name())

	return r
}
@@ -384,36 +396,52 @@ func (r *FileReplica) WALs(ctx context.Context) ([]*WALInfo, error) {
}

// Start starts replication for a given generation.
-func (r *FileReplica) Start(ctx context.Context) {
+func (r *FileReplica) Start(ctx context.Context) (err error) {
	// Ignore if replica is being used synchronously.
	if !r.MonitorEnabled {
-		return
+		return nil
	}

	// Stop previous replication.
-	r.Stop()
+	r.Stop(false)

	// Wrap context with cancelation.
	ctx, r.cancel = context.WithCancel(ctx)

	// Start goroutine to replicate data.
-	r.wg.Add(3)
+	r.wg.Add(4)
	go func() { defer r.wg.Done(); r.monitor(ctx) }()
	go func() { defer r.wg.Done(); r.retainer(ctx) }()
+	go func() { defer r.wg.Done(); r.snapshotter(ctx) }()
	go func() { defer r.wg.Done(); r.validator(ctx) }()

+	return nil
}

// Stop cancels any outstanding replication and blocks until finished.
+//
+// Performing a hard stop will close the DB file descriptor, which can release
+// per-process file locks. Hard stops should only be performed when stopping
+// the entire process.
-func (r *FileReplica) Stop() {
+func (r *FileReplica) Stop(hard bool) (err error) {
	r.cancel()
	r.wg.Wait()

+	r.muf.Lock()
+	defer r.muf.Unlock()
+	if hard && r.f != nil {
+		if e := r.f.Close(); e != nil && err == nil {
+			err = e
+		}
+	}
+	return err
}
// monitor runs in a separate goroutine and continuously replicates the DB.
func (r *FileReplica) monitor(ctx context.Context) {
	// Clear old temporary files that may have been left from a crash.
	if err := removeTmpFiles(r.dst); err != nil {
-		log.Printf("%s(%s): cannot remove tmp files: %s", r.db.Path(), r.Name(), err)
+		log.Printf("%s(%s): monitor: cannot remove tmp files: %s", r.db.Path(), r.Name(), err)
	}

	// Continuously check for new data to replicate.
@@ -433,7 +461,7 @@ func (r *FileReplica) monitor(ctx context.Context) {

	// Synchronize the shadow wal into the replication directory.
	if err := r.Sync(ctx); err != nil {
-		log.Printf("%s(%s): sync error: %s", r.db.Path(), r.Name(), err)
+		log.Printf("%s(%s): monitor error: %s", r.db.Path(), r.Name(), err)
		continue
	}
}
@@ -441,7 +469,18 @@ func (r *FileReplica) monitor(ctx context.Context) {

// retainer runs in a separate goroutine and handles retention.
func (r *FileReplica) retainer(ctx context.Context) {
-	ticker := time.NewTicker(r.RetentionCheckInterval)
+	// Disable retention enforcement if retention period is non-positive.
+	if r.Retention <= 0 {
+		return
+	}
+
+	// Ensure check interval is not longer than retention period.
+	checkInterval := r.RetentionCheckInterval
+	if checkInterval > r.Retention {
+		checkInterval = r.Retention
+	}
+
+	ticker := time.NewTicker(checkInterval)
	defer ticker.Stop()

	for {
@@ -450,7 +489,29 @@ func (r *FileReplica) retainer(ctx context.Context) {
			return
		case <-ticker.C:
			if err := r.EnforceRetention(ctx); err != nil {
-				log.Printf("%s(%s): retain error: %s", r.db.Path(), r.Name(), err)
+				log.Printf("%s(%s): retainer error: %s", r.db.Path(), r.Name(), err)
				continue
			}
		}
	}
}

+// snapshotter runs in a separate goroutine and handles snapshotting.
+func (r *FileReplica) snapshotter(ctx context.Context) {
+	if r.SnapshotInterval <= 0 {
+		return
+	}
+
+	ticker := time.NewTicker(r.SnapshotInterval)
+	defer ticker.Stop()
+
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-ticker.C:
+			if err := r.Snapshot(ctx); err != nil && err != ErrNoGeneration {
+				log.Printf("%s(%s): snapshotter error: %s", r.db.Path(), r.Name(), err)
+				continue
+			}
+		}
@@ -526,8 +587,23 @@ func (r *FileReplica) CalcPos(ctx context.Context, generation string) (pos Pos,
	return pos, nil
}

+// Snapshot copies the entire database to the replica path.
+func (r *FileReplica) Snapshot(ctx context.Context) error {
+	// Find current position of database.
+	pos, err := r.db.Pos()
+	if err != nil {
+		return fmt.Errorf("cannot determine current db generation: %w", err)
+	} else if pos.IsZero() {
+		return ErrNoGeneration
+	}
+	return r.snapshot(ctx, pos.Generation, pos.Index)
+}

// snapshot copies the entire database to the replica path.
func (r *FileReplica) snapshot(ctx context.Context, generation string, index int) error {
+	r.muf.Lock()
+	defer r.muf.Unlock()

	// Acquire a read lock on the database during snapshot to prevent checkpoints.
	tx, err := r.db.db.Begin()
	if err != nil {
@@ -544,11 +620,55 @@ func (r *FileReplica) snapshot(ctx context.Context, generation string, index int
		return nil
	}

+	startTime := time.Now()

	if err := mkdirAll(filepath.Dir(snapshotPath), r.db.dirmode, r.db.diruid, r.db.dirgid); err != nil {
		return err
	}

-	return compressFile(r.db.Path(), snapshotPath, r.db.uid, r.db.gid)
+	// Open db file descriptor, if not already open.
+	if r.f == nil {
+		if r.f, err = os.Open(r.db.Path()); err != nil {
+			return err
+		}
+	}
+
+	if _, err := r.f.Seek(0, io.SeekStart); err != nil {
+		return err
+	}
+
+	fi, err := r.f.Stat()
+	if err != nil {
+		return err
+	}
+
+	w, err := createFile(snapshotPath+".tmp", fi.Mode(), r.db.uid, r.db.gid)
+	if err != nil {
+		return err
+	}
+	defer w.Close()
+
+	zr := lz4.NewWriter(w)
+	defer zr.Close()
+
+	// Copy & compress file contents to temporary file.
+	if _, err := io.Copy(zr, r.f); err != nil {
+		return err
+	} else if err := zr.Close(); err != nil {
+		return err
+	} else if err := w.Sync(); err != nil {
+		return err
+	} else if err := w.Close(); err != nil {
+		return err
+	}
+
+	// Move compressed file to final location.
+	if err := os.Rename(snapshotPath+".tmp", snapshotPath); err != nil {
+		return err
+	}
+
+	log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime).Truncate(time.Millisecond))
+	return nil
}
// snapshotN returns the number of snapshots for a generation.
@@ -589,6 +709,8 @@ func (r *FileReplica) Sync(ctx context.Context) (err error) {
	}
	generation := dpos.Generation

+	Tracef("%s(%s): replica sync: db.pos=%s", r.db.Path(), r.Name(), dpos)

	// Create snapshot if no snapshots exist for generation.
	if n, err := r.snapshotN(generation); err != nil {
		return err
@@ -608,6 +730,7 @@ func (r *FileReplica) Sync(ctx context.Context) (err error) {
		return fmt.Errorf("cannot determine replica position: %s", err)
	}

+	Tracef("%s(%s): replica sync: calc new pos: %s", r.db.Path(), r.Name(), pos)
	r.mu.Lock()
	r.pos = pos
	r.mu.Unlock()
@@ -660,11 +783,48 @@ func (r *FileReplica) syncWAL(ctx context.Context) (err error) {
		return err
	}

-	n, err := io.Copy(w, rd)
-	r.walBytesCounter.Add(float64(n))
+	// Copy header if at offset zero.
+	var psalt uint64 // previous salt value
+	if pos := rd.Pos(); pos.Offset == 0 {
+		buf := make([]byte, WALHeaderSize)
+		if _, err := io.ReadFull(rd, buf); err != nil {
+			return err
+		}
+
+		psalt = binary.BigEndian.Uint64(buf[16:24])
+
+		n, err := w.Write(buf)
+		if err != nil {
+			return err
+		}
+		r.walBytesCounter.Add(float64(n))
+	}
+
+	// Copy frames.
+	for {
+		pos := rd.Pos()
+		assert(pos.Offset == frameAlign(pos.Offset, r.db.pageSize), "shadow wal reader not frame aligned")
+
+		buf := make([]byte, WALFrameHeaderSize+r.db.pageSize)
+		if _, err := io.ReadFull(rd, buf); err == io.EOF {
+			break
+		} else if err != nil {
+			return err
+		}
+
+		// Verify salt matches the previous frame/header read.
+		salt := binary.BigEndian.Uint64(buf[8:16])
+		if psalt != 0 && psalt != salt {
+			return fmt.Errorf("replica salt mismatch: %s", filepath.Base(filename))
+		}
+		psalt = salt
+
+		n, err := w.Write(buf)
+		if err != nil {
+			return err
+		}
+		r.walBytesCounter.Add(float64(n))
+	}

	if err := w.Sync(); err != nil {
		return err
@@ -706,7 +866,7 @@ func (r *FileReplica) compress(ctx context.Context, generation string) error {
	}

	dst := filename + ".lz4"
-	if err := compressFile(filename, dst, r.db.uid, r.db.gid); err != nil {
+	if err := compressWALFile(filename, dst, r.db.uid, r.db.gid); err != nil {
		return err
	} else if err := os.Remove(filename); err != nil {
		return err
@@ -792,7 +952,6 @@ func (r *FileReplica) EnforceRetention(ctx context.Context) (err error) {

	// If no retained snapshots exist, create a new snapshot.
	if len(snapshots) == 0 {
-		log.Printf("%s(%s): snapshots exceeds retention, creating new snapshot", r.db.Path(), r.Name())
		if err := r.snapshot(ctx, pos.Generation, pos.Index); err != nil {
			return fmt.Errorf("cannot snapshot: %w", err)
		}
@@ -810,7 +969,7 @@ func (r *FileReplica) EnforceRetention(ctx context.Context) (err error) {

	// Delete generation if it has no snapshots being retained.
	if snapshot == nil {
-		log.Printf("%s(%s): generation %q has no retained snapshots, deleting", r.db.Path(), r.Name(), generation)
+		log.Printf("%s(%s): retainer: deleting generation %q has no retained snapshots, deleting", r.db.Path(), r.Name(), generation)
		if err := os.RemoveAll(r.GenerationDir(generation)); err != nil {
			return fmt.Errorf("cannot delete generation %q dir: %w", generation, err)
		}
@@ -839,6 +998,7 @@ func (r *FileReplica) deleteGenerationSnapshotsBefore(ctx context.Context, gener
		return err
	}

+	var n int
	for _, fi := range fis {
		idx, _, err := ParseSnapshotPath(fi.Name())
		if err != nil {
@@ -847,10 +1007,13 @@ func (r *FileReplica) deleteGenerationSnapshotsBefore(ctx context.Context, gener
			continue
		}

-		log.Printf("%s(%s): retention exceeded, deleting from generation %q: %s", r.db.Path(), r.Name(), generation, fi.Name())
		if err := os.Remove(filepath.Join(dir, fi.Name())); err != nil {
			return err
		}
+		n++
	}
+	if n > 0 {
+		log.Printf("%s(%s): retainer: deleting snapshots before %s/%08x; n=%d", r.db.Path(), r.Name(), generation, index, n)
+	}

	return nil
@@ -867,6 +1030,7 @@ func (r *FileReplica) deleteGenerationWALBefore(ctx context.Context, generation
		return err
	}

+	var n int
	for _, fi := range fis {
		idx, _, _, err := ParseWALPath(fi.Name())
		if err != nil {
@@ -875,10 +1039,13 @@ func (r *FileReplica) deleteGenerationWALBefore(ctx context.Context, generation
			continue
		}

-		log.Printf("%s(%s): generation %q wal no longer retained, deleting %s", r.db.Path(), r.Name(), generation, fi.Name())
		if err := os.Remove(filepath.Join(dir, fi.Name())); err != nil {
			return err
		}
+		n++
	}
+	if n > 0 {
+		log.Printf("%s(%s): retainer: deleting wal files before %s/%08x n=%d", r.db.Path(), r.Name(), generation, index, n)
+	}

	return nil
@@ -923,6 +1090,10 @@ func WALIndexAt(ctx context.Context, r Replica, generation string, maxIndex int,

	var index int
	for _, wal := range wals {
+		if wal.Generation != generation {
+			continue
+		}

		if !timestamp.IsZero() && wal.CreatedAt.After(timestamp) {
			continue // after timestamp, skip
		} else if wal.Index > maxIndex {
@@ -941,15 +1112,21 @@ func WALIndexAt(ctx context.Context, r Replica, generation string, maxIndex int,
	return index, nil
}

-// compressFile compresses a file and replaces it with a new file with a .lz4 extension.
-func compressFile(src, dst string, uid, gid int) error {
+// compressWALFile compresses a file and replaces it with a new file with a .lz4 extension.
+// Do not use this on database files because of issues with non-OFD locks.
+func compressWALFile(src, dst string, uid, gid int) error {
	r, err := os.Open(src)
	if err != nil {
		return err
	}
	defer r.Close()

-	w, err := createFile(dst+".tmp", uid, gid)
+	fi, err := r.Stat()
+	if err != nil {
+		return err
+	}
+
+	w, err := createFile(dst+".tmp", fi.Mode(), uid, gid)
	if err != nil {
		return err
	}
@@ -978,20 +1155,6 @@ func compressWALFile(src, dst string, uid, gid int) error {
func ValidateReplica(ctx context.Context, r Replica) error {
	db := r.DB()

-	// Compute checksum of primary database under lock. This prevents a
-	// sync from occurring and the database will not be written.
-	chksum0, pos, err := db.CRC64()
-	if err != nil {
-		return fmt.Errorf("cannot compute checksum: %w", err)
-	}
-	log.Printf("%s(%s): primary checksum computed: %016x @ %s", db.Path(), r.Name(), chksum0, pos)
-
-	// Wait until replica catches up to position.
-	log.Printf("%s(%s): waiting for replica", db.Path(), r.Name())
-	if err := waitForReplica(ctx, r, pos); err != nil {
-		return fmt.Errorf("cannot wait for replica: %w", err)
-	}

	// Restore replica to a temporary directory.
	tmpdir, err := ioutil.TempDir("", "*-litestream")
	if err != nil {
@@ -999,8 +1162,20 @@ func ValidateReplica(ctx context.Context, r Replica) error {
	}
	defer os.RemoveAll(tmpdir)

-	restorePath := filepath.Join(tmpdir, "db")
-	if err := db.Restore(ctx, RestoreOptions{
+	// Compute checksum of primary database under lock. This prevents a
+	// sync from occurring and the database will not be written.
+	chksum0, pos, err := db.CRC64()
+	if err != nil {
+		return fmt.Errorf("cannot compute checksum: %w", err)
+	}
+
+	// Wait until replica catches up to position.
+	if err := waitForReplica(ctx, r, pos); err != nil {
+		return fmt.Errorf("cannot wait for replica: %w", err)
+	}
+
+	restorePath := filepath.Join(tmpdir, "replica")
+	if err := RestoreReplica(ctx, r, RestoreOptions{
		OutputPath:  restorePath,
		ReplicaName: r.Name(),
		Generation:  pos.Generation,
@@ -1011,22 +1186,38 @@ func ValidateReplica(ctx context.Context, r Replica) error {
	}

	// Open file handle for restored database.
-	chksum1, err := checksumFile(restorePath)
+	// NOTE: This open is ok as the restored database is not managed by litestream.
+	f, err := os.Open(restorePath)
	if err != nil {
		return err
	}
+	defer f.Close()

-	log.Printf("%s(%s): restore complete, replica checksum=%016x", db.Path(), r.Name(), chksum1)
+	// Read entire file into checksum.
+	h := crc64.New(crc64.MakeTable(crc64.ISO))
+	if _, err := io.Copy(h, f); err != nil {
+		return err
+	}
+	chksum1 := h.Sum64()

+	status := "ok"
+	mismatch := chksum0 != chksum1
+	if mismatch {
+		status = "mismatch"
+	}
+	log.Printf("%s(%s): validator: status=%s db=%016x replica=%016x pos=%s", db.Path(), r.Name(), status, chksum0, chksum1, pos)

	// Validate checksums match.
-	if chksum0 != chksum1 {
+	if mismatch {
		internal.ReplicaValidationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "error").Inc()
		return ErrChecksumMismatch
	}

	internal.ReplicaValidationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "ok").Inc()
-	log.Printf("%s(%s): replica ok", db.Path(), r.Name())

+	if err := os.RemoveAll(tmpdir); err != nil {
+		return fmt.Errorf("cannot remove temporary validation directory: %w", err)
+	}
	return nil
}

@@ -1037,6 +1228,9 @@ func waitForReplica(ctx context.Context, r Replica, pos Pos) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

+	timer := time.NewTicker(10 * time.Second)
+	defer timer.Stop()

	once := make(chan struct{}, 1)
	once <- struct{}{}

@@ -1044,6 +1238,8 @@ func waitForReplica(ctx context.Context, r Replica, pos Pos) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
+	case <-timer.C:
+		return fmt.Errorf("replica wait exceeded timeout")
	case <-ticker.C:
	case <-once: // immediate on first check
	}
@@ -1051,7 +1247,7 @@ func waitForReplica(ctx context.Context, r Replica, pos Pos) error {
	// Obtain current position of replica, check if past target position.
	curr, err := r.CalcPos(ctx, pos.Generation)
	if err != nil {
-		log.Printf("%s(%s): cannot obtain replica position: %s", db.Path(), r.Name(), err)
+		log.Printf("%s(%s): validator: cannot obtain replica position: %s", db.Path(), r.Name(), err)
		continue
	}

@@ -1070,7 +1266,6 @@ func waitForReplica(ctx context.Context, r Replica, pos Pos) error {

	// If not ready, restart loop.
	if !ready {
-		log.Printf("%s(%s): replica at %s, waiting for %s", db.Path(), r.Name(), curr, pos)
		continue
	}
||||
continue
|
||||
}
|
||||
|
||||
|
||||
240
s3/s3.go
240
s3/s3.go
@@ -7,13 +7,16 @@ import (
"io"
"io/ioutil"
"log"
"net"
"os"
"path"
"regexp"
"sync"
"time"

"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
@@ -49,6 +52,9 @@ type Replica struct {
snapshotMu sync.Mutex
pos litestream.Pos // last position

muf sync.Mutex
f *os.File // long-lived read-only db file descriptor

wg sync.WaitGroup
cancel func()

@@ -71,10 +77,15 @@ type Replica struct {
Region string
Bucket string
Path string
Endpoint string
ForcePathStyle bool

// Time between syncs with the shadow WAL.
SyncInterval time.Duration

// Frequency to create new snapshots.
SnapshotInterval time.Duration

// Time to keep snapshots and related WAL files.
// Database is snapshotted after interval and older WAL files are discarded.
Retention time.Duration
@@ -104,16 +115,20 @@ func NewReplica(db *litestream.DB, name string) *Replica {
MonitorEnabled: true,
}

r.snapshotTotalGauge = internal.ReplicaSnapshotTotalGaugeVec.WithLabelValues(db.Path(), r.Name())
r.walBytesCounter = internal.ReplicaWALBytesCounterVec.WithLabelValues(db.Path(), r.Name())
r.walIndexGauge = internal.ReplicaWALIndexGaugeVec.WithLabelValues(db.Path(), r.Name())
r.walOffsetGauge = internal.ReplicaWALOffsetGaugeVec.WithLabelValues(db.Path(), r.Name())
r.putOperationTotalCounter = operationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "PUT")
r.putOperationBytesCounter = operationBytesCounterVec.WithLabelValues(db.Path(), r.Name(), "PUT")
r.getOperationTotalCounter = operationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "GET")
r.getOperationBytesCounter = operationBytesCounterVec.WithLabelValues(db.Path(), r.Name(), "GET")
r.listOperationTotalCounter = operationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "LIST")
r.deleteOperationTotalCounter = operationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "DELETE")
var dbPath string
if db != nil {
dbPath = db.Path()
}
r.snapshotTotalGauge = internal.ReplicaSnapshotTotalGaugeVec.WithLabelValues(dbPath, r.Name())
r.walBytesCounter = internal.ReplicaWALBytesCounterVec.WithLabelValues(dbPath, r.Name())
r.walIndexGauge = internal.ReplicaWALIndexGaugeVec.WithLabelValues(dbPath, r.Name())
r.walOffsetGauge = internal.ReplicaWALOffsetGaugeVec.WithLabelValues(dbPath, r.Name())
r.putOperationTotalCounter = operationTotalCounterVec.WithLabelValues(dbPath, r.Name(), "PUT")
r.putOperationBytesCounter = operationBytesCounterVec.WithLabelValues(dbPath, r.Name(), "PUT")
r.getOperationTotalCounter = operationTotalCounterVec.WithLabelValues(dbPath, r.Name(), "GET")
r.getOperationBytesCounter = operationBytesCounterVec.WithLabelValues(dbPath, r.Name(), "GET")
r.listOperationTotalCounter = operationTotalCounterVec.WithLabelValues(dbPath, r.Name(), "LIST")
r.deleteOperationTotalCounter = operationTotalCounterVec.WithLabelValues(dbPath, r.Name(), "DELETE")

return r
}

@@ -405,29 +420,47 @@ func (r *Replica) WALs(ctx context.Context) ([]*litestream.WALInfo, error) {
}

// Start starts replication for a given generation.
func (r *Replica) Start(ctx context.Context) {
func (r *Replica) Start(ctx context.Context) (err error) {
// Ignore if replica is being used synchronously.
if !r.MonitorEnabled {
return
return nil
}

// Stop previous replication.
r.Stop()
r.Stop(false)

// Wrap context with cancelation.
ctx, r.cancel = context.WithCancel(ctx)

// Start goroutines to manage replica data.
r.wg.Add(3)
r.wg.Add(4)
go func() { defer r.wg.Done(); r.monitor(ctx) }()
go func() { defer r.wg.Done(); r.retainer(ctx) }()
go func() { defer r.wg.Done(); r.snapshotter(ctx) }()
go func() { defer r.wg.Done(); r.validator(ctx) }()

return nil
}

// Stop cancels any outstanding replication and blocks until finished.
func (r *Replica) Stop() {
//
// Performing a hard stop will close the DB file descriptor, which could
// release per-process locks. Hard stops should only be performed when
// stopping the entire process.
func (r *Replica) Stop(hard bool) (err error) {
r.cancel()
r.wg.Wait()

r.muf.Lock()
defer r.muf.Unlock()

if hard && r.f != nil {
if e := r.f.Close(); e != nil && err == nil {
err = e
}
}

return err
}

// monitor runs in a separate goroutine and continuously replicates the DB.
@@ -462,7 +495,7 @@ func (r *Replica) monitor(ctx context.Context) {

// Synchronize the shadow wal into the replication directory.
if err := r.Sync(ctx); err != nil {
log.Printf("%s(%s): sync error: %s", r.db.Path(), r.Name(), err)
log.Printf("%s(%s): monitor error: %s", r.db.Path(), r.Name(), err)
continue
}
}
@@ -470,7 +503,18 @@ func (r *Replica) monitor(ctx context.Context) {

// retainer runs in a separate goroutine and handles retention.
func (r *Replica) retainer(ctx context.Context) {
ticker := time.NewTicker(r.RetentionCheckInterval)
// Disable retention enforcement if retention period is non-positive.
if r.Retention <= 0 {
return
}

// Ensure check interval is not longer than retention period.
checkInterval := r.RetentionCheckInterval
if checkInterval > r.Retention {
checkInterval = r.Retention
}

ticker := time.NewTicker(checkInterval)
defer ticker.Stop()

for {
@@ -486,6 +530,28 @@ func (r *Replica) retainer(ctx context.Context) {
}
}

// snapshotter runs in a separate goroutine and handles snapshotting.
func (r *Replica) snapshotter(ctx context.Context) {
if r.SnapshotInterval <= 0 {
return
}

ticker := time.NewTicker(r.SnapshotInterval)
defer ticker.Stop()

for {
select {
case <-ctx.Done():
return
case <-ticker.C:
if err := r.Snapshot(ctx); err != nil && err != litestream.ErrNoGeneration {
log.Printf("%s(%s): snapshotter error: %s", r.db.Path(), r.Name(), err)
continue
}
}
}
}

// validator runs in a separate goroutine and handles periodic validation.
func (r *Replica) validator(ctx context.Context) {
// Initialize counters since validation occurs infrequently.
@@ -563,8 +629,23 @@ func (r *Replica) CalcPos(ctx context.Context, generation string) (pos litestrea
return pos, nil
}

// Snapshot copies the entire database to the replica path.
func (r *Replica) Snapshot(ctx context.Context) error {
// Find current position of database.
pos, err := r.db.Pos()
if err != nil {
return fmt.Errorf("cannot determine current db generation: %w", err)
} else if pos.IsZero() {
return litestream.ErrNoGeneration
}
return r.snapshot(ctx, pos.Generation, pos.Index)
}

// snapshot copies the entire database to the replica path.
func (r *Replica) snapshot(ctx context.Context, generation string, index int) error {
r.muf.Lock()
defer r.muf.Unlock()

// Acquire a read lock on the database during snapshot to prevent checkpoints.
tx, err := r.db.SQLDB().Begin()
if err != nil {
@@ -575,14 +656,21 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
}
defer func() { _ = tx.Rollback() }()

// Open database file handle.
f, err := os.Open(r.db.Path())
if err != nil {
// Open long-lived file descriptor on database.
if r.f == nil {
if r.f, err = os.Open(r.db.Path()); err != nil {
return err
}
defer f.Close()
}

fi, err := f.Stat()
// Move the file descriptor to the beginning. We only use one long lived
// file descriptor because some operating systems will remove the database
// lock when closing a separate file descriptor on the DB.
if _, err := r.f.Seek(0, io.SeekStart); err != nil {
return err
}

fi, err := r.f.Stat()
if err != nil {
return err
}
@@ -590,7 +678,7 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
pr, pw := io.Pipe()
zw := lz4.NewWriter(pw)
go func() {
if _, err := io.Copy(zw, f); err != nil {
if _, err := io.Copy(zw, r.f); err != nil {
_ = pw.CloseWithError(err)
return
}
@@ -598,6 +686,7 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
}()

snapshotPath := r.SnapshotPath(generation, index)
startTime := time.Now()

if _, err := r.uploader.UploadWithContext(ctx, &s3manager.UploadInput{
Bucket: aws.String(r.Bucket),
@@ -610,6 +699,7 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
r.putOperationTotalCounter.Inc()
r.putOperationBytesCounter.Add(float64(fi.Size()))

log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime).Truncate(time.Millisecond))
return nil
}

@@ -638,19 +728,26 @@ func (r *Replica) Init(ctx context.Context) (err error) {
return nil
}

// Look up region if not specified.
// Look up region if not specified and no endpoint is used.
// Endpoints are typically used for non-S3 object stores and do not
// necessarily require a region.
region := r.Region
if region == "" {
if r.Endpoint == "" {
if region, err = r.findBucketRegion(ctx, r.Bucket); err != nil {
return fmt.Errorf("cannot lookup bucket region: %w", err)
}
} else {
region = "us-east-1" // default for non-S3 object stores
}
}

// Create new AWS session.
sess, err := session.NewSession(&aws.Config{
Credentials: credentials.NewStaticCredentials(r.AccessKeyID, r.SecretAccessKey, ""),
Region: aws.String(region),
})
config := r.config()
if region != "" {
config.Region = aws.String(region)
}
sess, err := session.NewSession(config)
if err != nil {
return fmt.Errorf("cannot create aws session: %w", err)
}
@@ -659,12 +756,27 @@ func (r *Replica) Init(ctx context.Context) (err error) {
return nil
}

// config returns the AWS configuration. Uses the default credential chain
// unless a key/secret are explicitly set.
func (r *Replica) config() *aws.Config {
config := defaults.Get().Config
if r.AccessKeyID != "" || r.SecretAccessKey != "" {
config.Credentials = credentials.NewStaticCredentials(r.AccessKeyID, r.SecretAccessKey, "")
}
if r.Endpoint != "" {
config.Endpoint = aws.String(r.Endpoint)
}
if r.ForcePathStyle {
config.S3ForcePathStyle = aws.Bool(r.ForcePathStyle)
}
return config
}

func (r *Replica) findBucketRegion(ctx context.Context, bucket string) (string, error) {
// Connect to US standard region to fetch info.
sess, err := session.NewSession(&aws.Config{
Credentials: credentials.NewStaticCredentials(r.AccessKeyID, r.SecretAccessKey, ""),
Region: aws.String("us-east-1"),
})
config := r.config()
config.Region = aws.String("us-east-1")
sess, err := session.NewSession(config)
if err != nil {
return "", err
}
@@ -922,7 +1034,6 @@ func (r *Replica) EnforceRetention(ctx context.Context) (err error) {

// If no retained snapshots exist, create a new snapshot.
if len(snapshots) == 0 {
log.Printf("%s(%s): snapshots exceeds retention, creating new snapshot", r.db.Path(), r.Name())
if err := r.snapshot(ctx, pos.Generation, pos.Index); err != nil {
return fmt.Errorf("cannot snapshot: %w", err)
}
@@ -945,7 +1056,6 @@ func (r *Replica) EnforceRetention(ctx context.Context) (err error) {

// Delete a generation if it has no snapshots being retained.
if snapshot == nil {
log.Printf("%s(%s): generation %q has no retained snapshots, deleting", r.db.Path(), r.Name(), generation)
if err := r.deleteGenerationBefore(ctx, generation, -1); err != nil {
return fmt.Errorf("cannot delete generation %q dir: %w", generation, err)
}
@@ -988,16 +1098,13 @@ func (r *Replica) deleteGenerationBefore(ctx context.Context, generation string,
}

// Delete all files in batches.
var n int
for i := 0; i < len(objIDs); i += MaxKeys {
j := i + MaxKeys
if j > len(objIDs) {
j = len(objIDs)
}

for _, objID := range objIDs[i:j] {
log.Printf("%s(%s): retention exceeded, deleting from generation %q: %s", r.db.Path(), r.Name(), generation, path.Base(*objID.Key))
}

if _, err := r.s3.DeleteObjectsWithContext(ctx, &s3.DeleteObjectsInput{
Bucket: aws.String(r.Bucket),
Delete: &s3.Delete{
@@ -1007,12 +1114,69 @@ func (r *Replica) deleteGenerationBefore(ctx context.Context, generation string,
}); err != nil {
return err
}
n += len(objIDs[i:j])
r.deleteOperationTotalCounter.Inc()
}

log.Printf("%s(%s): retainer: deleting wal files before %s/%08x n=%d", r.db.Path(), r.Name(), generation, index, n)

return nil
}

// ParseHost extracts data from a hostname depending on the service provider.
func ParseHost(s string) (bucket, region, endpoint string, forcePathStyle bool) {
// Extract port if one is specified.
host, port, err := net.SplitHostPort(s)
if err != nil {
host = s
}

// Default to path-based URLs, except with AWS S3 itself.
forcePathStyle = true

// Extract fields from provider-specific host formats.
scheme := "https"
if a := localhostRegex.FindStringSubmatch(host); a != nil {
bucket, region = a[1], "us-east-1"
scheme, endpoint = "http", "localhost"
} else if a := gcsRegex.FindStringSubmatch(host); a != nil {
bucket, region = a[1], "us-east-1"
endpoint = "storage.googleapis.com"
} else if a := digitalOceanRegex.FindStringSubmatch(host); a != nil {
bucket, region = a[1], a[2]
endpoint = fmt.Sprintf("%s.digitaloceanspaces.com", region)
} else if a := linodeRegex.FindStringSubmatch(host); a != nil {
bucket, region = a[1], a[2]
endpoint = fmt.Sprintf("%s.linodeobjects.com", region)
} else if a := backblazeRegex.FindStringSubmatch(host); a != nil {
bucket, region = a[1], a[2]
endpoint = fmt.Sprintf("s3.%s.backblazeb2.com", region)
} else {
bucket = host
forcePathStyle = false
}

// Add port back to endpoint, if available.
if endpoint != "" && port != "" {
endpoint = net.JoinHostPort(endpoint, port)
}

// Prepend scheme to endpoint.
if endpoint != "" {
endpoint = scheme + "://" + endpoint
}

return bucket, region, endpoint, forcePathStyle
}

var (
localhostRegex = regexp.MustCompile(`^(?:(.+)\.)?localhost$`)
digitalOceanRegex = regexp.MustCompile(`^(?:(.+)\.)?([^.]+)\.digitaloceanspaces.com$`)
linodeRegex = regexp.MustCompile(`^(?:(.+)\.)?([^.]+)\.linodeobjects.com$`)
backblazeRegex = regexp.MustCompile(`^(?:(.+)\.)?s3.([^.]+)\.backblazeb2.com$`)
gcsRegex = regexp.MustCompile(`^(?:(.+)\.)?storage.googleapis.com$`)
)

// S3 metrics.
var (
operationTotalCounterVec = promauto.NewCounterVec(prometheus.CounterOpts{

80
s3/s3_test.go
Normal file
@@ -0,0 +1,80 @@
package s3_test

import (
"testing"

"github.com/benbjohnson/litestream/s3"
)

func TestParseHost(t *testing.T) {
// Ensure non-specific hosts return as buckets.
t.Run("S3", func(t *testing.T) {
bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.litestream.io`)
if got, want := bucket, `test.litestream.io`; got != want {
t.Fatalf("bucket=%q, want %q", got, want)
} else if got, want := region, ``; got != want {
t.Fatalf("region=%q, want %q", got, want)
} else if got, want := endpoint, ``; got != want {
t.Fatalf("endpoint=%q, want %q", got, want)
} else if got, want := forcePathStyle, false; got != want {
t.Fatalf("forcePathStyle=%v, want %v", got, want)
}
})

// Ensure localhosts use an HTTP endpoint and extract the bucket name.
t.Run("Localhost", func(t *testing.T) {
t.Run("WithPort", func(t *testing.T) {
bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.localhost:9000`)
if got, want := bucket, `test`; got != want {
t.Fatalf("bucket=%q, want %q", got, want)
} else if got, want := region, `us-east-1`; got != want {
t.Fatalf("region=%q, want %q", got, want)
} else if got, want := endpoint, `http://localhost:9000`; got != want {
t.Fatalf("endpoint=%q, want %q", got, want)
} else if got, want := forcePathStyle, true; got != want {
t.Fatalf("forcePathStyle=%v, want %v", got, want)
}
})

t.Run("WithoutPort", func(t *testing.T) {
bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.localhost`)
if got, want := bucket, `test`; got != want {
t.Fatalf("bucket=%q, want %q", got, want)
} else if got, want := region, `us-east-1`; got != want {
t.Fatalf("region=%q, want %q", got, want)
} else if got, want := endpoint, `http://localhost`; got != want {
t.Fatalf("endpoint=%q, want %q", got, want)
} else if got, want := forcePathStyle, true; got != want {
t.Fatalf("forcePathStyle=%v, want %v", got, want)
}
})
})

// Ensure backblaze B2 URLs extract bucket, region, & endpoint from host.
t.Run("Backblaze", func(t *testing.T) {
bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test-123.s3.us-west-000.backblazeb2.com`)
if got, want := bucket, `test-123`; got != want {
t.Fatalf("bucket=%q, want %q", got, want)
} else if got, want := region, `us-west-000`; got != want {
t.Fatalf("region=%q, want %q", got, want)
} else if got, want := endpoint, `https://s3.us-west-000.backblazeb2.com`; got != want {
t.Fatalf("endpoint=%q, want %q", got, want)
} else if got, want := forcePathStyle, true; got != want {
t.Fatalf("forcePathStyle=%v, want %v", got, want)
}
})

// Ensure GCS URLs extract bucket & endpoint from host.
t.Run("GCS", func(t *testing.T) {
bucket, region, endpoint, forcePathStyle := s3.ParseHost(`litestream.io.storage.googleapis.com`)
if got, want := bucket, `litestream.io`; got != want {
t.Fatalf("bucket=%q, want %q", got, want)
} else if got, want := region, `us-east-1`; got != want {
t.Fatalf("region=%q, want %q", got, want)
} else if got, want := endpoint, `https://storage.googleapis.com`; got != want {
t.Fatalf("endpoint=%q, want %q", got, want)
} else if got, want := forcePathStyle, true; got != want {
t.Fatalf("forcePathStyle=%v, want %v", got, want)
}
})
}