# Compare commits

39 commits:

`16c50d1d2e`, `929a66314c`, `2e7a6ae715`, `896aef070c`, `3598d8b572`, `3183cf0e2e`, `a59ee6ed63`, `e4c1a82eb2`, `aa54e4698d`, `43e40ce8d3`, `0bd1b13b94`, `1c16aae550`, `49f47ea87f`, `8947adc312`, `9341863bdb`, `998e831c5c`, `b2ca113fb5`, `b211e82ed2`, `e2779169a0`, `ec2f9c84d5`, `78eb8dcc53`, `cafa0f5942`, `325482a97c`, `9cee1285b9`, `a14a74d678`, `f652186adf`, `afb8731ead`, `ce2d54cc20`, `d802e15b4f`, `d6ece0b826`, `cb007762be`, `6a90714bbe`, `622ba82ebb`, `6ca010e9db`, `ad9ce43127`, `167d333fcd`, `c5390dec1d`, `e2cbd5fb63`, `8d083f7a2d`

## README.md
````diff
@@ -10,275 +10,35 @@ background process and safely replicates changes incrementally to another file
 or S3. Litestream only communicates with SQLite through the SQLite API so it
 will not corrupt your database.
 
-If you need support or have ideas for improving Litestream, please visit the
-[GitHub Discussions](https://github.com/benbjohnson/litestream/discussions) to
-chat.
+If you need support or have ideas for improving Litestream, please join the
+[Litestream Slack][slack] or visit the [GitHub Discussions](https://github.com/benbjohnson/litestream/discussions).
+
+Please visit the [Litestream web site](https://litestream.io) for installation
+instructions and documentation.
 
 If you find this project interesting, please consider starring the project on
 GitHub.
 
+[slack]: https://join.slack.com/t/litestream/shared_invite/zt-n0j4s3ci-lx1JziR3bV6L2NMF723H3Q
+
+## Acknowledgements
+
+While the Litestream project does not accept external code patches, many
+of the most valuable contributions are in the forms of testing, feedback, and
+documentation. These help harden software and streamline usage for other users.
+
-## Installation
-
-### Mac OS (Homebrew)
-
-To install from homebrew, run the following command:
-
-```sh
-$ brew install benbjohnson/litestream/litestream
-```
-
-### Linux (Debian)
-
-You can download the `.deb` file from the [Releases page][releases] page and
-then run the following:
-
-```sh
-$ sudo dpkg -i litestream-v0.3.0-linux-amd64.deb
-```
-
-Once installed, you'll need to enable & start the service:
-
-```sh
-$ sudo systemctl enable litestream
-$ sudo systemctl start litestream
-```
-
-### Release binaries
-
-You can also download the release binary for your system from the
-[releases page][releases] and run it as a standalone application.
-
-### Building from source
-
-Download and install the [Go toolchain](https://golang.org/) and then run:
-
-```sh
-$ go install ./cmd/litestream
-```
-
-The `litestream` binary should be in your `$GOPATH/bin` folder.
````
````diff
-## Quick Start
-
-Litestream provides a configuration file that can be used for production
-deployments but you can also specify a single database and replica on the
-command line when trying it out.
-
-First, you'll need to create an S3 bucket that we'll call `"mybkt"` in this
-example. You'll also need to set your AWS credentials:
-
-```sh
-$ export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
-$ export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx
-```
-
-Next, you can run the `litestream replicate` command with the path to the
-database you want to backup and the URL of your replica destination:
-
-```sh
-$ litestream replicate /path/to/db s3://mybkt/db
-```
-
-If you make changes to your local database, those changes will be replicated
-to S3 every 10 seconds. From another terminal window, you can restore your
-database from your S3 replica:
-
-```
-$ litestream restore -o /path/to/restored/db s3://mybkt/db
-```
-
-Voila! 🎉
-
-Your database should be restored to the last replicated state that
-was sent to S3. You can adjust your replication frequency and other options by
-using a configuration-based approach specified below.
````
````diff
-## Configuration
-
-A configuration-based install gives you more replication options. By default,
-the config file lives at `/etc/litestream.yml` but you can pass in a different
-path to any `litestream` command using the `-config PATH` flag. You can also
-set the `LITESTREAM_CONFIG` environment variable to specify a new path.
-
-The configuration specifies one or more `dbs` and a list of one or more replica
-locations for each db. Below are some common configurations:
-
-### Replicate to S3
-
-This will replicate the database at `/path/to/db` to the `"/db"` path inside
-the S3 bucket named `"mybkt"`.
-
-```yaml
-access-key-id: AKIAxxxxxxxxxxxxxxxx
-secret-access-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/xxxxxxxxx
-
-dbs:
-  - path: /path/to/db
-    replicas:
-      - url: s3://mybkt/db
-```
-
-### Replicate to another file path
-
-This will replicate the database at `/path/to/db` to a directory named
-`/path/to/replica`.
-
-```yaml
-dbs:
-  - path: /path/to/db
-    replicas:
-      - path: /path/to/replica
-```
-
-### Retention period
-
-By default, replicas will retain a snapshot & subsequent WAL changes for 24
-hours. When the snapshot age exceeds the retention threshold, a new snapshot
-is taken and uploaded and the previous snapshot and WAL files are removed.
-
-You can configure this setting per-replica. Times are parsed using [Go's
-duration](https://golang.org/pkg/time/#ParseDuration) so time units of hours
-(`h`), minutes (`m`), and seconds (`s`) are allowed but days, weeks, months, and
-years are not.
-
-```yaml
-db:
-  - path: /path/to/db
-    replicas:
-      - url: s3://mybkt/db
-        retention: 1h  # 1 hour retention
-```
````
````diff
-### Monitoring replication
-
-You can also enable a Prometheus metrics endpoint to monitor replication by
-specifying a bind address with the `addr` field:
-
-```yml
-addr: ":9090"
-```
-
-This will make metrics available at: http://localhost:9090/metrics
````
````diff
-### Other configuration options
-
-These are some additional configuration options available on replicas:
-
-- `type`—Specify the type of replica (`"file"` or `"s3"`). Derived from `"path"`.
-- `name`—Specify an optional name for the replica if you are using multiple replicas.
-- `path`—File path to the replica location.
-- `url`—URL to the replica location.
-- `retention-check-interval`—Time between retention enforcement checks. Defaults to `1h`.
-- `validation-interval`—Interval between periodic checks to ensure restored backup matches current database. Disabled by default.
-
-These replica options are only available for S3 replicas:
-
-- `bucket`—S3 bucket name. Derived from `"path"`.
-- `region`—S3 bucket region. Looked up on startup if unspecified.
-- `sync-interval`—Replication sync frequency.
````
````diff
-## Usage
-
-### Replication
-
-Once your configuration is saved, you'll need to begin replication. If you
-installed the `.deb` file then run:
-
-```sh
-$ sudo systemctl restart litestream
-```
-
-To run litestream on its own, run:
-
-```sh
-# Replicate using the /etc/litestream.yml configuration.
-$ litestream replicate
-
-# Replicate using a different configuration path.
-$ litestream replicate -config /path/to/litestream.yml
-```
-
-The `litestream` command will initialize and then wait indefinitely for changes.
-You should see your destination replica path is now populated with a
-`generations` directory. Inside there should be a 16-character hex generation
-directory and inside there should be snapshots & WAL files. As you make changes
-to your source database, changes will be copied over to your replica incrementally.
-
-### Restoring a backup
-
-Litestream can restore a previous snapshot and replay all replicated WAL files.
-By default, it will restore up to the latest WAL file but you can also perform
-point-in-time restores.
-
-A database can only be restored to a path that does not exist so you don't need
-to worry about accidentally overwriting your current database.
-
-```sh
-# Restore database to original path.
-$ litestream restore /path/to/db
-
-# Restore database to a new location.
-$ litestream restore -o /path/to/restored/db /path/to/db
-
-# Restore from a replica URL.
-$ litestream restore -o /path/to/restored/db s3://mybkt/db
-
-# Restore database to a specific point-in-time.
-$ litestream restore -timestamp 2020-01-01T00:00:00Z /path/to/db
-```
-
-Point-in-time restores only have the resolution of the timestamp of the WAL file
-itself. By default, Litestream will start a new WAL file every minute so
-point-in-time restores are only accurate to the minute.
````
````diff
-## How it works
-
-SQLite provides a WAL (write-ahead log) journaling mode which writes pages to
-a `-wal` file before eventually being copied over to the original database file.
-This copying process is known as checkpointing. The WAL file works as a circular
-buffer so when the WAL reaches a certain size then it restarts from the beginning.
-
-Litestream works by taking over the checkpointing process and controlling when
-it is restarted to ensure that it copies every new page. Checkpointing is only
-allowed when there are no read transactions so Litestream maintains a
-long-running read transaction against each database until it is ready to
-checkpoint.
-
-The SQLite WAL file is copied to a separate location called the shadow WAL which
-ensures that it will not be overwritten by SQLite. This shadow WAL acts as a
-temporary buffer so that replicas can replicate to their destination (e.g.
-another file path or to S3). The shadow WAL files are removed once they have
-been fully replicated. You can find the shadow directory as a hidden directory
-next to your database file. If you database file is named `/var/lib/my.db` then
-the shadow directory will be `/var/lib/.my.db-litestream`.
-
-Litestream groups a snapshot and all subsequent WAL changes into "generations".
-A generation is started on initial replication of a database and a new
-generation will be started if litestream detects that the WAL replication is
-no longer contiguous. This can occur if the `litestream` process is stopped and
-another process is allowed to checkpoint the WAL.
+I want to give special thanks to individuals who invest much of their time and
+energy into the project to help make it better. Shout out to [Michael
+Lynch](https://github.com/mtlynch) for digging into issues and contributing to
+the documentation.
````
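The shadow directory naming described above (`/var/lib/my.db` maps to `/var/lib/.my.db-litestream`) can be derived from the database path; a small sketch, where `shadowDir` is a hypothetical helper rather than Litestream's actual API:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// shadowDir returns the hidden litestream shadow directory that sits
// next to the database file.
func shadowDir(dbPath string) string {
	dir, file := filepath.Dir(dbPath), filepath.Base(dbPath)
	return filepath.Join(dir, "."+file+"-litestream")
}

func main() {
	fmt.Println(shadowDir("/var/lib/my.db")) // /var/lib/.my.db-litestream
}
```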
````diff
 ## Open-source, not open-contribution
 
 [Similar to SQLite](https://www.sqlite.org/copyright.html), Litestream is open
-source but closed to contributions. This keeps the code base free of proprietary
-or licensed code but it also helps me continue to maintain and build Litestream.
+source but closed to code contributions. This keeps the code base free of
+proprietary or licensed code but it also helps me continue to maintain and build
+Litestream.
 
 As the author of [BoltDB](https://github.com/boltdb/bolt), I found that
 accepting and maintaining third party patches contributed to my burn out and
@@ -292,5 +52,8 @@ not wish to come off as anything but welcoming, however, I've
 made the decision to keep this project closed to contributions for my own
 mental health and long term viability of the project.
 
+The [documentation repository][docs] is MIT licensed and pull requests are welcome there.
+
 [releases]: https://github.com/benbjohnson/litestream/releases
+
+[docs]: https://github.com/benbjohnson/litestream.io
````
**databases command**

````diff
@@ -2,7 +2,6 @@ package main
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"os"
@@ -15,21 +14,20 @@ type DatabasesCommand struct{}
 
 // Run executes the command.
 func (c *DatabasesCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-databases", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
 		return err
 	} else if fs.NArg() != 0 {
-		return fmt.Errorf("too many argument")
+		return fmt.Errorf("too many arguments")
 	}
 
 	// Load configuration.
-	if configPath == "" {
-		return errors.New("-config required")
+	if *configPath == "" {
+		*configPath = DefaultConfigPath()
 	}
-	config, err := ReadConfigFile(configPath)
+	config, err := ReadConfigFile(*configPath)
 	if err != nil {
 		return err
 	}
@@ -40,7 +38,7 @@ func (c *DatabasesCommand) Run(ctx context.Context, args []string) (err error) {
 	fmt.Fprintln(w, "path\treplicas")
 	for _, dbConfig := range config.DBs {
-		db, err := newDBFromConfig(&config, dbConfig)
+		db, err := NewDBFromConfig(dbConfig)
 		if err != nil {
 			return err
 		}
````
**generations command**

````diff
@@ -2,7 +2,6 @@ package main
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"log"
@@ -18,9 +17,8 @@ type GenerationsCommand struct{}
 
 // Run executes the command.
 func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-generations", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	replicaName := fs.String("replica", "", "replica name")
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
@@ -35,12 +33,19 @@ func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error)
 	var r litestream.Replica
 	updatedAt := time.Now()
 	if isURL(fs.Arg(0)) {
-		if r, err = NewReplicaFromURL(fs.Arg(0)); err != nil {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
+		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
 			return err
 		}
-	} else if configPath != "" {
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
+
 		// Load configuration.
-		config, err := ReadConfigFile(configPath)
+		config, err := ReadConfigFile(*configPath)
 		if err != nil {
 			return err
 		}
@@ -50,7 +55,7 @@ func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error)
 			return err
 		} else if dbc := config.DBConfig(path); dbc == nil {
 			return fmt.Errorf("database not found in config: %s", path)
-		} else if db, err = newDBFromConfig(&config, dbc); err != nil {
+		} else if db, err = NewDBFromConfig(dbc); err != nil {
 			return err
 		}
@@ -65,8 +70,6 @@ func (c *GenerationsCommand) Run(ctx context.Context, args []string) (err error)
 		if updatedAt, err = db.UpdatedAt(); err != nil {
 			return err
 		}
-	} else {
-		return errors.New("config path or replica URL required")
 	}
 
 	var replicas []litestream.Replica
````
**main package**

````diff
@@ -2,15 +2,18 @@ package main
 import (
 	"context"
+	"errors"
 	"flag"
 	"fmt"
 	"io/ioutil"
 	"log"
 	"net/url"
 	"os"
+	"os/signal"
 	"os/user"
 	"path"
 	"path/filepath"
+	"regexp"
 	"strings"
 	"time"
 
@@ -25,14 +28,17 @@ var (
 	Version = "(development build)"
 )
 
+// errStop is a terminal error for indicating program should quit.
+var errStop = errors.New("stop")
+
 func main() {
 	log.SetFlags(0)
 
 	m := NewMain()
-	if err := m.Run(context.Background(), os.Args[1:]); err == flag.ErrHelp {
+	if err := m.Run(context.Background(), os.Args[1:]); err == flag.ErrHelp || err == errStop {
 		os.Exit(1)
 	} else if err != nil {
-		fmt.Fprintln(os.Stderr, err)
+		log.Println(err)
 		os.Exit(1)
 	}
 }
@@ -47,6 +53,14 @@ func NewMain() *Main {
 
 // Run executes the program.
 func (m *Main) Run(ctx context.Context, args []string) (err error) {
+	// Execute replication command if running as a Windows service.
+	if isService, err := isWindowsService(); err != nil {
+		return err
+	} else if isService {
+		return runWindowsService(ctx)
+	}
+
+	// Extract command name.
 	var cmd string
 	if len(args) > 0 {
 		cmd, args = args[0], args[1:]
````
````diff
@@ -58,7 +72,28 @@ func (m *Main) Run(ctx context.Context, args []string) (err error) {
 	case "generations":
 		return (&GenerationsCommand{}).Run(ctx, args)
 	case "replicate":
-		return (&ReplicateCommand{}).Run(ctx, args)
+		c := NewReplicateCommand()
+		if err := c.ParseFlags(ctx, args); err != nil {
+			return err
+		}
+
+		// Setup signal handler.
+		ctx, cancel := context.WithCancel(ctx)
+		ch := make(chan os.Signal, 1)
+		signal.Notify(ch, os.Interrupt)
+		go func() { <-ch; cancel() }()
+
+		if err := c.Run(ctx); err != nil {
+			return err
+		}
+
+		// Wait for signal to stop program.
+		<-ctx.Done()
+		signal.Reset()
+
+		// Gracefully close.
+		return c.Close()
+
 	case "restore":
 		return (&RestoreCommand{}).Run(ctx, args)
 	case "snapshots":
````
````diff
@@ -108,8 +143,20 @@ type Config struct {
 	// Global S3 settings
 	AccessKeyID     string `yaml:"access-key-id"`
 	SecretAccessKey string `yaml:"secret-access-key"`
-	Region          string `yaml:"region"`
-	Bucket          string `yaml:"bucket"`
 }
+
+// propagateGlobalSettings copies global S3 settings to replica configs.
+func (c *Config) propagateGlobalSettings() {
+	for _, dbc := range c.DBs {
+		for _, rc := range dbc.Replicas {
+			if rc.AccessKeyID == "" {
+				rc.AccessKeyID = c.AccessKeyID
+			}
+			if rc.SecretAccessKey == "" {
+				rc.SecretAccessKey = c.SecretAccessKey
+			}
+		}
+	}
+}
 
 // DefaultConfig returns a new instance of Config with defaults set.
````
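The `propagateGlobalSettings` method above fills empty per-replica credentials from the global config; a standalone sketch of the same idea with simplified stand-in structs (not Litestream's real types):

```go
package main

import "fmt"

type ReplicaConfig struct{ AccessKeyID, SecretAccessKey string }

type Config struct {
	AccessKeyID, SecretAccessKey string
	Replicas                     []*ReplicaConfig
}

// propagate copies global S3 credentials into replicas that did not
// override them; explicit per-replica values win.
func (c *Config) propagate() {
	for _, rc := range c.Replicas {
		if rc.AccessKeyID == "" {
			rc.AccessKeyID = c.AccessKeyID
		}
		if rc.SecretAccessKey == "" {
			rc.SecretAccessKey = c.SecretAccessKey
		}
	}
}

func main() {
	c := &Config{
		AccessKeyID:     "GLOBAL",
		SecretAccessKey: "GLOBALSECRET",
		Replicas: []*ReplicaConfig{
			{},                        // inherits both credentials
			{AccessKeyID: "OVERRIDE"}, // keeps its own key id
		},
	}
	c.propagate()
	fmt.Println(c.Replicas[0].AccessKeyID, c.Replicas[1].AccessKeyID) // GLOBAL OVERRIDE
}
```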
````diff
@@ -153,15 +200,59 @@ func ReadConfigFile(filename string) (_ Config, err error) {
 		}
 	}
 
+	// Propage settings from global config to replica configs.
+	config.propagateGlobalSettings()
+
 	return config, nil
 }
 
 // DBConfig represents the configuration for a single database.
 type DBConfig struct {
 	Path string `yaml:"path"`
+	MonitorInterval    *time.Duration `yaml:"monitor-interval"`
+	CheckpointInterval *time.Duration `yaml:"checkpoint-interval"`
+	MinCheckpointPageN *int           `yaml:"min-checkpoint-page-count"`
+	MaxCheckpointPageN *int           `yaml:"max-checkpoint-page-count"`
 
 	Replicas []*ReplicaConfig `yaml:"replicas"`
 }
 
+// NewDBFromConfig instantiates a DB based on a configuration.
+func NewDBFromConfig(dbc *DBConfig) (*litestream.DB, error) {
+	path, err := expand(dbc.Path)
+	if err != nil {
+		return nil, err
+	}
+
+	// Initialize database with given path.
+	db := litestream.NewDB(path)
+
+	// Override default database settings if specified in configuration.
+	if dbc.MonitorInterval != nil {
+		db.MonitorInterval = *dbc.MonitorInterval
+	}
+	if dbc.CheckpointInterval != nil {
+		db.CheckpointInterval = *dbc.CheckpointInterval
+	}
+	if dbc.MinCheckpointPageN != nil {
+		db.MinCheckpointPageN = *dbc.MinCheckpointPageN
+	}
+	if dbc.MaxCheckpointPageN != nil {
+		db.MaxCheckpointPageN = *dbc.MaxCheckpointPageN
+	}
+
+	// Instantiate and attach replicas.
+	for _, rc := range dbc.Replicas {
+		r, err := NewReplicaFromConfig(rc, db)
+		if err != nil {
+			return nil, err
+		}
+		db.Replicas = append(db.Replicas, r)
+	}
+
+	return db, nil
+}
 
 // ReplicaConfig represents the configuration for a single replica in a database.
 type ReplicaConfig struct {
 	Type string `yaml:"type"` // "file", "s3"
````
````diff
@@ -171,6 +262,7 @@ type ReplicaConfig struct {
 	Retention              time.Duration `yaml:"retention"`
 	RetentionCheckInterval time.Duration `yaml:"retention-check-interval"`
 	SyncInterval           time.Duration `yaml:"sync-interval"` // s3 only
+	SnapshotInterval       time.Duration `yaml:"snapshot-interval"`
 	ValidationInterval     time.Duration `yaml:"validation-interval"`
 
 	// S3 settings
````
````diff
@@ -178,26 +270,145 @@ type ReplicaConfig struct {
 	SecretAccessKey string `yaml:"secret-access-key"`
 	Region          string `yaml:"region"`
 	Bucket          string `yaml:"bucket"`
+	Endpoint        string `yaml:"endpoint"`
+	ForcePathStyle  *bool  `yaml:"force-path-style"`
 }
 
-// NewReplicaFromURL returns a new Replica instance configured from a URL.
-// The replica's database is not set.
-func NewReplicaFromURL(s string) (litestream.Replica, error) {
-	scheme, host, path, err := ParseReplicaURL(s)
-	if err != nil {
-		return nil, err
-	}
-
-	switch scheme {
-	case "file":
-		return litestream.NewFileReplica(nil, "", path), nil
-	case "s3":
-		r := s3.NewReplica(nil, "")
-		r.Bucket, r.Path = host, path
-		return r, nil
-	default:
-		return nil, fmt.Errorf("invalid replica url type: %s", s)
-	}
-}
+// NewReplicaFromConfig instantiates a replica for a DB based on a config.
+func NewReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (litestream.Replica, error) {
+	// Ensure user did not specify URL in path.
+	if isURL(c.Path) {
+		return nil, fmt.Errorf("replica path cannot be a url, please use the 'url' field instead: %s", c.Path)
+	}
+
+	switch c.ReplicaType() {
+	case "file":
+		return newFileReplicaFromConfig(c, db)
+	case "s3":
+		return newS3ReplicaFromConfig(c, db)
+	default:
+		return nil, fmt.Errorf("unknown replica type in config: %q", c.Type)
+	}
+}
+
+// newFileReplicaFromConfig returns a new instance of FileReplica build from config.
+func newFileReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (_ *litestream.FileReplica, err error) {
+	// Ensure URL & path are not both specified.
+	if c.URL != "" && c.Path != "" {
+		return nil, fmt.Errorf("cannot specify url & path for file replica")
+	}
+
+	// Parse path from URL, if specified.
+	path := c.Path
+	if c.URL != "" {
+		if _, _, path, err = ParseReplicaURL(c.URL); err != nil {
+			return nil, err
+		}
+	}
+
+	// Ensure path is set explicitly or derived from URL field.
+	if path == "" {
+		return nil, fmt.Errorf("file replica path required")
+	}
+
+	// Expand home prefix and return absolute path.
+	if path, err = expand(path); err != nil {
+		return nil, err
+	}
+
+	// Instantiate replica and apply time fields, if set.
+	r := litestream.NewFileReplica(db, c.Name, path)
+	if v := c.Retention; v > 0 {
+		r.Retention = v
+	}
+	if v := c.RetentionCheckInterval; v > 0 {
+		r.RetentionCheckInterval = v
+	}
+	if v := c.SnapshotInterval; v > 0 {
+		r.SnapshotInterval = v
+	}
+	if v := c.ValidationInterval; v > 0 {
+		r.ValidationInterval = v
+	}
+	return r, nil
+}
+
+// newS3ReplicaFromConfig returns a new instance of S3Replica build from config.
+func newS3ReplicaFromConfig(c *ReplicaConfig, db *litestream.DB) (_ *s3.Replica, err error) {
+	// Ensure URL & constituent parts are not both specified.
+	if c.URL != "" && c.Path != "" {
+		return nil, fmt.Errorf("cannot specify url & path for s3 replica")
+	} else if c.URL != "" && c.Bucket != "" {
+		return nil, fmt.Errorf("cannot specify url & bucket for s3 replica")
+	}
+
+	bucket, path := c.Bucket, c.Path
+	region, endpoint := c.Region, c.Endpoint
+
+	// Use path style if an endpoint is explicitly set. This works because the
+	// only service to not use path style is AWS which does not use an endpoint.
+	forcePathStyle := (endpoint != "")
+	if v := c.ForcePathStyle; v != nil {
+		forcePathStyle = *v
+	}
+
+	// Apply settings from URL, if specified.
+	if c.URL != "" {
+		_, host, upath, err := ParseReplicaURL(c.URL)
+		if err != nil {
+			return nil, err
+		}
+		ubucket, uregion, uendpoint, uforcePathStyle := s3.ParseHost(host)
+
+		// Only apply URL parts to field that have not been overridden.
+		if path == "" {
+			path = upath
+		}
+		if bucket == "" {
+			bucket = ubucket
+		}
+		if region == "" {
+			region = uregion
+		}
+		if endpoint == "" {
+			endpoint = uendpoint
+		}
+		if !forcePathStyle {
+			forcePathStyle = uforcePathStyle
+		}
+	}
+
+	// Ensure required settings are set.
+	if bucket == "" {
+		return nil, fmt.Errorf("bucket required for s3 replica")
+	}
+
+	// Build replica.
+	r := s3.NewReplica(db, c.Name)
+	r.AccessKeyID = c.AccessKeyID
+	r.SecretAccessKey = c.SecretAccessKey
+	r.Bucket = bucket
+	r.Path = path
+	r.Region = region
+	r.Endpoint = endpoint
+	r.ForcePathStyle = forcePathStyle
+
+	if v := c.Retention; v > 0 {
+		r.Retention = v
+	}
+	if v := c.RetentionCheckInterval; v > 0 {
+		r.RetentionCheckInterval = v
+	}
+	if v := c.SyncInterval; v > 0 {
+		r.SyncInterval = v
+	}
+	if v := c.SnapshotInterval; v > 0 {
+		r.SnapshotInterval = v
+	}
+	if v := c.ValidationInterval; v > 0 {
+		r.ValidationInterval = v
+	}
+	return r, nil
+}
 
 // ParseReplicaURL parses a replica URL.
````
@@ -222,15 +433,14 @@ func ParseReplicaURL(s string) (scheme, host, urlpath string, err error) {
|
|||||||
|
|
||||||
// isURL returns true if s can be parsed and has a scheme.
|
// isURL returns true if s can be parsed and has a scheme.
|
||||||
func isURL(s string) bool {
|
func isURL(s string) bool {
|
||||||
u, err := url.Parse(s)
|
return regexp.MustCompile(`^\w+:\/\/`).MatchString(s)
|
||||||
return err == nil && u.Scheme != ""
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// ReplicaType returns the type based on the type field or extracted from the URL.
|
// ReplicaType returns the type based on the type field or extracted from the URL.
|
||||||
func (c *ReplicaConfig) ReplicaType() string {
|
func (c *ReplicaConfig) ReplicaType() string {
|
||||||
typ, _, _, _ := ParseReplicaURL(c.URL)
|
scheme, _, _, _ := ParseReplicaURL(c.URL)
|
||||||
if typ != "" {
|
if scheme != "" {
|
||||||
return typ
|
return scheme
|
||||||
} else if c.Type != "" {
|
} else if c.Type != "" {
|
||||||
return c.Type
|
return c.Type
|
||||||
}
|
}
|
||||||
@@ -242,138 +452,11 @@ func DefaultConfigPath() string {
|
|||||||
if v := os.Getenv("LITESTREAM_CONFIG"); v != "" {
|
if v := os.Getenv("LITESTREAM_CONFIG"); v != "" {
|
||||||
return v
|
return v
|
||||||
}
|
}
|
||||||
return "/etc/litestream.yml"
|
return defaultConfigPath
|
||||||
}
|
}
|
||||||
|
|
||||||
func registerConfigFlag(fs *flag.FlagSet, p *string) {
|
func registerConfigFlag(fs *flag.FlagSet) *string {
|
||||||
fs.StringVar(p, "config", DefaultConfigPath(), "config path")
|
return fs.String("config", "", "config path")
|
||||||
}
|
|
||||||
|
|
||||||
// newDBFromConfig instantiates a DB based on a configuration.
|
|
||||||
func newDBFromConfig(c *Config, dbc *DBConfig) (*litestream.DB, error) {
|
|
||||||
path, err := expand(dbc.Path)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Initialize database with given path.
|
|
||||||
db := litestream.NewDB(path)
|
|
||||||
|
|
||||||
// Instantiate and attach replicas.
|
|
||||||
for _, rc := range dbc.Replicas {
|
|
||||||
r, err := newReplicaFromConfig(db, c, dbc, rc)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
db.Replicas = append(db.Replicas, r)
|
|
||||||
}
|
|
||||||
|
|
||||||
return db, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// newReplicaFromConfig instantiates a replica for a DB based on a config.
|
|
||||||
func newReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (litestream.Replica, error) {
|
|
||||||
// Ensure user did not specify URL in path.
|
|
||||||
if isURL(rc.Path) {
|
|
||||||
return nil, fmt.Errorf("replica path cannot be a url, please use the 'url' field instead: %s", rc.Path)
|
|
||||||
}
|
|
||||||
|
|
||||||
switch rc.ReplicaType() {
|
|
||||||
case "file":
|
|
||||||
return newFileReplicaFromConfig(db, c, dbc, rc)
|
|
||||||
case "s3":
|
|
||||||
return newS3ReplicaFromConfig(db, c, dbc, rc)
|
|
||||||
default:
|
|
||||||
return nil, fmt.Errorf("unknown replica type in config: %q", rc.Type)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// newFileReplicaFromConfig returns a new instance of FileReplica build from config.
|
|
||||||
func newFileReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (_ *litestream.FileReplica, err error) {
|
|
||||||
path := rc.Path
|
|
||||||
if rc.URL != "" {
|
|
||||||
_, _, path, err = ParseReplicaURL(rc.URL)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if path == "" {
|
|
||||||
return nil, fmt.Errorf("%s: file replica path required", db.Path())
|
|
||||||
}
|
|
||||||
|
|
||||||
if path, err = expand(path); err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
r := litestream.NewFileReplica(db, rc.Name, path)
|
|
||||||
if v := rc.Retention; v > 0 {
|
|
||||||
r.Retention = v
|
|
||||||
}
|
|
||||||
if v := rc.RetentionCheckInterval; v > 0 {
|
|
||||||
r.RetentionCheckInterval = v
|
|
||||||
}
|
|
||||||
if v := rc.ValidationInterval; v > 0 {
|
|
||||||
r.ValidationInterval = v
|
|
||||||
}
|
|
||||||
return r, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// newS3ReplicaFromConfig returns a new instance of S3Replica build from config.
|
|
||||||
func newS3ReplicaFromConfig(db *litestream.DB, c *Config, dbc *DBConfig, rc *ReplicaConfig) (_ *s3.Replica, err error) {
|
|
||||||
bucket := c.Bucket
|
|
||||||
if v := rc.Bucket; v != "" {
|
|
||||||
bucket = v
|
|
||||||
}
|
|
||||||
|
|
||||||
path := rc.Path
|
|
||||||
if rc.URL != "" {
|
|
||||||
_, bucket, path, err = ParseReplicaURL(rc.URL)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Use global or replica-specific S3 settings.
|
|
||||||
accessKeyID := c.AccessKeyID
|
|
||||||
if v := rc.AccessKeyID; v != "" {
|
|
||||||
accessKeyID = v
|
|
||||||
}
|
|
||||||
secretAccessKey := c.SecretAccessKey
|
|
||||||
if v := rc.SecretAccessKey; v != "" {
|
|
||||||
secretAccessKey = v
|
|
||||||
}
|
|
||||||
region := c.Region
|
|
||||||
if v := rc.Region; v != "" {
|
|
||||||
region = v
|
|
||||||
}
|
|
||||||
|
|
||||||
// Ensure required settings are set.
|
|
||||||
if bucket == "" {
|
|
||||||
return nil, fmt.Errorf("%s: s3 bucket required", db.Path())
|
|
||||||
}
|
|
||||||
|
|
||||||
// Build replica.
|
|
||||||
r := s3.NewReplica(db, rc.Name)
|
|
||||||
r.AccessKeyID = accessKeyID
|
|
||||||
r.SecretAccessKey = secretAccessKey
|
|
||||||
r.Region = region
|
|
||||||
r.Bucket = bucket
|
|
||||||
r.Path = path
|
|
||||||
|
|
||||||
if v := rc.Retention; v > 0 {
|
|
||||||
r.Retention = v
|
|
||||||
}
|
|
||||||
if v := rc.RetentionCheckInterval; v > 0 {
|
|
||||||
r.RetentionCheckInterval = v
|
|
||||||
}
|
|
||||||
if v := rc.SyncInterval; v > 0 {
|
|
||||||
r.SyncInterval = v
|
|
||||||
}
|
|
||||||
if v := rc.ValidationInterval; v > 0 {
|
|
||||||
r.ValidationInterval = v
|
|
||||||
}
|
|
||||||
return r, nil
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// expand returns an absolute path for s.
|
// expand returns an absolute path for s.
|
||||||
|
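The `isURL` change above swaps `url.Parse` for a `^\w+:\/\/` regexp. One reason this matters for the Windows support in this commit range: `url.Parse` treats a drive-letter path such as `C:\litestream\db` as a URL with scheme `c`. A standalone sketch contrasting the two checks (standard library only; nothing here is litestream API):

```go
package main

import (
	"fmt"
	"net/url"
	"regexp"
)

var urlRe = regexp.MustCompile(`^\w+:\/\/`)

// isURL mirrors the new regexp-based check: only strings with an
// explicit "scheme://" prefix are treated as replica URLs.
func isURL(s string) bool { return urlRe.MatchString(s) }

// oldIsURL mirrors the previous url.Parse-based check.
func oldIsURL(s string) bool {
	u, err := url.Parse(s)
	return err == nil && u.Scheme != ""
}

func main() {
	for _, s := range []string{`s3://foo/bar`, `C:\litestream\db`, `/var/lib/db`} {
		fmt.Printf("%-20s old=%v new=%v\n", s, oldIsURL(s), isURL(s))
	}
}
```

The regexp check still matches `s3://` URLs but rejects the drive-letter path, so a Windows replica path no longer gets misread as a URL.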
cmd/litestream/main_notwindows.go (new file, 17 lines)

```go
// +build !windows

package main

import (
	"context"
)

const defaultConfigPath = "/etc/litestream.yml"

func isWindowsService() (bool, error) {
	return false, nil
}

func runWindowsService(ctx context.Context) error {
	panic("cannot run windows service as unix process")
}
```
cmd/litestream/main_test.go (new file, 131 lines)

```go
package main_test

import (
	"io/ioutil"
	"path/filepath"
	"testing"

	"github.com/benbjohnson/litestream"
	main "github.com/benbjohnson/litestream/cmd/litestream"
	"github.com/benbjohnson/litestream/s3"
)

func TestReadConfigFile(t *testing.T) {
	// Ensure global AWS settings are propagated down to replica configurations.
	t.Run("PropagateGlobalSettings", func(t *testing.T) {
		filename := filepath.Join(t.TempDir(), "litestream.yml")
		if err := ioutil.WriteFile(filename, []byte(`
access-key-id: XXX
secret-access-key: YYY

dbs:
  - path: /path/to/db
    replicas:
      - url: s3://foo/bar
`[1:]), 0666); err != nil {
			t.Fatal(err)
		}

		config, err := main.ReadConfigFile(filename)
		if err != nil {
			t.Fatal(err)
		} else if got, want := config.AccessKeyID, `XXX`; got != want {
			t.Fatalf("AccessKeyID=%v, want %v", got, want)
		} else if got, want := config.SecretAccessKey, `YYY`; got != want {
			t.Fatalf("SecretAccessKey=%v, want %v", got, want)
		} else if got, want := config.DBs[0].Replicas[0].AccessKeyID, `XXX`; got != want {
			t.Fatalf("Replica.AccessKeyID=%v, want %v", got, want)
		} else if got, want := config.DBs[0].Replicas[0].SecretAccessKey, `YYY`; got != want {
			t.Fatalf("Replica.SecretAccessKey=%v, want %v", got, want)
		}
	})
}

func TestNewFileReplicaFromConfig(t *testing.T) {
	r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{Path: "/foo"}, nil)
	if err != nil {
		t.Fatal(err)
	} else if r, ok := r.(*litestream.FileReplica); !ok {
		t.Fatal("unexpected replica type")
	} else if got, want := r.Path(), "/foo"; got != want {
		t.Fatalf("Path=%s, want %s", got, want)
	}
}

func TestNewS3ReplicaFromConfig(t *testing.T) {
	t.Run("URL", func(t *testing.T) {
		r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo/bar"}, nil)
		if err != nil {
			t.Fatal(err)
		} else if r, ok := r.(*s3.Replica); !ok {
			t.Fatal("unexpected replica type")
		} else if got, want := r.Bucket, "foo"; got != want {
			t.Fatalf("Bucket=%s, want %s", got, want)
		} else if got, want := r.Path, "bar"; got != want {
			t.Fatalf("Path=%s, want %s", got, want)
		} else if got, want := r.Region, ""; got != want {
			t.Fatalf("Region=%s, want %s", got, want)
		} else if got, want := r.Endpoint, ""; got != want {
			t.Fatalf("Endpoint=%s, want %s", got, want)
		} else if got, want := r.ForcePathStyle, false; got != want {
			t.Fatalf("ForcePathStyle=%v, want %v", got, want)
		}
	})

	t.Run("MinIO", func(t *testing.T) {
		r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.localhost:9000/bar"}, nil)
		if err != nil {
			t.Fatal(err)
		} else if r, ok := r.(*s3.Replica); !ok {
			t.Fatal("unexpected replica type")
		} else if got, want := r.Bucket, "foo"; got != want {
			t.Fatalf("Bucket=%s, want %s", got, want)
		} else if got, want := r.Path, "bar"; got != want {
			t.Fatalf("Path=%s, want %s", got, want)
		} else if got, want := r.Region, "us-east-1"; got != want {
			t.Fatalf("Region=%s, want %s", got, want)
		} else if got, want := r.Endpoint, "http://localhost:9000"; got != want {
			t.Fatalf("Endpoint=%s, want %s", got, want)
		} else if got, want := r.ForcePathStyle, true; got != want {
			t.Fatalf("ForcePathStyle=%v, want %v", got, want)
		}
	})

	t.Run("Backblaze", func(t *testing.T) {
		r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.s3.us-west-000.backblazeb2.com/bar"}, nil)
		if err != nil {
			t.Fatal(err)
		} else if r, ok := r.(*s3.Replica); !ok {
			t.Fatal("unexpected replica type")
		} else if got, want := r.Bucket, "foo"; got != want {
			t.Fatalf("Bucket=%s, want %s", got, want)
		} else if got, want := r.Path, "bar"; got != want {
			t.Fatalf("Path=%s, want %s", got, want)
		} else if got, want := r.Region, "us-west-000"; got != want {
			t.Fatalf("Region=%s, want %s", got, want)
		} else if got, want := r.Endpoint, "https://s3.us-west-000.backblazeb2.com"; got != want {
			t.Fatalf("Endpoint=%s, want %s", got, want)
		} else if got, want := r.ForcePathStyle, true; got != want {
			t.Fatalf("ForcePathStyle=%v, want %v", got, want)
		}
	})

	t.Run("GCS", func(t *testing.T) {
		r, err := main.NewReplicaFromConfig(&main.ReplicaConfig{URL: "s3://foo.storage.googleapis.com/bar"}, nil)
		if err != nil {
			t.Fatal(err)
		} else if r, ok := r.(*s3.Replica); !ok {
			t.Fatal("unexpected replica type")
		} else if got, want := r.Bucket, "foo"; got != want {
			t.Fatalf("Bucket=%s, want %s", got, want)
		} else if got, want := r.Path, "bar"; got != want {
			t.Fatalf("Path=%s, want %s", got, want)
		} else if got, want := r.Region, "us-east-1"; got != want {
			t.Fatalf("Region=%s, want %s", got, want)
		} else if got, want := r.Endpoint, "https://storage.googleapis.com"; got != want {
			t.Fatalf("Endpoint=%s, want %s", got, want)
		} else if got, want := r.ForcePathStyle, true; got != want {
			t.Fatalf("ForcePathStyle=%v, want %v", got, want)
		}
	})
}
```
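The MinIO, Backblaze, and GCS expectations above are produced by `s3.ParseHost`, which this diff does not show. Purely as an illustration, here is a hypothetical re-implementation that satisfies just these four fixtures; `parseHost` is our name, not litestream's, and the real function handles more host shapes:

```go
package main

import (
	"fmt"
	"strings"
)

// parseHost is an illustrative sketch (not litestream's actual s3.ParseHost)
// of splitting a replica URL host into bucket, region, endpoint, and
// path-style settings, matching the expectations in main_test.go.
func parseHost(host string) (bucket, region, endpoint string, forcePathStyle bool) {
	bucket, rest, found := strings.Cut(host, ".")
	if !found {
		// Bare bucket name: native AWS S3, virtual-hosted style.
		return host, "", "", false
	}

	// Any explicit endpoint implies path-style addressing.
	forcePathStyle = true
	region = "us-east-1" // default region for non-AWS providers

	// Backblaze B2 hosts encode the region: s3.<region>.backblazeb2.com.
	if parts := strings.Split(rest, "."); len(parts) == 4 &&
		parts[0] == "s3" && parts[2] == "backblazeb2" {
		region = parts[1]
	}

	scheme := "https"
	if strings.HasPrefix(rest, "localhost") || strings.HasPrefix(rest, "127.0.0.1") {
		scheme = "http" // local MinIO-style endpoints are typically plain HTTP
	}
	return bucket, region, scheme + "://" + rest, forcePathStyle
}

func main() {
	for _, host := range []string{
		"foo",
		"foo.localhost:9000",
		"foo.s3.us-west-000.backblazeb2.com",
		"foo.storage.googleapis.com",
	} {
		b, r, e, f := parseHost(host)
		fmt.Printf("%s -> bucket=%s region=%s endpoint=%s pathStyle=%v\n", host, b, r, e, f)
	}
}
```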
cmd/litestream/main_windows.go (new file, 105 lines)

```go
// +build windows

package main

import (
	"context"
	"io"
	"log"
	"os"

	"golang.org/x/sys/windows"
	"golang.org/x/sys/windows/svc"
	"golang.org/x/sys/windows/svc/eventlog"
)

const defaultConfigPath = `C:\Litestream\litestream.yml`

// serviceName is the Windows Service name.
const serviceName = "Litestream"

// isWindowsService returns true if currently executing within a Windows service.
func isWindowsService() (bool, error) {
	return svc.IsWindowsService()
}

func runWindowsService(ctx context.Context) error {
	// Attempt to install new log service. This will fail if already installed.
	// We don't log the error because we don't have anywhere to log until we open the log.
	_ = eventlog.InstallAsEventCreate(serviceName, eventlog.Error|eventlog.Warning|eventlog.Info)

	elog, err := eventlog.Open(serviceName)
	if err != nil {
		return err
	}
	defer elog.Close()

	// Set eventlog as log writer while running.
	log.SetOutput((*eventlogWriter)(elog))
	defer log.SetOutput(os.Stderr)

	log.Print("Litestream service starting")

	if err := svc.Run(serviceName, &windowsService{ctx: ctx}); err != nil {
		return errStop
	}

	log.Print("Litestream service stopped")
	return nil
}

// windowsService is an interface adapter for svc.Handler.
type windowsService struct {
	ctx context.Context
}

func (s *windowsService) Execute(args []string, r <-chan svc.ChangeRequest, statusCh chan<- svc.Status) (svcSpecificEC bool, exitCode uint32) {
	var err error

	// Notify Windows that the service is starting up.
	statusCh <- svc.Status{State: svc.StartPending}

	// Instantiate replication command and load configuration.
	c := NewReplicateCommand()
	if c.Config, err = ReadConfigFile(DefaultConfigPath()); err != nil {
		log.Printf("cannot load configuration: %s", err)
		return true, 1
	}

	// Execute replication command.
	if err := c.Run(s.ctx); err != nil {
		log.Printf("cannot replicate: %s", err)
		statusCh <- svc.Status{State: svc.StopPending}
		return true, 2
	}

	// Notify Windows that the service is now running.
	statusCh <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop}

	for {
		select {
		case req := <-r:
			switch req.Cmd {
			case svc.Stop:
				c.Close()
				statusCh <- svc.Status{State: svc.StopPending}
				return false, windows.NO_ERROR
			case svc.Interrogate:
				statusCh <- req.CurrentStatus
			default:
				log.Printf("Litestream service received unexpected change request cmd: %d", req.Cmd)
			}
		}
	}
}

// Ensure implementation implements io.Writer interface.
var _ io.Writer = (*eventlogWriter)(nil)

// eventlogWriter is an adapter for using eventlog.Log as an io.Writer.
type eventlogWriter eventlog.Log

func (w *eventlogWriter) Write(p []byte) (n int, err error) {
	elog := (*eventlog.Log)(w)
	return 0, elog.Info(1, string(p))
}
```
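`eventlogWriter` above is an instance of a general Go pattern: adapt a message-oriented log sink into an `io.Writer` so `log.SetOutput` can target it. A portable sketch of the same pattern (the `funcWriter` type is illustrative, not from this diff):

```go
package main

import (
	"fmt"
	"log"
	"strings"
)

// funcWriter adapts any func(string) error into an io.Writer, the same
// shape eventlogWriter uses to route the standard logger into the
// Windows event log.
type funcWriter func(string) error

func (w funcWriter) Write(p []byte) (n int, err error) {
	return len(p), w(string(p))
}

func main() {
	var lines []string
	// Capture standard-logger output into a slice instead of stderr.
	log.SetOutput(funcWriter(func(s string) error {
		lines = append(lines, strings.TrimSuffix(s, "\n"))
		return nil
	}))
	log.SetFlags(0) // drop the date/time prefix for a stable message

	log.Print("Litestream service starting")
	fmt.Println(lines[0])
}
```

Note that the sketch returns `len(p)` from `Write`, satisfying the usual `io.Writer` contract; the diff's `eventlogWriter` returns `0`, which the `log` package happens to tolerate since it ignores the byte count.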
cmd/litestream/replicate.go

```diff
@@ -2,7 +2,6 @@ package main
 
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"log"
@@ -10,7 +9,6 @@ import (
 	"net/http"
 	_ "net/http/pprof"
 	"os"
-	"os/signal"
 	"time"
 
 	"github.com/benbjohnson/litestream"
@@ -20,28 +18,34 @@ import (
 
 // ReplicateCommand represents a command that continuously replicates SQLite databases.
 type ReplicateCommand struct {
-	ConfigPath string
 	Config Config
 
 	// List of managed databases specified in the config.
 	DBs []*litestream.DB
 }
 
-// Run loads all databases specified in the configuration.
-func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
+func NewReplicateCommand() *ReplicateCommand {
+	return &ReplicateCommand{}
+}
+
+// ParseFlags parses the CLI flags and loads the configuration file.
+func (c *ReplicateCommand) ParseFlags(ctx context.Context, args []string) (err error) {
 	fs := flag.NewFlagSet("litestream-replicate", flag.ContinueOnError)
 	tracePath := fs.String("trace", "", "trace path")
-	registerConfigFlag(fs, &c.ConfigPath)
+	configPath := registerConfigFlag(fs)
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
 		return err
 	}
 
 	// Load configuration or use CLI args to build db/replica.
-	var config Config
 	if fs.NArg() == 1 {
 		return fmt.Errorf("must specify at least one replica URL for %s", fs.Arg(0))
 	} else if fs.NArg() > 1 {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
+
 		dbConfig := &DBConfig{Path: fs.Arg(0)}
 		for _, u := range fs.Args()[1:] {
 			dbConfig.Replicas = append(dbConfig.Replicas, &ReplicaConfig{
@@ -49,14 +53,14 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
 				SyncInterval: 1 * time.Second,
 			})
 		}
-		config.DBs = []*DBConfig{dbConfig}
-	} else if c.ConfigPath != "" {
-		config, err = ReadConfigFile(c.ConfigPath)
-		if err != nil {
+		c.Config.DBs = []*DBConfig{dbConfig}
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
+		if c.Config, err = ReadConfigFile(*configPath); err != nil {
 			return err
 		}
-	} else {
-		return errors.New("-config flag or database/replica arguments required")
 	}
 
 	// Enable trace logging.
@@ -69,21 +73,20 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
 		litestream.Tracef = log.New(f, "", log.LstdFlags|log.LUTC|log.Lshortfile).Printf
 	}
 
-	// Setup signal handler.
-	ctx, cancel := context.WithCancel(ctx)
-	ch := make(chan os.Signal, 1)
-	signal.Notify(ch, os.Interrupt)
-	go func() { <-ch; cancel() }()
-
+	return nil
+}
+
+// Run loads all databases specified in the configuration.
+func (c *ReplicateCommand) Run(ctx context.Context) (err error) {
 	// Display version information.
-	fmt.Printf("litestream %s\n", Version)
+	log.Printf("litestream %s", Version)
 
-	if len(config.DBs) == 0 {
-		fmt.Println("no databases specified in configuration")
+	if len(c.Config.DBs) == 0 {
+		log.Println("no databases specified in configuration")
 	}
 
-	for _, dbConfig := range config.DBs {
-		db, err := newDBFromConfig(&config, dbConfig)
+	for _, dbConfig := range c.Config.DBs {
+		db, err := NewDBFromConfig(dbConfig)
 		if err != nil {
 			return err
 		}
@@ -97,41 +100,37 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
 
 	// Notify user that initialization is done.
 	for _, db := range c.DBs {
-		fmt.Printf("initialized db: %s\n", db.Path())
+		log.Printf("initialized db: %s", db.Path())
 		for _, r := range db.Replicas {
 			switch r := r.(type) {
 			case *litestream.FileReplica:
-				fmt.Printf("replicating to: name=%q type=%q path=%q\n", r.Name(), r.Type(), r.Path())
+				log.Printf("replicating to: name=%q type=%q path=%q", r.Name(), r.Type(), r.Path())
 			case *s3.Replica:
-				fmt.Printf("replicating to: name=%q type=%q bucket=%q path=%q region=%q\n", r.Name(), r.Type(), r.Bucket, r.Path, r.Region)
+				log.Printf("replicating to: name=%q type=%q bucket=%q path=%q region=%q endpoint=%q sync-interval=%s", r.Name(), r.Type(), r.Bucket, r.Path, r.Region, r.Endpoint, r.SyncInterval)
 			default:
-				fmt.Printf("replicating to: name=%q type=%q\n", r.Name(), r.Type())
+				log.Printf("replicating to: name=%q type=%q", r.Name(), r.Type())
 			}
 		}
 	}
 
 	// Serve metrics over HTTP if enabled.
-	if config.Addr != "" {
-		_, port, _ := net.SplitHostPort(config.Addr)
-		fmt.Printf("serving metrics on http://localhost:%s/metrics\n", port)
+	if c.Config.Addr != "" {
+		hostport := c.Config.Addr
+		if host, port, _ := net.SplitHostPort(c.Config.Addr); port == "" {
+			return fmt.Errorf("must specify port for bind address: %q", c.Config.Addr)
+		} else if host == "" {
+			hostport = net.JoinHostPort("localhost", port)
+		}
+
+		log.Printf("serving metrics on http://%s/metrics", hostport)
 		go func() {
 			http.Handle("/metrics", promhttp.Handler())
-			if err := http.ListenAndServe(config.Addr, nil); err != nil {
+			if err := http.ListenAndServe(c.Config.Addr, nil); err != nil {
 				log.Printf("cannot start metrics server: %s", err)
 			}
 		}()
 	}
 
-	// Wait for signal to stop program.
-	<-ctx.Done()
-	signal.Reset()
-
-	// Gracefully close
-	if err := c.Close(); err != nil {
-		fmt.Fprintln(os.Stderr, err)
-		os.Exit(1)
-	}
-
 	return nil
 }
@@ -139,12 +138,13 @@ func (c *ReplicateCommand) Run(ctx context.Context, args []string) (err error) {
 func (c *ReplicateCommand) Close() (err error) {
 	for _, db := range c.DBs {
 		if e := db.SoftClose(); e != nil {
-			fmt.Printf("error closing db: path=%s err=%s\n", db.Path(), e)
+			log.Printf("error closing db: path=%s err=%s", db.Path(), e)
 			if err == nil {
 				err = e
 			}
 		}
 	}
+	// TODO(windows): Clear DBs
 	return err
 }
```
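The new metrics bind-address handling in `Run` normalizes a host-less address like `:9090` to `localhost:9090` for the startup message and rejects addresses without a port. The same logic, extracted into a standalone helper (`normalizeAddr` is our name, not litestream's):

```go
package main

import (
	"fmt"
	"net"
)

// normalizeAddr mirrors the bind-address handling added to
// ReplicateCommand.Run: an address missing a port is rejected, and an
// address missing a host gets "localhost" substituted for display.
func normalizeAddr(addr string) (string, error) {
	host, port, err := net.SplitHostPort(addr)
	if err != nil || port == "" {
		return "", fmt.Errorf("must specify port for bind address: %q", addr)
	}
	if host == "" {
		return net.JoinHostPort("localhost", port), nil
	}
	return addr, nil
}

func main() {
	for _, addr := range []string{":9090", "0.0.0.0:9090", "noport"} {
		hp, err := normalizeAddr(addr)
		fmt.Println(hp, err)
	}
}
```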
cmd/litestream/restore.go

```diff
@@ -17,12 +17,11 @@ type RestoreCommand struct{}
 
 // Run executes the command.
 func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	opt := litestream.NewRestoreOptions()
 	opt.Verbose = true
 
 	fs := flag.NewFlagSet("litestream-restore", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	fs.StringVar(&opt.OutputPath, "o", "", "output path")
 	fs.StringVar(&opt.ReplicaName, "replica", "", "replica name")
 	fs.StringVar(&opt.Generation, "generation", "", "generation name")
@@ -59,15 +58,19 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
 	// Determine replica & generation to restore from.
 	var r litestream.Replica
 	if isURL(fs.Arg(0)) {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
 		if r, err = c.loadFromURL(ctx, fs.Arg(0), &opt); err != nil {
 			return err
 		}
-	} else if configPath != "" {
-		if r, err = c.loadFromConfig(ctx, fs.Arg(0), configPath, &opt); err != nil {
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
+		if r, err = c.loadFromConfig(ctx, fs.Arg(0), *configPath, &opt); err != nil {
 			return err
 		}
-	} else {
-		return errors.New("config path or replica URL required")
 	}
 
 	// Return an error if no matching targets found.
@@ -80,7 +83,7 @@ func (c *RestoreCommand) Run(ctx context.Context, args []string) (err error) {
 
 // loadFromURL creates a replica & updates the restore options from a replica URL.
 func (c *RestoreCommand) loadFromURL(ctx context.Context, replicaURL string, opt *litestream.RestoreOptions) (litestream.Replica, error) {
-	r, err := NewReplicaFromURL(replicaURL)
+	r, err := NewReplicaFromConfig(&ReplicaConfig{URL: replicaURL}, nil)
 	if err != nil {
 		return nil, err
 	}
@@ -104,7 +107,7 @@ func (c *RestoreCommand) loadFromConfig(ctx context.Context, dbPath, configPath
 	if dbConfig == nil {
 		return nil, fmt.Errorf("database not found in config: %s", dbPath)
 	}
-	db, err := newDBFromConfig(&config, dbConfig)
+	db, err := NewDBFromConfig(dbConfig)
 	if err != nil {
 		return nil, err
 	}
```
cmd/litestream/snapshots.go

```diff
@@ -2,7 +2,6 @@ package main
 
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"os"
@@ -17,9 +16,8 @@ type SnapshotsCommand struct{}
 
 // Run executes the command.
 func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-snapshots", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	replicaName := fs.String("replica", "", "replica name")
 	fs.Usage = c.Usage
 	if err := fs.Parse(args); err != nil {
@@ -33,12 +31,19 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
 	var db *litestream.DB
 	var r litestream.Replica
 	if isURL(fs.Arg(0)) {
-		if r, err = NewReplicaFromURL(fs.Arg(0)); err != nil {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
+		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
 			return err
 		}
-	} else if configPath != "" {
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
+
 		// Load configuration.
-		config, err := ReadConfigFile(configPath)
+		config, err := ReadConfigFile(*configPath)
 		if err != nil {
 			return err
 		}
@@ -48,7 +53,7 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
 			return err
 		} else if dbc := config.DBConfig(path); dbc == nil {
 			return fmt.Errorf("database not found in config: %s", path)
-		} else if db, err = newDBFromConfig(&config, dbc); err != nil {
+		} else if db, err = NewDBFromConfig(dbc); err != nil {
 			return err
 		}
@@ -58,8 +63,6 @@ func (c *SnapshotsCommand) Run(ctx context.Context, args []string) (err error) {
 				return fmt.Errorf("replica %q not found for database %q", *replicaName, db.Path())
 			}
 		}
-	} else {
-		return errors.New("config path or replica URL required")
 	}
 
 	// Find snapshots by db or replica.
```
|||||||
@@ -2,7 +2,6 @@ package main
 
 import (
 	"context"
-	"errors"
 	"flag"
 	"fmt"
 	"os"
@@ -17,9 +16,8 @@ type WALCommand struct{}
 
 // Run executes the command.
 func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
-	var configPath string
 	fs := flag.NewFlagSet("litestream-wal", flag.ContinueOnError)
-	registerConfigFlag(fs, &configPath)
+	configPath := registerConfigFlag(fs)
 	replicaName := fs.String("replica", "", "replica name")
 	generation := fs.String("generation", "", "generation name")
 	fs.Usage = c.Usage
@@ -34,12 +32,19 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
 	var db *litestream.DB
 	var r litestream.Replica
 	if isURL(fs.Arg(0)) {
-		if r, err = NewReplicaFromURL(fs.Arg(0)); err != nil {
+		if *configPath != "" {
+			return fmt.Errorf("cannot specify a replica URL and the -config flag")
+		}
+		if r, err = NewReplicaFromConfig(&ReplicaConfig{URL: fs.Arg(0)}, nil); err != nil {
 			return err
 		}
-	} else if configPath != "" {
+	} else {
+		if *configPath == "" {
+			*configPath = DefaultConfigPath()
+		}
+
 		// Load configuration.
-		config, err := ReadConfigFile(configPath)
+		config, err := ReadConfigFile(*configPath)
 		if err != nil {
 			return err
 		}
@@ -49,7 +54,7 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
 			return err
 		} else if dbc := config.DBConfig(path); dbc == nil {
 			return fmt.Errorf("database not found in config: %s", path)
-		} else if db, err = newDBFromConfig(&config, dbc); err != nil {
+		} else if db, err = NewDBFromConfig(dbc); err != nil {
 			return err
 		}
@@ -59,8 +64,6 @@ func (c *WALCommand) Run(ctx context.Context, args []string) (err error) {
 				return fmt.Errorf("replica %q not found for database %q", *replicaName, db.Path())
 			}
 		}
-	} else {
-		return errors.New("config path or replica URL required")
 	}
 
 	// Find WAL files by db or replica.
db.go (119 changes)
@@ -45,6 +45,7 @@ type DB struct {
 	mu sync.RWMutex
 	path string // path to database
 	db *sql.DB // target database
+	f *os.File // long-running db file descriptor
 	rtx *sql.Tx // long running read transaction
 	pageSize int // page size, in bytes
 	notify chan struct{} // closes on WAL change
@@ -259,6 +260,11 @@ func (db *DB) PageSize() int {
 
 // Open initializes the background monitoring goroutine.
 func (db *DB) Open() (err error) {
+	// Validate fields on database.
+	if db.MinCheckpointPageN <= 0 {
+		return fmt.Errorf("minimum checkpoint page count required")
+	}
+
 	// Validate that all replica names are unique.
 	m := make(map[string]struct{})
 	for _, r := range db.Replicas {
@@ -285,15 +291,23 @@ func (db *DB) Open() (err error) {
 // Close releases the read lock & closes the database. This method should only
 // be called by tests as it causes the underlying database to be checkpointed.
 func (db *DB) Close() (err error) {
-	if e := db.SoftClose(); e != nil && err == nil {
+	// Ensure replicas all stop replicating.
+	for _, r := range db.Replicas {
+		r.Stop(true)
+	}
+
+	if db.rtx != nil {
+		if e := db.releaseReadLock(); e != nil && err == nil {
 			err = e
 		}
+	}
 
 	if db.db != nil {
 		if e := db.db.Close(); e != nil && err == nil {
 			err = e
 		}
 	}
 
 	return err
 }
 
@@ -381,11 +395,34 @@ func (db *DB) init() (err error) {
 	dsn := db.path
 	dsn += fmt.Sprintf("?_busy_timeout=%d", BusyTimeout.Milliseconds())
 
-	// Connect to SQLite database & enable WAL.
+	// Connect to SQLite database.
 	if db.db, err = sql.Open("sqlite3", dsn); err != nil {
 		return err
-	} else if _, err := db.db.Exec(`PRAGMA journal_mode = wal;`); err != nil {
+	}
+
+	// Open long-running database file descriptor. Required for non-OFD locks.
+	if db.f, err = os.Open(db.path); err != nil {
+		return fmt.Errorf("open db file descriptor: %w", err)
+	}
+
+	// Ensure database is closed if init fails.
+	// Initialization can retry on next sync.
+	defer func() {
+		if err != nil {
+			_ = db.releaseReadLock()
+			db.db.Close()
+			db.f.Close()
+			db.db, db.f = nil, nil
+		}
+	}()
+
+	// Enable WAL and ensure it is set. New mode should be returned on success:
+	// https://www.sqlite.org/pragma.html#pragma_journal_mode
+	var mode string
+	if err := db.db.QueryRow(`PRAGMA journal_mode = wal;`).Scan(&mode); err != nil {
 		return fmt.Errorf("enable wal: %w", err)
+	} else if mode != "wal" {
+		return fmt.Errorf("enable wal failed, mode=%q", mode)
 	}
 
 	// Disable autocheckpoint for litestream's connection.
@@ -425,7 +462,7 @@ func (db *DB) init() (err error) {
 
 	// If we have an existing shadow WAL, ensure the headers match.
 	if err := db.verifyHeadersMatch(); err != nil {
-		log.Printf("%s: init: cannot determine last wal position, clearing generation (%s)", db.path, err)
+		log.Printf("%s: init: cannot determine last wal position, clearing generation; %s", db.path, err)
 		if err := os.Remove(db.GenerationNamePath()); err != nil && !os.IsNotExist(err) {
 			return fmt.Errorf("remove generation name: %w", err)
 		}
@@ -475,7 +512,7 @@ func (db *DB) verifyHeadersMatch() error {
 	}
 
 	if !bytes.Equal(hdr0, hdr1) {
-		return fmt.Errorf("wal header mismatch")
+		return fmt.Errorf("wal header mismatch %x <> %x on %s", hdr0, hdr1, shadowWALPath)
 	}
 	return nil
 }
@@ -569,7 +606,7 @@ func (db *DB) SoftClose() (err error) {
 
 	// Ensure replicas all stop replicating.
 	for _, r := range db.Replicas {
-		r.Stop()
+		r.Stop(false)
 	}
 
 	if db.rtx != nil {
@@ -843,7 +880,7 @@ func (db *DB) verify() (info syncInfo, err error) {
 	if err != nil {
 		return info, err
 	}
-	info.walSize = fi.Size()
+	info.walSize = frameAlign(fi.Size(), db.pageSize)
 	info.walModTime = fi.ModTime()
 	db.walSizeGauge.Set(float64(fi.Size()))
 
@@ -867,7 +904,6 @@ func (db *DB) verify() (info syncInfo, err error) {
 	}
 	info.shadowWALSize = frameAlign(fi.Size(), db.pageSize)
 
-	// Truncate shadow WAL if there is a partial page.
 	// Exit if shadow WAL does not contain a full header.
 	if info.shadowWALSize < WALHeaderSize {
 		info.reason = "short shadow wal"
@@ -900,9 +936,9 @@ func (db *DB) verify() (info syncInfo, err error) {
 	// Verify last page synced still matches.
 	if info.shadowWALSize > WALHeaderSize {
 		offset := info.shadowWALSize - int64(db.pageSize+WALFrameHeaderSize)
-		if buf0, err := readFileAt(db.WALPath(), offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
+		if buf0, err := readWALFileAt(db.WALPath(), offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
 			return info, fmt.Errorf("cannot read last synced wal page: %w", err)
-		} else if buf1, err := readFileAt(info.shadowWALPath, offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
+		} else if buf1, err := readWALFileAt(info.shadowWALPath, offset, int64(db.pageSize+WALFrameHeaderSize)); err != nil {
 			return info, fmt.Errorf("cannot read last synced shadow wal page: %w", err)
 		} else if !bytes.Equal(buf0, buf1) {
 			info.reason = "wal overwritten by another process"
@@ -1321,6 +1357,21 @@ func (db *DB) checkpointAndInit(generation, mode string) error {
 		return nil
 	}
 
+	// Start a transaction. This will be promoted immediately after.
+	tx, err := db.db.Begin()
+	if err != nil {
+		return fmt.Errorf("begin: %w", err)
+	}
+	defer func() { _ = rollback(tx) }()
+
+	// Insert into the lock table to promote to a write tx. The lock table
+	// insert will never actually occur because our tx will be rolled back,
+	// however, it will ensure our tx grabs the write lock. Unfortunately,
+	// we can't call "BEGIN IMMEDIATE" as we are already in a transaction.
+	if _, err := tx.ExecContext(db.ctx, `INSERT INTO _litestream_lock (id) VALUES (1);`); err != nil {
+		return fmt.Errorf("_litestream_lock: %w", err)
+	}
+
 	// Copy the end of the previous WAL before starting a new shadow WAL.
 	if _, err := db.copyToShadowWAL(shadowWALPath); err != nil {
 		return fmt.Errorf("cannot copy to end of shadow wal: %w", err)
@@ -1338,6 +1389,10 @@ func (db *DB) checkpointAndInit(generation, mode string) error {
 		return fmt.Errorf("cannot init shadow wal file: name=%s err=%w", newShadowWALPath, err)
 	}
 
+	// Release write lock before checkpointing & exiting.
+	if err := tx.Rollback(); err != nil {
+		return fmt.Errorf("rollback post-checkpoint tx: %w", err)
+	}
 	return nil
 }
 
@@ -1449,20 +1504,6 @@ func RestoreReplica(ctx context.Context, r Replica, opt RestoreOptions) error {
 	return nil
 }
 
-func checksumFile(filename string) (uint64, error) {
-	f, err := os.Open(filename)
-	if err != nil {
-		return 0, err
-	}
-	defer f.Close()
-
-	h := crc64.New(crc64.MakeTable(crc64.ISO))
-	if _, err := io.Copy(h, f); err != nil {
-		return 0, err
-	}
-	return h.Sum64(), nil
-}
-
 // CalcRestoreTarget returns a replica & generation to restore from based on opt criteria.
 func (db *DB) CalcRestoreTarget(ctx context.Context, opt RestoreOptions) (Replica, string, error) {
 	var target struct {
@@ -1655,11 +1696,14 @@ func (db *DB) CRC64() (uint64, Pos, error) {
 	}
 	pos.Offset = 0
 
-	chksum, err := checksumFile(db.Path())
-	if err != nil {
+	// Seek to the beginning of the db file descriptor and checksum whole file.
+	h := crc64.New(crc64.MakeTable(crc64.ISO))
+	if _, err := db.f.Seek(0, io.SeekStart); err != nil {
+		return 0, pos, err
+	} else if _, err := io.Copy(h, db.f); err != nil {
 		return 0, pos, err
 	}
-	return chksum, pos, nil
+	return h.Sum64(), pos, nil
 }
 
 // RestoreOptions represents options for DB.Restore().
@@ -1791,24 +1835,3 @@ func headerByteOrder(hdr []byte) (binary.ByteOrder, error) {
 		return nil, fmt.Errorf("invalid wal header magic: %x", magic)
 	}
 }
 
-func copyFile(dst, src string) error {
-	r, err := os.Open(src)
-	if err != nil {
-		return err
-	}
-	defer r.Close()
-
-	w, err := os.Create(dst)
-	if err != nil {
-		return err
-	}
-	defer w.Close()
-
-	if _, err := io.Copy(w, r); err != nil {
-		return err
-	} else if err := w.Sync(); err != nil {
-		return err
-	}
-	return nil
-}
etc/build.ps1 (new file, 17 lines)
@@ -0,0 +1,17 @@
+[CmdletBinding()]
+Param (
+    [Parameter(Mandatory = $true)]
+    [String] $Version
+)
+$ErrorActionPreference = "Stop"
+
+# Update working directory.
+Push-Location $PSScriptRoot
+Trap {
+    Pop-Location
+}
+
+Invoke-Expression "candle.exe -nologo -arch x64 -ext WixUtilExtension -out litestream.wixobj -dVersion=`"$Version`" litestream.wxs"
+Invoke-Expression "light.exe -nologo -spdb -ext WixUtilExtension -out `"litestream-${Version}.msi`" litestream.wixobj"
+
+Pop-Location
etc/litestream.wxs (new file, 89 lines)
@@ -0,0 +1,89 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix
+	xmlns="http://schemas.microsoft.com/wix/2006/wi"
+	xmlns:util="http://schemas.microsoft.com/wix/UtilExtension"
+>
+	<?if $(sys.BUILDARCH)=x64 ?>
+		<?define PlatformProgramFiles = "ProgramFiles64Folder" ?>
+	<?else ?>
+		<?define PlatformProgramFiles = "ProgramFilesFolder" ?>
+	<?endif ?>
+
+	<Product
+		Id="*"
+		UpgradeCode="5371367e-58b3-4e52-be0d-46945eb71ce6"
+		Name="Litestream"
+		Version="$(var.Version)"
+		Manufacturer="Litestream"
+		Language="1033"
+		Codepage="1252"
+	>
+		<Package
+			Id="*"
+			Manufacturer="Litestream"
+			InstallScope="perMachine"
+			InstallerVersion="500"
+			Description="Litestream $(var.Version) installer"
+			Compressed="yes"
+		/>
+
+		<Media Id="1" Cabinet="litestream.cab" EmbedCab="yes"/>
+
+		<MajorUpgrade
+			Schedule="afterInstallInitialize"
+			DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit."
+		/>
+
+		<Directory Id="TARGETDIR" Name="SourceDir">
+			<Directory Id="$(var.PlatformProgramFiles)">
+				<Directory Id="APPLICATIONROOTDIRECTORY" Name="Litestream"/>
+			</Directory>
+		</Directory>
+
+		<ComponentGroup Id="Files">
+			<Component Directory="APPLICATIONROOTDIRECTORY">
+				<File
+					Id="litestream.exe"
+					Name="litestream.exe"
+					Source="litestream.exe"
+					KeyPath="yes"
+				/>
+
+				<ServiceInstall
+					Id="InstallService"
+					Name="Litestream"
+					DisplayName="Litestream"
+					Description="Replicates SQLite databases"
+					ErrorControl="normal"
+					Start="auto"
+					Type="ownProcess"
+				>
+					<util:ServiceConfig
+						FirstFailureActionType="restart"
+						SecondFailureActionType="restart"
+						ThirdFailureActionType="restart"
+						RestartServiceDelayInSeconds="60"
+					/>
+					<ServiceDependency Id="wmiApSrv" />
+				</ServiceInstall>
+
+				<ServiceControl
+					Id="ServiceStateControl"
+					Name="Litestream"
+					Remove="uninstall"
+					Start="install"
+					Stop="both"
+				/>
+				<util:EventSource
+					Log="Application"
+					Name="Litestream"
+					EventMessageFile="%SystemRoot%\System32\EventCreate.exe"
+				/>
+			</Component>
+		</ComponentGroup>
+
+		<Feature Id="DefaultFeature" Level="1">
+			<ComponentGroupRef Id="Files" />
+		</Feature>
+	</Product>
+</Wix>
@@ -6,5 +6,5 @@
 # - path: /path/to/primary/db # Database to replicate from
 #   replicas:
 #     - path: /path/to/replica # File-based replication
-#     - path: s3://my.bucket.com/db # S3-based replication
+#     - url: s3://my.bucket.com/db # S3-based replication
 
go.mod (1 addition)
@@ -8,5 +8,6 @@ require (
 	github.com/mattn/go-sqlite3 v1.14.5
 	github.com/pierrec/lz4/v4 v4.1.3
 	github.com/prometheus/client_golang v1.9.0
+	golang.org/x/sys v0.0.0-20201214210602-f9fddec55a1e
 	gopkg.in/yaml.v2 v2.4.0
 )
@@ -36,6 +36,7 @@ const (
 
 // Litestream errors.
 var (
+	ErrNoGeneration     = errors.New("no generation available")
 	ErrNoSnapshots      = errors.New("no snapshots available")
 	ErrChecksumMismatch = errors.New("invalid replica, checksum mismatch")
 )
@@ -151,8 +152,9 @@ func readWALHeader(filename string) ([]byte, error) {
 	return buf[:n], err
 }
 
-// readFileAt reads a slice from a file.
-func readFileAt(filename string, offset, n int64) ([]byte, error) {
+// readWALFileAt reads a slice from a file. Do not use this with database files
+// as it causes problems with non-OFD locks.
+func readWALFileAt(filename string, offset, n int64) ([]byte, error) {
 	f, err := os.Open(filename)
 	if err != nil {
 		return nil, err
replica.go (160 changes)
@@ -4,6 +4,7 @@ import (
 	"context"
 	"encoding/binary"
 	"fmt"
+	"hash/crc64"
 	"io"
 	"io/ioutil"
 	"log"
@@ -31,10 +32,10 @@ type Replica interface {
 	DB() *DB
 
 	// Starts replicating in a background goroutine.
-	Start(ctx context.Context)
+	Start(ctx context.Context) error
 
 	// Stops all replication processing. Blocks until processing stopped.
-	Stop()
+	Stop(hard bool) error
 
 	// Returns the last replication position.
 	LastPos() Pos
@@ -90,6 +91,9 @@ type FileReplica struct {
 	mu sync.RWMutex
 	pos Pos // last position
 
+	muf sync.Mutex
+	f *os.File // long-running file descriptor to avoid non-OFD lock issues
+
 	wg sync.WaitGroup
 	cancel func()
@@ -98,8 +102,11 @@ type FileReplica struct {
 	walIndexGauge prometheus.Gauge
 	walOffsetGauge prometheus.Gauge
 
+	// Frequency to create new snapshots.
+	SnapshotInterval time.Duration
+
 	// Time to keep snapshots and related WAL files.
-	// Database is snapshotted after interval and older WAL files are discarded.
+	// Database is snapshotted after interval, if needed, and older WAL files are discarded.
 	Retention time.Duration
 
 	// Time between checks for retention.
@@ -389,29 +396,45 @@ func (r *FileReplica) WALs(ctx context.Context) ([]*WALInfo, error) {
 }
 
 // Start starts replication for a given generation.
-func (r *FileReplica) Start(ctx context.Context) {
+func (r *FileReplica) Start(ctx context.Context) (err error) {
 	// Ignore if replica is being used synchronously.
 	if !r.MonitorEnabled {
-		return
+		return nil
 	}
 
 	// Stop previous replication.
-	r.Stop()
+	r.Stop(false)
 
 	// Wrap context with cancelation.
 	ctx, r.cancel = context.WithCancel(ctx)
 
 	// Start goroutine to replicate data.
-	r.wg.Add(3)
+	r.wg.Add(4)
 	go func() { defer r.wg.Done(); r.monitor(ctx) }()
 	go func() { defer r.wg.Done(); r.retainer(ctx) }()
+	go func() { defer r.wg.Done(); r.snapshotter(ctx) }()
 	go func() { defer r.wg.Done(); r.validator(ctx) }()
+
+	return nil
 }
 
 // Stop cancels any outstanding replication and blocks until finished.
-func (r *FileReplica) Stop() {
+//
+// Performing a hard stop will close the DB file descriptor which could release
+// per-process locks. Hard stops should only be performed when stopping the
+// entire process.
+func (r *FileReplica) Stop(hard bool) (err error) {
 	r.cancel()
 	r.wg.Wait()
+
+	r.muf.Lock()
+	defer r.muf.Unlock()
+	if hard && r.f != nil {
+		if e := r.f.Close(); e != nil && err == nil {
+			err = e
+		}
+	}
+	return err
 }
 
 // monitor runs in a separate goroutine and continuously replicates the DB.
@@ -446,7 +469,18 @@ func (r *FileReplica) monitor(ctx context.Context) {
 
 // retainer runs in a separate goroutine and handles retention.
 func (r *FileReplica) retainer(ctx context.Context) {
-	ticker := time.NewTicker(r.RetentionCheckInterval)
+	// Disable retention enforcement if retention period is non-positive.
+	if r.Retention <= 0 {
+		return
+	}
+
+	// Ensure check interval is not longer than retention period.
+	checkInterval := r.RetentionCheckInterval
+	if checkInterval > r.Retention {
+		checkInterval = r.Retention
+	}
+
+	ticker := time.NewTicker(checkInterval)
 	defer ticker.Stop()
 
 	for {
@@ -462,6 +496,28 @@ func (r *FileReplica) retainer(ctx context.Context) {
 	}
 }
 
+// snapshotter runs in a separate goroutine and handles snapshotting.
+func (r *FileReplica) snapshotter(ctx context.Context) {
+	if r.SnapshotInterval <= 0 {
+		return
+	}
+
+	ticker := time.NewTicker(r.SnapshotInterval)
+	defer ticker.Stop()
+
+	for {
+		select {
+		case <-ctx.Done():
+			return
+		case <-ticker.C:
+			if err := r.Snapshot(ctx); err != nil && err != ErrNoGeneration {
+				log.Printf("%s(%s): snapshotter error: %s", r.db.Path(), r.Name(), err)
+				continue
+			}
+		}
+	}
+}
+
 // validator runs in a separate goroutine and handles periodic validation.
 func (r *FileReplica) validator(ctx context.Context) {
 	// Initialize counters since validation occurs infrequently.
@@ -531,8 +587,23 @@ func (r *FileReplica) CalcPos(ctx context.Context, generation string) (pos Pos,
 	return pos, nil
 }
 
+// Snapshot copies the entire database to the replica path.
+func (r *FileReplica) Snapshot(ctx context.Context) error {
+	// Find current position of database.
+	pos, err := r.db.Pos()
+	if err != nil {
+		return fmt.Errorf("cannot determine current db generation: %w", err)
+	} else if pos.IsZero() {
+		return ErrNoGeneration
+	}
+	return r.snapshot(ctx, pos.Generation, pos.Index)
+}
+
 // snapshot copies the entire database to the replica path.
 func (r *FileReplica) snapshot(ctx context.Context, generation string, index int) error {
+	r.muf.Lock()
+	defer r.muf.Unlock()
+
 	// Acquire a read lock on the database during snapshot to prevent checkpoints.
 	tx, err := r.db.db.Begin()
 	if err != nil {
@@ -553,11 +624,50 @@ func (r *FileReplica) snapshot(ctx context.Context, generation string, index int
 
 	if err := mkdirAll(filepath.Dir(snapshotPath), r.db.dirmode, r.db.diruid, r.db.dirgid); err != nil {
 		return err
-	} else if err := compressFile(r.db.Path(), snapshotPath, r.db.uid, r.db.gid); err != nil {
+	}
+
+	// Open db file descriptor, if not already open.
+	if r.f == nil {
+		if r.f, err = os.Open(r.db.Path()); err != nil {
+			return err
+		}
+	}
+
+	if _, err := r.f.Seek(0, io.SeekStart); err != nil {
 		return err
 	}
 
-	log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime))
+	fi, err := r.f.Stat()
+	if err != nil {
+		return err
+	}
+
+	w, err := createFile(snapshotPath+".tmp", fi.Mode(), r.db.uid, r.db.gid)
+	if err != nil {
+		return err
+	}
+	defer w.Close()
+
+	zr := lz4.NewWriter(w)
+	defer zr.Close()
+
+	// Copy & compress file contents to temporary file.
+	if _, err := io.Copy(zr, r.f); err != nil {
+		return err
+	} else if err := zr.Close(); err != nil {
+		return err
+	} else if err := w.Sync(); err != nil {
+		return err
+	} else if err := w.Close(); err != nil {
+		return err
+	}
+
+	// Move compressed file to final location.
+	if err := os.Rename(snapshotPath+".tmp", snapshotPath); err != nil {
+		return err
+	}
+
+	log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime).Truncate(time.Millisecond))
 	return nil
 }
 
@@ -756,7 +866,7 @@ func (r *FileReplica) compress(ctx context.Context, generation string) error {
 		}
 
 		dst := filename + ".lz4"
-		if err := compressFile(filename, dst, r.db.uid, r.db.gid); err != nil {
+		if err := compressWALFile(filename, dst, r.db.uid, r.db.gid); err != nil {
 			return err
 		} else if err := os.Remove(filename); err != nil {
 			return err
@@ -1002,8 +1112,9 @@ func WALIndexAt(ctx context.Context, r Replica, generation string, maxIndex int,
 	return index, nil
 }
 
-// compressFile compresses a file and replaces it with a new file with a .lz4 extension.
-func compressFile(src, dst string, uid, gid int) error {
+// compressWALFile compresses a file and replaces it with a new file with a .lz4 extension.
+// Do not use this on database files because of issues with non-OFD locks.
+func compressWALFile(src, dst string, uid, gid int) error {
 	r, err := os.Open(src)
 	if err != nil {
 		return err
@@ -1053,7 +1164,6 @@ func ValidateReplica(ctx context.Context, r Replica) error {
 
 	// Compute checksum of primary database under lock. This prevents a
 	// sync from occurring and the database will not be written.
-	primaryPath := filepath.Join(tmpdir, "primary")
 	chksum0, pos, err := db.CRC64()
 	if err != nil {
 		return fmt.Errorf("cannot compute checksum: %w", err)
@@ -1076,10 +1186,19 @@ func ValidateReplica(ctx context.Context, r Replica) error {
 	}
 
 	// Open file handle for restored database.
-	chksum1, err := checksumFile(restorePath)
+	// NOTE: This open is ok as the restored database is not managed by litestream.
+	f, err := os.Open(restorePath)
 	if err != nil {
 		return err
 	}
+	defer f.Close()
+
+	// Read entire file into checksum.
+	h := crc64.New(crc64.MakeTable(crc64.ISO))
+	if _, err := io.Copy(h, f); err != nil {
+		return err
+	}
+	chksum1 := h.Sum64()
+
 	status := "ok"
 	mismatch := chksum0 != chksum1
@@ -1091,15 +1210,6 @@ func ValidateReplica(ctx context.Context, r Replica) error {
 	// Validate checksums match.
 	if mismatch {
 		internal.ReplicaValidationTotalCounterVec.WithLabelValues(db.Path(), r.Name(), "error").Inc()
-
-		// Compress mismatched databases and report temporary path for investigation.
-		if err := compressFile(primaryPath, primaryPath+".lz4", db.uid, db.gid); err != nil {
-			return fmt.Errorf("cannot compress primary db: %w", err)
-		} else if err := compressFile(restorePath, restorePath+".lz4", db.uid, db.gid); err != nil {
-			return fmt.Errorf("cannot compress replica db: %w", err)
-		}
-		log.Printf("%s(%s): validator: mismatch files @ %s", db.Path(), r.Name(), tmpdir)
-
 		return ErrChecksumMismatch
 	}
 

180	s3/s3.go
@@ -7,8 +7,10 @@ import (
 	"io"
 	"io/ioutil"
 	"log"
+	"net"
 	"os"
 	"path"
+	"regexp"
 	"sync"
 	"time"
 
@@ -50,6 +52,9 @@ type Replica struct {
 	snapshotMu sync.Mutex
 	pos        litestream.Pos // last position
 
+	muf sync.Mutex
+	f   *os.File // long-lived read-only db file descriptor
+
 	wg     sync.WaitGroup
 	cancel func()
 
@@ -72,10 +77,15 @@ type Replica struct {
 	Region string
 	Bucket string
 	Path   string
+	Endpoint       string
+	ForcePathStyle bool
+
 	// Time between syncs with the shadow WAL.
 	SyncInterval time.Duration
 
+	// Frequency to create new snapshots.
+	SnapshotInterval time.Duration
+
 	// Time to keep snapshots and related WAL files.
 	// Database is snapshotted after interval and older WAL files are discarded.
 	Retention time.Duration
@@ -410,29 +420,47 @@ func (r *Replica) WALs(ctx context.Context) ([]*litestream.WALInfo, error) {
 }
 
 // Start starts replication for a given generation.
-func (r *Replica) Start(ctx context.Context) {
+func (r *Replica) Start(ctx context.Context) (err error) {
 	// Ignore if replica is being used sychronously.
 	if !r.MonitorEnabled {
-		return
+		return nil
 	}
 
 	// Stop previous replication.
-	r.Stop()
+	r.Stop(false)
 
 	// Wrap context with cancelation.
 	ctx, r.cancel = context.WithCancel(ctx)
 
 	// Start goroutines to manage replica data.
-	r.wg.Add(3)
+	r.wg.Add(4)
 	go func() { defer r.wg.Done(); r.monitor(ctx) }()
 	go func() { defer r.wg.Done(); r.retainer(ctx) }()
+	go func() { defer r.wg.Done(); r.snapshotter(ctx) }()
 	go func() { defer r.wg.Done(); r.validator(ctx) }()
+
+	return nil
 }
 
 // Stop cancels any outstanding replication and blocks until finished.
-func (r *Replica) Stop() {
+//
+// Performing a hard stop will close the DB file descriptor which could release
+// locks on per-process locks. Hard stops should only be performed when
+// stopping the entire process.
+func (r *Replica) Stop(hard bool) (err error) {
 	r.cancel()
 	r.wg.Wait()
+
+	r.muf.Lock()
+	defer r.muf.Unlock()
+
+	if hard && r.f != nil {
+		if e := r.f.Close(); e != nil && err == nil {
+			err = e
+		}
+	}
+
+	return err
 }
 
 // monitor runs in a separate goroutine and continuously replicates the DB.
@@ -475,7 +503,18 @@ func (r *Replica) monitor(ctx context.Context) {
 
 // retainer runs in a separate goroutine and handles retention.
 func (r *Replica) retainer(ctx context.Context) {
-	ticker := time.NewTicker(r.RetentionCheckInterval)
+	// Disable retention enforcement if retention period is non-positive.
+	if r.Retention <= 0 {
+		return
+	}
+
+	// Ensure check interval is not longer than retention period.
+	checkInterval := r.RetentionCheckInterval
+	if checkInterval > r.Retention {
+		checkInterval = r.Retention
+	}
+
+	ticker := time.NewTicker(checkInterval)
 	defer ticker.Stop()
 
 	for {
@@ -491,6 +530,28 @@ func (r *Replica) retainer(ctx context.Context) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// snapshotter runs in a separate goroutine and handles snapshotting.
|
||||||
|
func (r *Replica) snapshotter(ctx context.Context) {
|
||||||
|
if r.SnapshotInterval <= 0 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
ticker := time.NewTicker(r.SnapshotInterval)
|
||||||
|
defer ticker.Stop()
|
||||||
|
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-ctx.Done():
|
||||||
|
return
|
||||||
|
case <-ticker.C:
|
||||||
|
if err := r.Snapshot(ctx); err != nil && err != litestream.ErrNoGeneration {
|
||||||
|
log.Printf("%s(%s): snapshotter error: %s", r.db.Path(), r.Name(), err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// validator runs in a separate goroutine and handles periodic validation.
|
// validator runs in a separate goroutine and handles periodic validation.
|
||||||
func (r *Replica) validator(ctx context.Context) {
|
func (r *Replica) validator(ctx context.Context) {
|
||||||
// Initialize counters since validation occurs infrequently.
|
// Initialize counters since validation occurs infrequently.
|
||||||
@@ -568,8 +629,23 @@ func (r *Replica) CalcPos(ctx context.Context, generation string) (pos litestrea
 	return pos, nil
 }
 
+// Snapshot copies the entire database to the replica path.
+func (r *Replica) Snapshot(ctx context.Context) error {
+	// Find current position of database.
+	pos, err := r.db.Pos()
+	if err != nil {
+		return fmt.Errorf("cannot determine current db generation: %w", err)
+	} else if pos.IsZero() {
+		return litestream.ErrNoGeneration
+	}
+	return r.snapshot(ctx, pos.Generation, pos.Index)
+}
+
 // snapshot copies the entire database to the replica path.
 func (r *Replica) snapshot(ctx context.Context, generation string, index int) error {
+	r.muf.Lock()
+	defer r.muf.Unlock()
+
 	// Acquire a read lock on the database during snapshot to prevent checkpoints.
 	tx, err := r.db.SQLDB().Begin()
 	if err != nil {
@@ -580,14 +656,21 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
 	}
 	defer func() { _ = tx.Rollback() }()
 
-	// Open database file handle.
-	f, err := os.Open(r.db.Path())
-	if err != nil {
-		return err
-	}
-	defer f.Close()
-
-	fi, err := f.Stat()
+	// Open long-lived file descriptor on database.
+	if r.f == nil {
+		if r.f, err = os.Open(r.db.Path()); err != nil {
+			return err
+		}
+	}
+
+	// Move the file descriptor to the beginning. We only use one long lived
+	// file descriptor because some operating systems will remove the database
+	// lock when closing a separate file descriptor on the DB.
+	if _, err := r.f.Seek(0, io.SeekStart); err != nil {
+		return err
+	}
+
+	fi, err := r.f.Stat()
 	if err != nil {
 		return err
 	}
@@ -595,7 +678,7 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
 	pr, pw := io.Pipe()
 	zw := lz4.NewWriter(pw)
 	go func() {
-		if _, err := io.Copy(zw, f); err != nil {
+		if _, err := io.Copy(zw, r.f); err != nil {
 			_ = pw.CloseWithError(err)
 			return
 		}
@@ -616,8 +699,7 @@ func (r *Replica) snapshot(ctx context.Context, generation string, index int) er
 	r.putOperationTotalCounter.Inc()
 	r.putOperationBytesCounter.Add(float64(fi.Size()))
 
-	log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime))
-
+	log.Printf("%s(%s): snapshot: creating %s/%08x t=%s", r.db.Path(), r.Name(), generation, index, time.Since(startTime).Truncate(time.Millisecond))
 	return nil
 }
 
@@ -646,17 +728,25 @@ func (r *Replica) Init(ctx context.Context) (err error) {
 		return nil
 	}
 
-	// Look up region if not specified.
+	// Look up region if not specified and no endpoint is used.
+	// Endpoints are typically used for non-S3 object stores and do not
+	// necessarily require a region.
 	region := r.Region
 	if region == "" {
+		if r.Endpoint == "" {
 		if region, err = r.findBucketRegion(ctx, r.Bucket); err != nil {
 			return fmt.Errorf("cannot lookup bucket region: %w", err)
 		}
+		} else {
+			region = "us-east-1" // default for non-S3 object stores
+		}
 	}
 
 	// Create new AWS session.
 	config := r.config()
+	if region != "" {
 	config.Region = aws.String(region)
+	}
 	sess, err := session.NewSession(config)
 	if err != nil {
 		return fmt.Errorf("cannot create aws session: %w", err)
@@ -673,6 +763,12 @@ func (r *Replica) config() *aws.Config {
 	if r.AccessKeyID != "" || r.SecretAccessKey != "" {
 		config.Credentials = credentials.NewStaticCredentials(r.AccessKeyID, r.SecretAccessKey, "")
 	}
+	if r.Endpoint != "" {
+		config.Endpoint = aws.String(r.Endpoint)
+	}
+	if r.ForcePathStyle {
+		config.S3ForcePathStyle = aws.Bool(r.ForcePathStyle)
+	}
 	return config
 }
 
@@ -1027,6 +1123,60 @@ func (r *Replica) deleteGenerationBefore(ctx context.Context, generation string,
 	return nil
 }
 
+// ParseHost extracts data from a hostname depending on the service provider.
+func ParseHost(s string) (bucket, region, endpoint string, forcePathStyle bool) {
+	// Extract port if one is specified.
+	host, port, err := net.SplitHostPort(s)
+	if err != nil {
+		host = s
+	}
+
+	// Default to path-based URLs, except for with AWS S3 itself.
+	forcePathStyle = true
+
+	// Extract fields from provider-specific host formats.
+	scheme := "https"
+	if a := localhostRegex.FindStringSubmatch(host); a != nil {
+		bucket, region = a[1], "us-east-1"
+		scheme, endpoint = "http", "localhost"
+	} else if a := gcsRegex.FindStringSubmatch(host); a != nil {
+		bucket, region = a[1], "us-east-1"
+		endpoint = "storage.googleapis.com"
+	} else if a := digitalOceanRegex.FindStringSubmatch(host); a != nil {
+		bucket, region = a[1], a[2]
+		endpoint = fmt.Sprintf("%s.digitaloceanspaces.com", region)
+	} else if a := linodeRegex.FindStringSubmatch(host); a != nil {
+		bucket, region = a[1], a[2]
+		endpoint = fmt.Sprintf("%s.linodeobjects.com", region)
+	} else if a := backblazeRegex.FindStringSubmatch(host); a != nil {
+		bucket, region = a[1], a[2]
+		endpoint = fmt.Sprintf("s3.%s.backblazeb2.com", region)
+	} else {
+		bucket = host
+		forcePathStyle = false
+	}
+
+	// Add port back to endpoint, if available.
+	if endpoint != "" && port != "" {
+		endpoint = net.JoinHostPort(endpoint, port)
+	}
+
+	// Prepend scheme to endpoint.
+	if endpoint != "" {
+		endpoint = scheme + "://" + endpoint
+	}
+
+	return bucket, region, endpoint, forcePathStyle
+}
+
+var (
+	localhostRegex    = regexp.MustCompile(`^(?:(.+)\.)?localhost$`)
+	digitalOceanRegex = regexp.MustCompile(`^(?:(.+)\.)?([^.]+)\.digitaloceanspaces.com$`)
+	linodeRegex       = regexp.MustCompile(`^(?:(.+)\.)?([^.]+)\.linodeobjects.com$`)
+	backblazeRegex    = regexp.MustCompile(`^(?:(.+)\.)?s3.([^.]+)\.backblazeb2.com$`)
+	gcsRegex          = regexp.MustCompile(`^(?:(.+)\.)?storage.googleapis.com$`)
+)
+
 // S3 metrics.
 var (
 	operationTotalCounterVec = promauto.NewCounterVec(prometheus.CounterOpts{
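ParseHost above dispatches on provider-specific hostname shapes with regular expressions: an optional `<bucket>.` prefix captured by `(?:(.+)\.)?`, then a fixed provider suffix with the region captured by `([^.]+)` where applicable. To illustrate how one of these patterns decomposes a host, here is the Backblaze pattern (copied verbatim from the diff) run on the same host the new test file uses:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pattern copied from the diff: optional "<bucket>." prefix followed by
// "s3.<region>.backblazeb2.com".
var backblazeRegex = regexp.MustCompile(`^(?:(.+)\.)?s3.([^.]+)\.backblazeb2.com$`)

func main() {
	a := backblazeRegex.FindStringSubmatch("test-123.s3.us-west-000.backblazeb2.com")
	// a[1] is the bucket capture, a[2] is the region capture.
	fmt.Println(a[1], a[2]) // → test-123 us-west-000
}
```

Because the bucket prefix group is optional, a bare `s3.<region>.backblazeb2.com` host still matches with an empty bucket capture.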
80	s3/s3_test.go (new file)
@@ -0,0 +1,80 @@
+package s3_test
+
+import (
+	"testing"
+
+	"github.com/benbjohnson/litestream/s3"
+)
+
+func TestParseHost(t *testing.T) {
+	// Ensure non-specific hosts return as buckets.
+	t.Run("S3", func(t *testing.T) {
+		bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.litestream.io`)
+		if got, want := bucket, `test.litestream.io`; got != want {
+			t.Fatalf("bucket=%q, want %q", got, want)
+		} else if got, want := region, ``; got != want {
+			t.Fatalf("region=%q, want %q", got, want)
+		} else if got, want := endpoint, ``; got != want {
+			t.Fatalf("endpoint=%q, want %q", got, want)
+		} else if got, want := forcePathStyle, false; got != want {
+			t.Fatalf("forcePathStyle=%v, want %v", got, want)
+		}
+	})
+
+	// Ensure localhosts use an HTTP endpoint and extract the bucket name.
+	t.Run("Localhost", func(t *testing.T) {
+		t.Run("WithPort", func(t *testing.T) {
+			bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.localhost:9000`)
+			if got, want := bucket, `test`; got != want {
+				t.Fatalf("bucket=%q, want %q", got, want)
+			} else if got, want := region, `us-east-1`; got != want {
+				t.Fatalf("region=%q, want %q", got, want)
+			} else if got, want := endpoint, `http://localhost:9000`; got != want {
+				t.Fatalf("endpoint=%q, want %q", got, want)
+			} else if got, want := forcePathStyle, true; got != want {
+				t.Fatalf("forcePathStyle=%v, want %v", got, want)
+			}
+		})
+
+		t.Run("WithoutPort", func(t *testing.T) {
+			bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test.localhost`)
+			if got, want := bucket, `test`; got != want {
+				t.Fatalf("bucket=%q, want %q", got, want)
+			} else if got, want := region, `us-east-1`; got != want {
+				t.Fatalf("region=%q, want %q", got, want)
+			} else if got, want := endpoint, `http://localhost`; got != want {
+				t.Fatalf("endpoint=%q, want %q", got, want)
+			} else if got, want := forcePathStyle, true; got != want {
+				t.Fatalf("forcePathStyle=%v, want %v", got, want)
+			}
+		})
+	})
+
+	// Ensure backblaze B2 URLs extract bucket, region, & endpoint from host.
+	t.Run("Backblaze", func(t *testing.T) {
+		bucket, region, endpoint, forcePathStyle := s3.ParseHost(`test-123.s3.us-west-000.backblazeb2.com`)
+		if got, want := bucket, `test-123`; got != want {
+			t.Fatalf("bucket=%q, want %q", got, want)
+		} else if got, want := region, `us-west-000`; got != want {
+			t.Fatalf("region=%q, want %q", got, want)
+		} else if got, want := endpoint, `https://s3.us-west-000.backblazeb2.com`; got != want {
+			t.Fatalf("endpoint=%q, want %q", got, want)
+		} else if got, want := forcePathStyle, true; got != want {
+			t.Fatalf("forcePathStyle=%v, want %v", got, want)
+		}
+	})
+
+	// Ensure GCS URLs extract bucket & endpoint from host.
+	t.Run("GCS", func(t *testing.T) {
+		bucket, region, endpoint, forcePathStyle := s3.ParseHost(`litestream.io.storage.googleapis.com`)
+		if got, want := bucket, `litestream.io`; got != want {
+			t.Fatalf("bucket=%q, want %q", got, want)
+		} else if got, want := region, `us-east-1`; got != want {
+			t.Fatalf("region=%q, want %q", got, want)
+		} else if got, want := endpoint, `https://storage.googleapis.com`; got != want {
+			t.Fatalf("endpoint=%q, want %q", got, want)
+		} else if got, want := forcePathStyle, true; got != want {
+			t.Fatalf("forcePathStyle=%v, want %v", got, want)
+		}
+	})
+}