docs: update Docker tutorial

Add documentation for CLI container.

kris7t committed Jul 18, 2024 · 1 parent 2a5f3dc · commit cd19c3c
Showing 2 changed files with 183 additions and 20 deletions.
subprojects/docs/src/learn/docker/cli.md (new file, 160 additions)
---
SPDX-FileCopyrightText: 2024 The Refinery Authors
SPDX-License-Identifier: EPL-2.0
sidebar_position: 1
sidebar_label: CLI
---

# Command-line interface

You can run Refinery as a command-line application via our [Docker container](https://github.com/graphs4value/refinery/pkgs/container/refinery-cli) on either `amd64` or `arm64` machines:

```shell
docker run --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@
```

This will let you read input files and generate models in the current directory (`${PWD}`) of your terminal session.
Module imports (e.g., `import some::module.` to import `some/module.refinery`) relative to the current directory are also supported.
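For instance, the `some::module` import mentioned above resolves against the mounted current directory, so a sketch of an input file using it might look like this (the module contents are hypothetical):

```refinery title="input.problem"
% Resolved as some/module.refinery relative to the current directory.
import some::module.
```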

For example, to generate a model based on the file named `input.problem` in the current directory and write the results into the file named `output.refinery`, you may run the [`generate` subcommand](#generate) with

```shell
docker run --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ generate -o output.refinery input.problem
```

If you want Refinery CLI to print its documentation, run

```shell
docker run --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ -help
```

## The `generate` subcommand {#generate}

The `generate` subcommand generates a consistent concrete model from a partial model.

```shell
docker run --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ generate [options] input path
```

The `input path` should be a path to a `.problem` file relative to the current directory.
Due to Docker containerization, paths _outside_ the current directory (e.g., `../input.problem`) are not supported.

Passing `-` as the `input path` will read a partial model from the standard input.
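As a sketch, the standard-input form can be combined with shell piping. Note the use of `-i` without `-t`: this is standard `docker run` behavior for forwarding piped input instead of allocating a terminal, not a Refinery-specific option:

```shell
cat input.problem | docker run --rm -i -v ${PWD}:/data \
  ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ \
  generate -o output.refinery -
```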

By default, the generator is _deterministic_ and always outputs the same concrete model. See the [`-random-seed`](#generate-random-seed) option to customize this behavior.

See below for the list of supported `[options]`.

### `-output`, `-o` {#generate-output}

The output path for the generated model.
Passing `-o -` will write the generated model to the standard output.

When generating multiple models with [`-solution-number`](#generate-solution-number), the value `-` is not supported and individual solutions will be saved to numbered files.
For example, if you pass `-o output.refinery -n 10`, solutions will be saved as `output_001.refinery`, `output_002.refinery`, ..., `output_010.refinery`.
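The zero-padded naming pattern can be reproduced with `printf`; this is only a sketch of the scheme implied by the example file names above, not Refinery code:

```shell
# Reproduce the numbered-file naming for `-o output.refinery -n 10`.
base=output
for i in 1 2 10; do
  printf '%s_%03d.refinery\n' "$base" "$i"
done
# Prints:
# output_001.refinery
# output_002.refinery
# output_010.refinery
```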

**Default value:** `-`, i.e., the solution is written to the standard output.

### `-random-seed`, `-r` {#generate-random-seed}

Random seed to control the behavior of model generation.

Running the same Refinery release with the same random seed value will produce the same output model for a given input problem.
Models generated with different values of `-random-seed` are highly likely (but are not guaranteed) to be _substantially_ different.

**Default value:** `1`

### `-scope`, `-s` {#generate-scope}

Add [scope constraints](/learn/language/logic#type-scopes) to the input problem.

This option is especially useful if you want to generate models of multiple sizes from the same partial model.

For example, the command

```shell
docker run --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ generate -s File=20..25 input.problem
```

is equivalent to appending

```refinery title="input.problem"
scope File = 20..25.
```

to `input.problem`.
The syntax of the argument is equivalent to the [`scope`](/learn/language/logic#type-scopes) declaration, but you should be careful with how your shell handles spaces.
Any number of `-s` arguments are supported. For example, the following argument lists are equivalent:

```shell
-scope File=20..25,Directory=3
-s File=20..25,Directory=3
-s File=20..25 -s Directory=3
-s "File = 20..25, Directory = 3"
-s "File = 20..25" -s "Directory = 3"
```

The `*` operator also has to be quoted to avoid shell expansion:

```shell
-s "File=20..*"
```
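To see why the quoting matters, here is a small self-contained sketch (plain shell behavior, unrelated to Refinery itself) showing how an unquoted `*` can be expanded against matching file names before the argument ever reaches the program:

```shell
# Demonstrate shell glob expansion of an unquoted `*`.
demo=$(mktemp -d)
cd "$demo"
touch 'File=20..5'            # a file name that happens to match the pattern
unquoted=$(echo File=20..*)   # the shell expands the glob first
quoted=$(echo "File=20..*")   # quoting preserves the literal argument
echo "$unquoted"   # File=20..5
echo "$quoted"     # File=20..*
```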

### `-scope-override`, `-S` {#generate-scope-override}

Override [scope constraints](/learn/language/logic#type-scopes) in the input problem.

This argument is similar to [`-scope`](#generate-scope), but has higher precedence than the [`scope`](/learn/language/logic#type-scopes) declarations already present in the input file.
However, you can't override `scope` declarations in modules imported in the input file using the `import` statement.

For example, if we have

```refinery title="input.problem"
scope File = 20..25, Directory = 3.
```

in the input file, the arguments `-s File=10..12 input.problem` will be interpreted as

```refinery
scope File = 20..25, Directory = 3.
scope File = 10..12.
```

which results in an _unsatisfiable_ problem. If we use `-S File=10..12 input.problem` instead, the type scope for `File` is overridden as

```refinery
scope Directory = 3.
scope File = 10..12.
```

and model generation can proceed as requested. Since we specified no override for `Directory`, its type scope declared in `input.problem` is preserved.

Scope overrides do not override additional scopes, i.e., `-s File=20..30 -S File=10..25` is interpreted as `-S File=20..25`.

### `-solution-number`, `-n` {#generate-solution-number}

The number of distinct solutions to generate.

Generated solutions are always different, but are frequently not _substantially_ different, i.e., the differences between generated solutions comprise only a few model elements.
You'll likely generate substantially different models by calling the generator multiple times with different [`-random-seed`](#generate-random-seed) values instead.
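For example, a sketch of generating three likely-distinct models by varying the seed (the `model_1.refinery` style output names are illustrative, not prescribed by Refinery):

```shell
for seed in 1 2 3; do
  docker run --rm -v ${PWD}:/data \
    ghcr.io/graphs4value/refinery-cli:@@@tools.refinery.release@@@ \
    generate -r "$seed" -o "model_$seed.refinery" input.problem
done
```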

The generator will create [numbered output files](#generate-output) for each solution found.
The generation is considered successful if it finds at least one solution, but it may find fewer than the requested number of solutions if no more exist.
In this case, there will be fewer output files than requested.

**Default value:** `1`

## Pre-release versions

You can take advantage of the most recent code submitted to our repository by using the `latest` tag instead.


```shell
docker run --pull always --rm -it -v ${PWD}:/data ghcr.io/graphs4value/refinery-cli:latest
```

:::warning

Pre-release versions may be unstable.

:::
Second changed file:

```shell
docker run --rm -it -p 8888:8888 ghcr.io/graphs4value/refinery:@@@tools.refinery.release@@@
```

Once Docker pulls and starts the container, you can navigate to http://localhost:8888 to open the model generation interface and start editing.

A [command-line interface (CLI)](cli) version of Refinery is also available as a Docker container.

Alternatively, you can follow the [instructions to set up a local development environment](/develop/contributing) and compile and run Refinery from source.

## Environment variables

The Docker container supports the following environment variables to customize its behavior.
Customizing these variables should only be needed if you want to _increase resource limits_ or _expose your Refinery instance over the network_ for others.

Notes for **local-only instances** are highlighted with the :arrow_right: arrow emoji.
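
As a sketch (this relies on standard `docker run` behavior, not a Refinery-specific feature), the variables are passed to the container with the `-e` option; `REFINERY_XTEXT_THREAD_COUNT` from below is used as an example:

```shell
docker run --rm -it -p 8888:8888 \
  -e REFINERY_XTEXT_THREAD_COUNT=2 \
  ghcr.io/graphs4value/refinery:@@@tools.refinery.release@@@
```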

Timeout for model generation in seconds.

### Threading

:arrow_right: If you only run a single model generation task at a time, you don't need to adjust these settings.

:warning: Excessively large thread counts may overload the server. Make sure that _all_ Refinery threads can run at the same time to avoid thread starvation.

#### `REFINERY_XTEXT_THREAD_COUNT`

Number of threads used for non-blocking text editing operations. A value of `0` allows an _unlimited_ number of threads by running each semantics calculation in a new thread.

**Default value:** `1`

#### `REFINERY_XTEXT_LOCKING_THREAD_COUNT`

Number of threads used for text editing operations that lock the document. A value of `0` allows an _unlimited_ number of threads by running each semantics calculation in a new thread.


**Default value:** equal to `REFINERY_XTEXT_THREAD_COUNT`

#### `REFINERY_XTEXT_SEMANTICS_THREAD_COUNT`
Number of threads used for model semantics calculation. A value of `0` allows an _unlimited_ number of threads by running each semantics calculation in a new thread.

Must be at least as large as `REFINERY_XTEXT_THREAD_COUNT`.

**Default value:** equal to `REFINERY_XTEXT_THREAD_COUNT`

#### `REFINERY_MODEL_GENERATION_THREAD_COUNT`

Number of threads used for model generation. A value of `0` allows an _unlimited_ number of threads by running each model generation task in a new thread.

:warning: Each model generation task may also demand a large amount of memory in addition to CPU time.

**Default value:** equal to `REFINERY_XTEXT_THREAD_COUNT`

Modules (`.refinery` files) in this directory or colon-separated list of directories are exposed to users of your instance.

:arrow_right: Use this in conjunction with the [mount volume (-v)](https://docs.docker.com/reference/cli/docker/container/run/#volume) option of `docker run` to work with multi-file projects in Refinery.

:warning: Only expose files that you want to make public. It's best to expose a directory that contains nothing other than `.refinery` files to minimize potential information leaks.

**Default value:** _empty_ (no directories are exposed)

## Pre-release versions

You can take advantage of the most recent code submitted to our repository by using the `latest` tag instead.


```shell
docker run --pull always --rm -it -p 8888:8888 ghcr.io/graphs4value/refinery:latest
```

:::warning

Pre-release versions may be unstable.

:::
