A well-known name has been chosen for some services to make attaching
a bit easier. This is typically just the service name with an `er`
suffix.
Many services also have a pty built in. This means at any point you
can send `saneTerm` or `resizeTerm` to poke the pty.
The following sections contain whatever information we could scrounge
together for you about our services (and middleware). If you're
writing a new service, you should really document it thoroughly here.
The examples use the conman repl, so you can try them out immediately
(see the main README for documentation on the conman repl, or just run
`make conman-repl-run LANGUAGE=python3`).
audio
This implements audio.
Data is one-directional: attaching to the channel allows you to receive messages from the service. To request that a file is played, a request must be written to a named pipe, `/tmp/audio`. Only `.aiff` or `.wav` files at 44,100 Hz are supported.
Requests to play files are formatted like so:

```
{
  "File": "/path/to/file", // The path to the audio file.
  "Volume": 1.5,           // A float64 value determining audio volume. 1 is the file's native volume. Defaults to 1.
  "DoesLoop": false,       // Whether the file should be repeated or not.
  "LoopCount": -1          // How many times to repeat the file. Set to a negative value to create an endless loop.
}
```
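From inside the repl, user code can construct such a request and write it to the named pipe. A minimal sketch (the helper name and file path are placeholders; opening a named pipe for writing may block until the service reads from the other end):

```python
import json

def play_request(path, volume=1.0, does_loop=False, loop_count=0):
    """Serialize a play request in the format described above."""
    return json.dumps({
        "File": path,
        "Volume": volume,
        "DoesLoop": does_loop,
        "LoopCount": loop_count,
    })

# Inside the repl, writing the request asks the service to play the file:
#   with open("/tmp/audio", "w") as pipe:
#       pipe.write(play_request("/home/runner/song.wav"))
```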
The user's code can determine what is being played by reading `/tmp/audioStatus.json`, formatted as below.

```
{
  "Sources": [],    // A list of known sources.
  "Running": false, // Whether or not anything is playing. true means sources are being played.
  "Disabled": false // Whether or not the sound program is running. Should only be true if the repl is stopped.
}
```
Sources are formatted like so:

```
{
  "Name": "file/name",    // The path to the file.
  "FileType": "wav/aiff", // The type of the file.
  "Volume": 1.5,          // The provided volume.
  "Duration": 123,        // The estimated duration of the file in milliseconds.
  "Remaining": 122,       // The estimated time remaining in milliseconds.
  "Paused": false,
  "Loop": 0,              // How many more times the source will repeat.
                          // Negative values mean it will repeat endlessly.
  "ID": 1,                // The ID of the source.
  "EndTime": "2020-04-23T22:30:46.486250072Z",  // Estimated time for the file to finish playing.
  "StartTime": "2020-04-23T22:30:46.486250072Z" // When the file started playing.
}
```
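As an illustration, user code could poll `/tmp/audioStatus.json` to decide whether anything is playing. This helper only inspects the top-level fields described above (the function name is hypothetical, not part of the service):

```python
import json

def is_playing(status_json):
    """True when the sound program is up and sources are being played."""
    status = json.loads(status_json)
    return bool(status.get("Running")) and not status.get("Disabled", False)

# In the repl this would read the real file:
#   with open("/tmp/audioStatus.json") as f:
#       print(is_playing(f.read()))
```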
To pause, change the remaining repeats, or change the volume, simply write to `/tmp/audio` with a request formatted like so:

```
{
  "ID": 1,           // The ID of the source.
  "Volume": 1,       // The volume for the source to be played at.
  "Paused": false,   // Whether the file is paused or not.
  "DoesLoop": false, // Whether the file should be repeated or not.
  "LoopCount": -1    // How many times to repeat the file. Set to a negative value to create an endless loop.
}
```
The server reads from `/tmp/audio` and handles requests (formatted as shown above). If a request is invalid, it is ignored.
Upon a valid request, the service determines whether the request adds a new decoder or edits a pre-existing source by checking if an ID was provided.
If an ID is provided and the decoder exists, it's edited as requested.
If an ID wasn't provided, a new decoder is added.
A decoder is the interface the service uses to read and process audio data.
`decoder.read()` reads from the file via its underlying decoder. It determines the bit depth, then (assuming the bit depth isn't 32) converts the samples to 32-bit ints, combines the channels into one, and amplifies the volume if the volume is not 1.
Upon being added, if `handleAudio` isn't running, it's started.
sendAudio and handleAudio
Note: assume n is the sample size of the audio.
`sendAudio` gets the data from the decoders and mixes it. After mixing the audio, it's sent to the client in messages formatted like so:
```
{
  "Command": {
    "Audio": {
      "Data": [...sample] // The pcm data in its final form.
    }
  }
}
```
`handleAudio` times when audio should be sent, and calls `sendAudio`.
To ensure that audio never breaks up, while sacrificing as little latency as possible, `handleAudio` times `sendAudio` and subtracts the elapsed time from n/44100 seconds. If the result is negative, `handleAudio` immediately sends again. Otherwise, `handleAudio` waits the resulting time before sending audio again. This ensures that the user always has n to n*2 samples available.
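The pacing described above amounts to the following loop (a Python sketch with hypothetical names; the real service is not written in Python):

```python
import time

SAMPLE_RATE = 44100  # samples per second

def handle_audio_loop(send_audio, has_decoders, n):
    """Pace send_audio so the client keeps n to 2n samples buffered.

    send_audio() mixes and ships n samples; its runtime is subtracted
    from the n/44100-second budget before sleeping.
    """
    send_audio()  # prime the client's buffer before pacing begins
    while has_decoders():
        start = time.monotonic()
        send_audio()
        elapsed = time.monotonic() - start
        wait = n / SAMPLE_RATE - elapsed
        if wait > 0:
            time.sleep(wait)
        # A negative result means sending took too long: loop immediately.
```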
Once the decoder finishes playing, it is removed. After all of the decoders are removed, handleAudio stops looping and waits for more requests.
When the repl is stopped, the `soundManager` stops `handleAudio` and `sendAudio`, ignores new requests, resets the next ID for sources, and removes all remaining decoders.
Note: `handleAudio` also calls `sendAudio` before the loop starts and immediately calls it again after the loop starts, again to give the user those buffer samples.
chatter
This implements the multiplayer chat feature.
Sending it a message broadcasts that message to other sessions and saves it in a 100-message scrollback. Attaching to the channel sends you back the last 100 messages that were saved.
```
> chat { chatMessage: { username: "replbot", text: "hello from replbot!" } }
upserted channel named chat
> chat { chatMessage: { username: "replbot", text: "another message" } }
> .attach chat from listener
created new client named listener
(chat -> listener) { chatMessage: { username: "replbot", text: "hello from replbot!" } }
(chat -> listener) { chatMessage: { username: "replbot", text: "another message" } }
> chat from listener { chatMessage: { username: "listener", text: "i am listening" } }
(chat) { chatMessage: { username: "listener", text: "i am listening" } }
> chat { chatMessage: { username: "replbot", text: "thanks!" } }
(chat -> listener) { chatMessage: { username: "replbot", text: "thanks!" } }
> chat { chatTyping: { username: "replbot", typing: true } }
(chat -> listener) { chatTyping: { username: "replbot", typing: true } }
> chat { chatTyping: { username: "replbot", typing: false } }
(chat -> listener) { chatTyping: { username: "replbot" } }
> chat from evalbot { chatMessage: { username: "evalbot", text: "hello everyone!" } }
created new client named evalbot
upserted channel named chat
(chat -> evalbot) { chatMessage: { username: "replbot", text: "hello from replbot!" } }
(chat -> evalbot) { chatMessage: { username: "replbot", text: "another message" } }
(chat -> evalbot) { chatMessage: { username: "listener", text: "i am listening" } }
(chat -> evalbot) { chatMessage: { username: "replbot", text: "thanks!" } }
(chat -> listener) { chatMessage: { username: "evalbot", text: "hello everyone!" } }
(chat) { chatMessage: { username: "evalbot", text: "hello everyone!" } }
> chat { chatTyping: { username: "replbot", typing: false } }
(chat -> evalbot) { chatTyping: { username: "replbot" } }
(chat -> listener) { chatTyping: { username: "replbot" } }
```
evaler
This service is deprecated; we have migrated to `interp` instead. `eval` is basically an evaluation server: you send it code to evaluate, and it sends you the result and output and/or errors. This requires writing an evaluation server for each language. With `interp`, we just run the language's actual repl and pipe the pty directly to the frontend.
With `LANGUAGE=python3`:
```
> eval { eval: "2+2" }
upserted channel named eval
(eval) { result: "4" }
> eval { eval: 'print("Hello, world!")' }
(eval) { output: "Hello, world!\n" }
(eval) { result: "None" }
> eval { eval: '1 / 0' }
(eval) { error: "Traceback (most recent call last):\n File \"python\", line 1, in \u003cmodule\u003e\nZeroDivisionError: division by zero\n" }
> eval { eval: 'print("Hello, " + input("What is your name? ") + "!")' }
(eval) { output: "What is your name? " }
> eval { input: "Radon\n" }
(eval) { input: "Radon\n" }
(eval) { output: "Hello, Radon!\n" }
(eval) { result: "None" }
> eval { debuggerStart: "def factorial(n):\n if n == 0:\n return 1\n else:\n return n * factorial(n - 1)\nprint(factorial(5))" }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 2 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 2 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 5 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 5 }, { function: "factorial", line: 1 } ] } }
```
exec
`exec` is a channel for running shell commands. You give it the arguments and optionally some environment variables to override, and it gives you the output and tells you whether the command succeeded.
When you connect to `exec`, you will receive:

```
-> { state: Stopped | Running }
```
To run a command, send:

```
<- { exec: { ... } }
```
These are the supported keys for `exec` (all but `args` are optional):

- `args`: list of strings, e.g. `["git", "status"]`.
- `env`: map from environment variables to their values. These will be merged with any existing environment, and take precedence.
- `blocking`: boolean on whether the request is blocking or not. If blocking is set to true, you can send many `exec` requests without waiting for the previous one to exit; they will be queued up internally. If blocking is false, then sending an `exec` request while a command is running will yield an error that another program is running.

Once the command finishes running, you will get a response:
```
-> { ok: {} | { error: "..." } }
```
All clients will receive the following messages:

```
-> { state: Running } // command started running
-> { output: "..." }  // command produced output
-> { state: Stopped } // command stopped running
```
Only one command can run at a time when `blocking` is false or not specified. Pending requests must wait until the running command finishes.
```
> exec { exec: { args: ["git", "init"] } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
(exec) { output: "Initialized empty Git repository in /home/runner/.git/\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", "echo hi && false"] } }
(exec) { state: "Running" }
(exec) { output: "hi\n" }
(exec) { error: "exit status 1" }
(exec) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", 'echo "Hello, $NAME!" && echo "Goodbye."'], env: { "NAME": "Replbot" } } }
(exec) { state: "Running" }
(exec) { output: "Hello, Replbot!\nGoodbye.\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
> exec { exec: { args: ["true"], env: { "PATH": "" } } }
(exec) { state: "Running" }
(exec) { error: "exec: \"true\": executable file not found in $PATH" }
(exec) { state: "Stopped" }
> .attach exec from watcher
created new client named watcher
(exec -> watcher) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", "echo hi && false"] } }
(exec) { state: "Running" }
(exec -> watcher) { state: "Running" }
(exec) { output: "hi\n" }
(exec -> watcher) { output: "hi\n" }
(exec) { error: "exit status 1" }
(exec -> watcher) { state: "Stopped" }
(exec) { state: "Stopped" }
```
When `blocking` is false, sending multiple execs before waiting for the last one to finish will yield something like this:
```
> exec { exec: { args: ["sleep", "5"] } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
> exec { exec: { args: ["sh", "-c", "echo hi"] } }
(exec) { error: "Already running" }
...
```
If `blocking` was set to true, it would wait for the previous command to exit and then start running the next queued-up command.
```
> exec { exec: { args: ["sleep", "5"], blocking: true } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
> exec { exec: { args: ["sh", "-c", "echo hi"], blocking: true } }
(exec) { ok: {} }
(exec) { state: "Stopped" }
(exec) { state: "Running" }
(exec) { output: "hi\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
```
The file service provides an interface to the underlying container's filesystem. Modifications made through this service will trigger a corresponding message to anyone listening to the modified object on fsevents. Changes will not persist; if that's what you want, use gcsfiles.
Paths should ALWAYS be relative, without any leading `./` or `/`. Paths should NEVER have a trailing `/`, even when referring to a directory. To refer to a file or directory inside the project, use the path with no leading or trailing characters (e.g. `"dir/myfile.txt"` or `"mydir"`). To refer to the working directory (aka the project's root), use `""`.
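A tiny helper (purely illustrative, not part of the service) that normalizes arbitrary paths to this convention:

```python
def service_path(path):
    """Normalize a path for the files service: relative, with no
    leading './' or '/', and no trailing '/'."""
    while path.startswith("./"):
        path = path[2:]
    # Strip any leading slashes, then any trailing slashes.
    path = path.lstrip("/").rstrip("/")
    return path  # "" refers to the working directory (project root)
```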
There is also a mechanism for polling for filesystem changes (`CheckChanges`). This is deprecated; please use fsevents.
Each issued command will have a corresponding `ok` or `error` response message.
Writing a file will create a new file (or overwrite an existing file) with the provided content.

```
> files { write: { path: "with_that_says_hi", content: "hi" } }
(files) { ok: {} }
```

Remove does what you expect. If provided with a directory, it will recursively delete the tree. If the file doesn't exist, it will return an error.

```
> files { remove: { path: "a_file_or_dir" } }
(files) { ok: {} }
```

Move does what you expect.

```
> files { move: { oldPath: "move_me", newPath: "to_here" } }
(files) { ok: {} }
```

Mkdir creates a directory. All parent directories will also be created if they do not exist. If the path is already a directory, it does nothing and returns `{ ok: {} }`.

```
> files { mkdir: { path: "create/a/bunch/of/dirs" } }
(files) { ok: {} }
```

Read will read the entire file contents and respond with a `file` or an `error`.

```
> files { read: { path: "my_file" } }
(files) { file: { path: "my_file", content: "the content" } }
```

Readdir will return a directory listing or an error.

```
> files { readdir: { path: "my_folder" } }
(files) { files: [{ path: "the_only_file", type: "Regular" }] }
```
The snapshot service provides an interface allowing the client to choose when filesystem snapshots are taken.
The interface is:

```
> snapshot { fsSnapshot: {} }
(snapshot) { ok: {} } | { error: "oh no" }
```

When an `ok` response is received, the filesystem state is guaranteed to have been durably saved at some point between sending `fsSnapshot` and receiving `ok`.
On `error`, the filesystem has not been saved and is still using the previous snapshot (if one exists). Starting or restarting a container will always use the most recent successful snapshot.
Used internally to get signed URLs to access files. This probably isn't useful unless you're looking at gcsfiles.
deprecated, use LSP
This lets you hook into global filesystem events, both those performed by processes and by other users. Every object you want to watch must be explicitly subscribed to. Subscribing to a file will only provide modifications on that file (not deletes or creates). Subscribing to a directory will only give you deletes and creates inside that directory.
Paths should ALWAYS be relative, without any leading `./` or `/`. Paths should NEVER have a trailing `/`, even when referring to a directory. To refer to a file or directory inside the project, use the path with no leading or trailing characters (e.g. `"dir/myfile.txt"` or `"mydir"`). To refer to the working directory (aka the project's root), use `""`.
Example of subscribing to the current directory, after which a file is created:

```
> fsevents { subscribeFile: { files: [ "" ] } }
// ... you create a file in cwd
(fsevents) { fileEvent: { file: { path: "new_file" }, op: "Create" } }
```

Example of subscribing to `new_file`, after which an edit occurs:

```
> fsevents { subscribeFile: { files: [ "new_file" ] } }
// ... you edit new_file
(fsevents) { fileEvent: { file: { path: "new_file" }, op: "Modify" } }
```
The possible events when subscribing to a directory are:

```
{ op: Create, file: <newly created file> }
{ op: Move, file: <file being moved>, dest: <where it was moved to> }
{ op: Remove, file: <file being deleted> }
```

The possible events when subscribing to a file are:

```
{ op: Modify, file: <file being modified> }
```

Refer to files for a list of all file ops.
Paths should ALWAYS be relative, without any leading `./` or `/`. Paths should NEVER have a trailing `/`, even when referring to a directory. To refer to a file or directory inside the project, use the path with no leading or trailing characters (e.g. `"dir/myfile.txt"` or `"mydir"`). To refer to the working directory (aka the project's root), use `""`.
All operations will be persisted. There is an additional command (`{ persist: { path: "<file or dir>" } }`) which will ensure the given path in the container is persisted. Dotfiles at the path's root are not persisted.

```
> gcsfiles { persist: { path: "" } }
(gcsfiles) { ok: {} }
```
The command `{ persistMirror: { file: { path: "<file or dir>" } } }` behaves the same as Persist, except that it also deletes files from GCS if they are not found locally.
Persist and PersistMirror do not persist files that are in the language config's `ignoreDirs` field. PersistMirror will remove anything that matches from GCS.
TryRemove is similar to Remove, except that it will not error if the provided path does not exist. This is useful for removing objects from GCS after they have already been removed from the container. If provided with a directory, it will recursively delete the tree.

```
> gcsfiles { tryRemove: { path: "some-file-or-dir-that-doesnt-exist" } }
(gcsfiles) { ok: {} }
```
interper
deprecated, use interp2
Interp provides an interface to run prybar for the current language. Once an interp channel is opened, it will immediately launch the process. This means the prompt will be sent as `output` and you can begin sending input to prybar.
interp2 is responsible for its own presentation. All rendering (including clearing, the prompt, etc.) is handled internally, with the `state` message being the only hint to the client. This means it interacts closely with the attached pty and expects a fairly sophisticated terminal.
(In these examples `>` is the prompt from prybar.)
```
> .open interp2
(interp2) { state: Stopped }
(interp2) { output: ">" }
```
Sending `runMain` will cause prybar to execute the language's entrypoint and drop into a repl in the same context. When dropping into the repl, the state will change from `Running` to `Stopped`.
```
> interp2 { runMain: {} }
(interp2) { state: Running }
(interp2) { output: "user's output" }
(interp2) { state: Stopped }
(interp2) { output: ">" }
```
`clear` will kill the currently running prybar and start a fresh process.
```
> interp2 { runMain: {} }
(interp2) { state: Running }
> interp2 { clear: {} }
(interp2) { state: Stopped }
(interp2) { output: ">" }
```
deprecated, use LSP
The lsp service will start up a language server when created. It will stream all the language server's stdout as `output` messages, and `input` messages will be sent to stdin.
```
> .open lsp
(lsp) { output: "<lsp message>" }
> lsp { input: "<lsp message>" }
```
If the language server dies, a `{ state: stopped }` message will be sent.
ot:<filename>
The OT service provides operational transforms on files and scratch buffers. Each instance of ot is only responsible for one buffer. To allow clients to share ot channels, the convention of `ot:<filename>` is used as the name for the channel. This is only a convention, though; the name of the channel has no effect on its contents. To associate an ot instance with a file on disk, the `otLinkFile` operation should be used (see linking below).
Upon connecting to the service you will always receive an `otstatus` message. A fresh ot service which hasn't received any ops should send back:

```
> .open ot
(ot) { otstatus: { content: "", version: 0, linkedFile: null, cursors: [] } }
```
You can then begin sending OT ops:

```
> ot { ot: { version: 0, ops: [{ insert: 'hi' }] } }
(ot) { ot: { version: 1, ops: [{ insert: 'hi' }], crc: <checksum> } }
```
Each op will be acked with a matching message, and you will also get back an `ok` response for each `ot` message after the server has accepted the op.
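To make the op format concrete, here is a sketch of applying a list of ops to a local buffer. `insert` and `skip` appear in the examples in this section; the `delete` component is assumed here and may not match the service exactly:

```python
def apply_ops(buffer, ops):
    """Apply OT ops to a string, left to right from position 0."""
    out, pos = [], 0
    for op in ops:
        if "insert" in op:    # add new text at the cursor
            out.append(op["insert"])
        elif "skip" in op:    # keep the next N characters
            out.append(buffer[pos:pos + op["skip"]])
            pos += op["skip"]
        elif "delete" in op:  # drop the next N characters (assumed op name)
            pos += op["delete"]
    out.append(buffer[pos:])  # implicit skip to the end of the buffer
    return "".join(out)
```

For instance, applying `[{ "insert": "hi" }]` to an empty buffer yields `"hi"`, matching the acked op above.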
Once opened, the OT service can be linked to a file on disk. This allows operations on the OT buffer to be reflected on disk. There is also a built-in mechanism for operations on disk to be synchronized back to the buffer by generating OT ops.
Linking can only be performed on an ot service which hasn't yet received any ops. When linking to a file, the ot service's buffer is set to the provided file's content by performing an insert op. This insert is also sent to any clients.
```
> .open ot
(ot) { otstatus: { content: "", version: 0, linkedFile: null, cursors: [] } }
> ot { otLinkFile: { file: { path: "a_file" } } }
(ot) { ot: { version: 1, ops: [{ insert: 'the file content' }], crc: <checksum> } }
(ot) { ok: {} }
```
A linked file can have previous history, but you will only receive one insert op of the whole document, as you can see above. However, you can always request the granular ops for that document; see the reconnecting section below.
Once linked, the associated file will not be updated automatically. A `flush` command must be sent to persist the content to disk.

```
> ot { flush: {} }
(ot) { ok: {} }
```
Once you disconnect, other users or the filesystem may still write to the ot channel. One option is to block any writes while you're reconnecting, resetting your local document's state and then going through the whole ot motion again, basically as if the client connected for the first time.
That solution is not fun and it ruins the experience on your client. Instead, there is an API that allows the client to request any ops it has missed. This lets you keep accepting writes in your editor, queuing up any ops it creates. Once the client reconnects, it can fetch the ops that were missed since it disconnected (based on the last version known to it). When the client fetches the missed ops, it updates the local document with them and fast-forwards the queued-up ops. Once the queued-up ops are fast-forwarded, they are sent to the server as if they were new ops!
Fetching looks like this:
```
> .open ot
(ot) { otstatus: { content: "hello world", version: 12, linkedFile: "main", cursors: [] } }
> ot { otFetchRequest: { versionFrom: 11, versionTo: 12 } } // versions are inclusive
(ot) { otFetchResponse: { packets: [{ version: 11, ops: [{ insert: 'hel' }], crc: <checksum> }, { version: 12, ops: [{ skip: 3 }, { insert: 'lo ' }], crc: <checksum> }] } }
```
If the file is not linked, then you have to link it first, then fetch the ops you've missed. You get the latest version from the op you receive after linking.
Another thing to note: if the client didn't `flush`, then there is no guarantee that the acked ops were persisted, so you might want to keep those around until you know the flush went through!
deprecated, use package2
packager
You should probably always try to attach; use either ATTACH or ATTACH_OR_CREATE. Packager2 is meant to be shared, and multiple instances of packager2 might not play nicely together.
On connect you'll always receive:

```
-> { state: Stopped | Running }
```

To use the packager (either installing packages or updating state):

```
<- { ensurePackages: { ... } }
-> { ok: {} } | { error: "..." }
```

At any point you may receive:

```
-> { state: Running }
-> { output: "..." }
-> { state: Stopped }
```
Only the user who sent the `ensurePackages` will receive an `ok` or `error` response. Everyone (including the sender of `ensurePackages`) will receive state messages and output. Output messages will only be sent when we're in the running state.
packager
TODO(turbio): this is incomplete
packager3 provides an interface to upm.
```
> packager3 { packageSearch: { query: "search string" } }
(packager3) { packageSearchResp: { results: [...packages...] } }
> packager3 { packageInfo: { pkg: { name: "package name" } } }
(packager3) { packageInfoResp: { pkg: { ...package... } } }
> packager3 { packageAdd: { pkgs: [...packages...] } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageRemove: { pkgs: [...packages...] } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageInstall: { } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageListSpecfile: { } }
(packager3) { packageListSpecfileResp: { pkgs: [...packages...] } }
```
presencer
The presencer channel is autocreated by pid1 at startup so that the service can track the connection status as the first connection is made. Because of this you can always attach. Opening the channel will cause the client to receive a Roster message with a list of connected users, as well as a stream of Join and Part messages as users come and go.
Users are not de-duplicated, meaning if the same user joins twice, the client will get two Join messages. If the client cares about unique users then it will have to implement some reference counting.
Users have a session id attached to them, so you can use that if it is relevant for your implementation.
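A minimal reference-counting sketch for clients that want unique users (class and method names are hypothetical; the wire messages are the Roster/Join/Part described above):

```python
from collections import Counter

class UniqueUsers:
    """Count Join/Part messages per user to derive the unique set."""

    def __init__(self, roster=()):
        # Seed from the Roster received when the channel opens.
        self.counts = Counter(roster)

    def on_join(self, user):
        self.counts[user] += 1

    def on_part(self, user):
        self.counts[user] -= 1
        if self.counts[user] <= 0:
            del self.counts[user]

    def users(self):
        return set(self.counts)
```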
shellrunner
shellrun combines the `run2` service with bash. Initially you will be in the stopped state and all input will be sent to bash. Once you send `runMain` (entering the running state), input will instead go to the main run process and the shell will be closed.
output:
- `state`: indicates who's in control. If `stopped`, the shell is running and all interactions are sent to the shell. If `running`, the runner process has focus.
- `output`: output from either the runner or the shell.
input:
- `runMain`: should only be sent when the state is `stopped`. It will stop the running shell and start the runner.
- `clear`: should only be sent when the state is `running`. It will stop the running process and start up a shell.
- `input`: depending on state, this will go to either the shell's input or the runner's input.

shellrunner
shellrun2 holds a service which combines interp2, run2, and a shell into one service. This does not replace shellrun; instead it is a composition of additional components. This service is intended to alleviate the burden on the client of juggling the shell, run, and interp services. Without this service the client must switch between these services based on the existence of `.replit` and the run state.
This service connects to the shared well-known channels for interp2 and run2; the shell is launched directly from this service (meaning it will not be shared with shellrun like the other sub-services). Switching between interp2 and run2 is only checked when `runMain` is sent and is entirely dependent on the presence of a `.replit`.
output:
The output is basically the union of the interp2, run2, and shell services.
- `state`: indicates who's in control. If `stopped`, the shell is running and all interactions are sent to the shell. If `running`, the runner process has focus.
- `output`: output from either the runner or the shell.
input:
- `runMain`: should only be sent when the state is `stopped`. It will stop the running shell and start the runner.
- `clear`: should only be sent when the state is `running`. It will stop the running process and start up a shell.
- `input`: depending on state, this will go to either the shell's input or the runner's input.

runner
use run2 instead, this is deprecated.
runner
Running will execute the language's configured entrypoint. It will also run the compile command if available. If a `Makefile` is present, it will always just run `make` with no arguments.
Run's presentation logic should be implemented mostly by the consuming client. It has `error` and `execInfo` output which gives the client hints about how to display. Clearing, for example, must be handled by the client (although the service will empty the scrollback history on run).
```
> .open run2
(run2) { state: Stopped }
> run2 { runMain: {} }
(run2) { state: Running }
(run2) { output: "what is your name?" }
> run2 { input: "faris" }
(run2) { output: "hi faris" }
(run2) { state: Stopped }
```
sheller
This fires up bash under a pty. All `input` you send will go to bash and all output from bash will be sent as an `output` message. If bash ever exits, a `{ state: Stopped }` message will be sent and it will immediately restart.
```
> .open shell
(shell) { output: "...prompt..." }
> shell { input: "echo hi" }
(shell) { output: "...hi..." }
> shell { input: "exit" }
(shell) { state: "Stopped" }
(shell) { output: "...prompt..." }
```
Opens up a TCP socket in the container. Mostly unused.
this is deprecated, only used in the classroom
Clippy wraps another service and then injects additional messages whenever it sees certain regex matches in the wrapped service's outgoing messages. This is used to give hints about common error messages. We have configured `run`, `eval`, and `interp` to use clippy, for example.
With `LANGUAGE=nodejs`:
```
> file { write: { path: 'index.js', content: 'window.alert("Hello!")' } }
upserted channel named file
(file) { ok: {} }
> run2 { runMain: {} }
upserted channel named run2
(run2) { state: "Stopped" }
(run2) { state: "Running" }
(run2) { execInfo: { command: [ "node", "index.js" ], reason: "run" } }
(run2) { output: "/home/runner/index.js:1\ow.alert<...SNIP...>ootstrap/node.js:622:3)\n" }
(run2) { hint: { text: "Looks like you're trying to use the window browser API in Nodejs. Try using an HTML repl instead." } }
(run2) { error: "exit status 1" }
(run2) { state: "Stopped" }
```
This gives a service a pty. Sending the messages `resizeTerm` or `saneTerm` will poke the pty.
RateLimit is a middleware to reduce the maximum number of messages sent by a service per second. This is typically placed in front of a process sending output from a user process, to avoid overwhelming clients with too many messages.
It's rate limited on a per-message basis, causing back pressure which ideally slows down the sending program and causes output to be coalesced into fewer messages.
To improve interactivity, when a user sends a message (like input) the service receives a burst of extra allowed outgoing messages. This way interactive terminal games stay fast.
There's also an initial burst allowing many messages to be sent instantly until the burst is used up. The burst recovers when messages are sent slower than the max output rate.
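This behaviour can be modelled as a token bucket: an initial burst, a steady refill rate, and extra tokens granted on user input (a sketch with illustrative names and numbers, not the middleware's actual code):

```python
import time

class TokenBucket:
    """Allow up to `burst` messages at once, refilling at `rate` per second."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.tokens = float(burst)  # the initial burst
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Tokens recover while messages are sent slower than the max rate.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should block, creating back pressure

    def grant(self, extra):
        """Extra allowance when the user sends input, to stay interactive."""
        self.tokens = min(self.burst, self.tokens + extra)
```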
Scrollback will store a service's output and send it to any new connections.
The output is handled on a line basis (denoted by a raw `\n`). This is an attempt to prevent issues where output is split unexpectedly on escape sequences.
A maximum of 1000 lines are stored, with each line having an unlimited length. New lines will push out previous lines when the limit is reached.
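A sketch of that behaviour (the 1000-line limit comes from the text above; class and method names are made up):

```python
from collections import deque

class Scrollback:
    """Buffer a service's output by line and replay it to new connections."""

    MAX_LINES = 1000

    def __init__(self):
        self.lines = deque(maxlen=self.MAX_LINES)  # old lines fall off
        self.partial = ""  # output seen since the last raw "\n"

    def write(self, output):
        self.partial += output
        # Everything before the final "\n" is complete lines; the rest waits.
        *complete, self.partial = self.partial.split("\n")
        for line in complete:
            self.lines.append(line + "\n")

    def replay(self):
        """What a newly attached client receives."""
        return "".join(self.lines) + self.partial
```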