pid1 services

A well-known name has been chosen for some services to make attaching a bit easier. This is typically just the service name with an "er" suffix.

Many services also have a pty built in. This means at any point you can send saneTerm or resizeTerm to poke the pty.

The following sections contain whatever information we could scrounge together for you about our services (and middleware). If you're writing a new service, you should really document it thoroughly here. The examples use the conman repl, so you can try them out immediately (see the main README for documentation on the conman repl, or just make conman-repl-run LANGUAGE=python3).

services

chat

This implements the Multiplayer chat feature.

Sending it a message broadcasts that message to all other sessions and saves it in a 100-message scrollback. Attaching to the channel sends you back the last 100 messages that were saved.

> chat { chatMessage: { username: "replbot", text: "hello from replbot!" } }
upserted channel named chat
> chat { chatMessage: { username: "replbot", text: "another message" } }
> .attach chat from listener
created new client named listener
(chat -> listener) { chatMessage: { username: "replbot", text: "hello from replbot!" } }
(chat -> listener) { chatMessage: { username: "replbot", text: "another message" } }
> chat from listener { chatMessage: { username: "listener", text: "i am listening" } }
(chat) { chatMessage: { username: "listener", text: "i am listening" } }
> chat { chatMessage: { username: "replbot", text: "thanks!" } }
(chat -> listener) { chatMessage: { username: "replbot", text: "thanks!" } }
> chat { chatTyping: { username: "replbot", typing: true } }
(chat -> listener) { chatTyping: { username: "replbot", typing: true } }
> chat { chatTyping: { username: "replbot", typing: false } }
(chat -> listener) { chatTyping: { username: "replbot" } }
> chat from evalbot { chatMessage: { username: "evalbot", text: "hello everyone!" } }
created new client named evalbot
upserted channel named chat
(chat -> evalbot) { chatMessage: { username: "replbot", text: "hello from replbot!" } }
(chat -> evalbot) { chatMessage: { username: "replbot", text: "another message" } }
(chat -> evalbot) { chatMessage: { username: "listener", text: "i am listening" } }
(chat -> evalbot) { chatMessage: { username: "replbot", text: "thanks!" } }
(chat -> listener) { chatMessage: { username: "evalbot", text: "hello everyone!" } }
(chat) { chatMessage: { username: "evalbot", text: "hello everyone!" } }
> chat { chatTyping: { username: "replbot", typing: false } }
(chat -> evalbot) { chatTyping: { username: "replbot" } }
(chat -> listener) { chatTyping: { username: "replbot" } }
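
The scrollback itself is just a bounded, oldest-out buffer. A minimal Python sketch of that behavior (illustrative only, not the actual pid1 implementation):

# Minimal sketch of chat's 100-message scrollback (illustrative only).
from collections import deque

class ChatScrollback:
    def __init__(self, limit=100):
        self.messages = deque(maxlen=limit)  # oldest messages fall off automatically

    def add(self, message):
        self.messages.append(message)        # called for every broadcast chatMessage

    def replay(self):
        return list(self.messages)           # what a newly attached client receives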

eval

This service is deprecated; we have migrated to interp instead. eval is basically an evaluation server: you send it code to evaluate, and it sends you the result and output and/or errors. This requires writing an evaluation server for each language. With interp, we just run the language's actual repl and pipe the pty directly to the frontend.

With LANGUAGE=python3:

> eval { eval: "2+2" }
upserted channel named eval
(eval) { result: "4" }
> eval { eval: 'print("Hello, world!")' }
(eval) { output: "Hello, world!\n" }
(eval) { result: "None" }
> eval { eval: '1 / 0' }
(eval) { error: "Traceback (most recent call last):\n  File \"python\", line 1, in \u003cmodule\u003e\nZeroDivisionError: division by zero\n" }
> eval { eval: 'print("Hello, " + input("What is your name? ") + "!")' }
(eval) { output: "What is your name? " }
> eval { input: "Radon\n" }
(eval) { input: "Radon\n" }
(eval) { output: "Hello, Radon!\n" }
(eval) { result: "None" }
> eval { debuggerStart: "def factorial(n):\n  if n == 0:\n    return 1\n  else:\n    return n * factorial(n - 1)\nprint(factorial(5))" }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 2 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 1 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 2 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 5 } ] } }
> eval { debuggerStep: {} }
(eval) { debuggerStatus: { stack: [ { function: "\u003cmodule\u003e", line: 6 }, { function: "factorial", line: 5 }, { function: "factorial", line: 5 }, { function: "factorial", line: 1 } ] } }

exec

exec is a channel for running shell commands. You give it the arguments and optionally some environment variables to override, and it gives you the output and tells you whether the command succeeded.

When you connect to exec, you will receive:

-> { state: Stopped | Running }

To run a command, send:

<- { exec: { ... } }

The supported keys for exec include args (required; the command and its arguments), env (environment variables to override), and blocking (whether to queue behind a currently running command; see below).

Once the command finishes running, you will get a response:

-> { ok: {} | { error: "..." } }

All clients will receive the following messages:

-> { state: Running }  // command started running
-> { output: "..." }   // command produced output
-> { state: Stopped }  // command stopped running

Only one command can run at a time. When blocking is false or not specified, an exec sent while another command is running fails with an error; when blocking is true, the request waits until the running command finishes (see the examples below).

> exec { exec: { args: ["git", "init"] } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
(exec) { output: "Initialized empty Git repository in /home/runner/.git/\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", "echo hi && false"] } }
(exec) { state: "Running" }
(exec) { output: "hi\n" }
(exec) { error: "exit status 1" }
(exec) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", 'echo "Hello, $NAME!" && echo "Goodbye."'], env: { "NAME": "Replbot" } } }
(exec) { state: "Running" }
(exec) { output: "Hello, Replbot!\nGoodbye.\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
> exec { exec: { args: ["true"], env: { "PATH": "" } } }
(exec) { state: "Running" }
(exec) { error: "exec: \"true\": executable file not found in $PATH" }
(exec) { state: "Stopped" }
> .attach exec from watcher
created new client named watcher
(exec -> watcher) { state: "Stopped" }
> exec { exec: { args: ["sh", "-c", "echo hi && false"] } }
(exec) { state: "Running" }
(exec -> watcher) { state: "Running" }
(exec) { output: "hi\n" }
(exec -> watcher) { output: "hi\n" }
(exec) { error: "exit status 1" }
(exec -> watcher) { state: "Stopped" }
(exec) { state: "Stopped" }

When blocking is false, sending multiple execs before waiting for the last one to finish will yield something like this:

> exec { exec: { args: ["sleep", "5"] } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
> exec { exec: { args: ["sh", "-c", "echo hi"] } }
(exec) { error: "Already running" }
...

If blocking is set to true, the request will instead wait for the previous command to exit and then start running the next queued-up command.

> exec { exec: { args: ["sleep", "5"], blocking: true } }
upserted channel named exec
(exec) { state: "Stopped" }
(exec) { state: "Running" }
> exec { exec: { args: ["sh", "-c", "echo hi"], blocking: true } }
(exec) { ok: {} }
(exec) { state: "Stopped" }
(exec) { state: "Running" }
(exec) { output: "hi\n" }
(exec) { ok: {} }
(exec) { state: "Stopped" }
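
Conceptually the channel is a single worker that either rejects or queues a second request depending on the blocking flag. A rough Python sketch of that decision (illustrative only; the real logic lives in pid1):

# Rough sketch of exec's one-command-at-a-time behavior (illustrative only;
# the real logic lives in pid1).
import queue
import threading

class ExecChannel:
    def __init__(self):
        self.running = False
        self.pending = queue.Queue()        # only blocking requests queue up
        self.lock = threading.Lock()

    def handle_exec(self, request):
        with self.lock:
            if self.running:
                if not request.get("blocking"):
                    return {"error": "Already running"}
                self.pending.put(request)   # runs once the current command exits
                return None
            self.running = True
        return self.run_command(request)

    def run_command(self, request):
        # Spawn the process, stream { output: ... }, send ok/error, then mark
        # running = False and pop the next pending request (if any).
        raise NotImplementedError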

file (or files)

The file service provides an interface to the underlying container's filesystem. Modifications made through this service will trigger a corresponding message to anyone listening to the modified object on fsevents. Changes will not persist; if that's what you want, use gcsfiles.

There is also a mechanism for polling for filesystem changes (CheckChanges). This is deprecated; please use fsevents.

Each issued command will have a corresponding ok or error response message. When providing a path it's best to use a relative path without any leading / or ./. Pid1 will resolve this to the correct file for you. If you want to refer to the project root dir, just use ".".
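
If your client builds paths with leading slashes or ./ prefixes, stripping them before sending is straightforward; a tiny sketch of that (a hypothetical client-side helper, pid1 does the real resolution):

# Sketch of stripping the prefixes mentioned above before sending a path
# (illustrative only; pid1 does the real resolution server-side).
def normalize_path(path: str) -> str:
    while path.startswith("./") or path.startswith("/"):
        path = path[2:] if path.startswith("./") else path[1:]
    return path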

Writing a file will create a new file (or overwrite an existing file) with the provided content.

> files { write: { path: "with_that_says_hi", content: "hi" } }
(files) { ok: {} }

Remove does what you expect. If provided with a directory, it will recursively delete the tree.

> files { remove: { path: "with_that_says_hi" } }
(files) { ok: {} }

Move does what you expect.

> files { move: { oldPath: "move_me", newPath: "to_here" } }
(files) { ok: {} }

Mkdir creates a directory. All parent directories will also be created if they do not exist.

> files { mkdir: { path: "create/a/bunch/of/dirs" } }
(files) { ok: {} }

Read will read the entire file contents and respond with a file or error.

> files { read: { path: "my_file" } }
(files) { file: { path: "my_file", content: "the content" } }

Readdir will return a directory listing or an error.

> files { readdir: { path: "my_folder" } }
(files) { files: [{ path: "the_only_file", type: "Regular" }] }

snapshot

The snapshot service provides an interface allowing the client to choose when filesystem snapshots are taken.

The interface is

> snapshot { fsSnapshot: {} }
(snapshot) { ok: {} } | { error: "oh no" }

When an ok response is received, the file system state is guaranteed to have been durably saved at some point between sending fsSnapshot and receiving ok. On error the file system has not been saved and is still using the previous snapshot (if one exists). Starting or restarting a container will always use the most recent successful snapshot.
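
A client that needs durability can simply block until the response arrives. A sketch using a hypothetical send_and_wait helper (not a real client API):

# Sketch of a client waiting for a durable snapshot. send_and_wait is a
# hypothetical helper that sends a command on the snapshot channel and
# returns the matching ok/error response.
def ensure_durable(send_and_wait):
    resp = send_and_wait({"fsSnapshot": {}})
    if "ok" in resp:
        # Everything written before fsSnapshot was sent is now durably saved.
        return True
    # On error the container keeps running on the previous snapshot.
    return False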

fileauth

Used internally to get signed URLs to access files. This probably isn't useful unless you're looking at gcsfiles.

format

deprecated, use LSP

fsevents

This lets you hook into global file system events performed both by processes and by other users. Every object you want to watch must be explicitly subscribed to. Subscribing to a file will only provide modifications on that file (not deletes or creates). Subscribing to a directory will only give you deletes and creates inside that directory.

Example of subscribing to the current directory and then a file being created:

> fsevents { subscribeFile: { files: [ "." ] } }
// ... you create a file in cwd
(fsevents) { fileEvent: { file: { path: "new_file" }, op: "Create" } }

Example of subscribing to new_file and then an edit occurring:

> fsevents { subscribeFile: { files: [ "new_file" ] } }
// ... you edit new_file
(fsevents) { fileEvent: { file: { path: "new_file" }, op: "Modify" } }

The possible events when subscribing to a directory are Create and Delete.

The possible event when subscribing to a file is Modify.

gcsfiles

Refer to files for a list of all file ops.

All operations will be persisted. There is an additional command ({ persist: { path: "<file or dir>" } }) which will ensure the given path in the container is persisted. Dotfiles at the path's root are not persisted.

> gcsfiles { persist: { path: "" }}
(gcsfiles) { ok: {} }

The command { persistMirror: { file: { path: "<file or dir>" } } } behaves the same as Persist except that it also deletes files from GCS if they are not found locally.

Persist and PersistMirror do not persist files that are in the language config's ignoreDirs field. PersistMirror will remove anything that matches from GCS.
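
In other words, Persist uploads whatever exists locally (minus ignoreDirs), while PersistMirror additionally deletes remote objects that no longer match. A rough sketch of that set logic (illustrative only, not the actual implementation):

# Rough sketch of Persist vs PersistMirror semantics (illustrative only).
def plan_persist(local_paths, remote_paths, ignore_dirs, mirror=False):
    def ignored(p):
        return any(p == d or p.startswith(d.rstrip("/") + "/") for d in ignore_dirs)

    uploads = {p for p in local_paths if not ignored(p)}   # both commands upload these
    deletes = set()
    if mirror:
        # PersistMirror also removes anything in GCS that is no longer present
        # locally (or that matches ignoreDirs).
        deletes = set(remote_paths) - uploads
    return uploads, deletes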

TryRemove is similar to Remove except that it will not error if the provided path does not exist. This is useful for removing objects from GCS after they have already been removed from the container. If provided with a directory, it will recursively delete the tree.

> gcsfiles { tryRemove: { path: "some-file-or-dir-that-doesnt-exist" } }
(gcsfiles) { ok: {} }

interp

deprecated, use interp2

interp2

interp2 provides an interface to run prybar for the current language. Once an interp2 channel is opened it will immediately launch the process. This means the prompt will be sent as output and you can begin sending input to prybar.

interp2 is responsible for its own presentation. All rendering (including clearing, prompt, etc.) is handled internally with the state message being the only hint to the client. This means it interacts closely with the attached pty and expects a fairly sophisticated terminal.

(in these examples > is the prompt from prybar)

> .open interp2
(interp2) { state: Stopped }
(interp2) { output: ">" }

Sending runMain will cause prybar to execute the language's entrypoint and drop into a repl in the same context. When dropping into the repl, the state will change from Running to Stopped.

> interp2 { runMain: {} }
(interp2) { state: Running }
(interp2) { output: "user's output" }
(interp2) { state: Stopped }
(interp2) { output: ">" }

clear will kill the currently running prybar and start a fresh process.

> interp2 { runMain: {} }
(interp2) { state: Running }
> interp2 { clear: {} }
(interp2) { state: Stopped }
(interp2) { output: ">" }

lint

deprecated, use LSP

lsp

The lsp service will start up a language server when created. It will stream all of the language server's stdout as output messages, and input messages will be written to its stdin.

> .open lsp
(lsp) { output: "<lsp message>" }
> lsp { input: "<lsp message>" }

If the language server dies, a { state: Stopped } message will be sent.
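
Since the channel just pipes the server's stdio through, the client is responsible for the LSP base-protocol framing (Content-Length headers) in its input messages, assuming nothing strips the headers along the way. A minimal sketch of building one framed message:

# Sketch of LSP base-protocol framing, assuming the lsp channel passes the
# language server's stdio through verbatim (headers included).
import json

def frame_lsp_message(payload: dict) -> str:
    body = json.dumps(payload)
    return f"Content-Length: {len(body.encode('utf-8'))}\r\n\r\n{body}"

# Example: a framed initialize request you could place in { input: "..." }.
initialize = frame_lsp_message({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
})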

ot

The OT service provides operational transforms on files and scratch buffers. Each instance of ot is only responsible for one buffer. To allow clients to share ot channels, the convention of ot:<filename> is used as the name for the channel. This is only a convention, though; the name of the channel has no effect on its contents. To associate an ot instance with a file on disk, the otLinkFile operation should be used (see linking below).

Upon connecting to the service you will always receive an otstatus message. A fresh ot service which hasn't received any ops should send back:

> .open ot
(ot) { otstatus: { content: "", version: 0, linkedFile: null, cursors: [] } }

You can then begin sending OT ops:

> ot { ot: { version: 0, ops: [{ insert: 'hi' }] } }
(ot) { ot: { version: 1, ops: [{ insert: 'hi' }], crc: <checksum> } }

Each op will be acked with a matching message, and you will also get back an ok response for each ot message after the server has accepted the op.
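
The examples here only show insert ops. Assuming the usual OT op shapes (skip and delete alongside insert; those two are assumptions, only insert appears above), applying a list of ops to a buffer looks roughly like this sketch:

# Sketch of applying OT ops to a text buffer. Only insert appears in the
# examples above; the skip and delete op shapes here are assumptions.
def apply_ops(text: str, ops: list) -> str:
    out, cursor = [], 0
    for op in ops:
        if "skip" in op:                    # keep the next N characters
            out.append(text[cursor:cursor + op["skip"]])
            cursor += op["skip"]
        elif "delete" in op:                # drop the next N characters
            cursor += op["delete"]
        elif "insert" in op:                # splice in new text
            out.append(op["insert"])
    out.append(text[cursor:])               # keep whatever is left
    return "".join(out)

# apply_ops("", [{"insert": "hi"}]) == "hi"  (matches the version 0 -> 1 example above)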

linking

Once opened, the OT service can be linked to a file on disk. This allows operations on the OT buffer to be reflected on disk. There is also a built-in mechanism for operations on disk to be synchronized back to the buffer by generating OT ops.

Linking can only be performed on an ot service which hasn't yet received any ops. When linking to a file, the ot service's buffer is set to the provided file by performing an insert op. This insert is also sent to any clients.

> .open ot
(ot) { otstatus: { content: "", version: 0, linkedFile: null, cursors: [] } }
> ot { otLinkFile: { file: { path: "a_file" }  } }
(ot) { ot: { version: 1, ops: [{ insert: 'the file content' }], crc: <checksum> } }
(ot) { ok: {} }

Once linked, the associated file will not be updated automatically. A flush command must be sent to persist the content to disk.

> ot { flush: {} }
(ot) { ok: {} }

packager

deprecated, use packager2

packager2

You should probably always try to attach; either use ATTACH or ATTACH_OR_CREATE. Packager2 is meant to be shared, and multiple instances of packager2 might not play nice together.

On connect you'll always receive:

-> { state: Stopped | Running }

To use the packager (either installing packages or updating state):

<- { ensurePackages: { ... } }
-> { ok: {} } | { error: "..." }

At any point you may receive:

-> { state: Running }
-> { output: "..." }
-> { state: Stopped }

Only the user who sent the ensurePackages will receive an ok or error response. Everyone (including the sender of ensurePackages) will receive state messages and output. Output messages will only be sent when we're in the Running state.

packager3

TODO(turbio): this is incomplete

packager3 provides an interface to upm.

> packager3 { packageSearch: { query: "search string" } }
(packager3) { packageSearchResp: { results: [...packages...] } }
> packager3 { packageInfo: { pkg: { name: "package name"  } } }
(packager3) { packageInfoResp: { pkg: { ...package... } } }
> packager3 { packageAdd: { pkgs: [...packages...] } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageRemove: { pkgs: [...packages...] } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageInstall: { } }
(packager3) { output: "possible output from package" }
(packager3) { ok: { } }
> packager3 { packageListSpecfile: { } }
(packager3) { packageListSpecfileResp: { pkgs: [...packages...] } }

presence

The presencer channel is autocreated by pid1 at startup so that the service can track the connection status as the first connection is made. Because of this you can always attach. Opening the channel will cause the client to receive a Roster message with a list of connected users, as well as a stream of Join and Part messages as users come and go.

Users are not de-duplicated, meaning if the same user joins twice, the client will get two Join messages. If the client cares about unique users then it will have to implement some reference counting.

Users have a session id attached to them, so you can use that if it is relevant for your implementation.
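
A client that cares about unique users can keep a simple reference count keyed on the user (or session id); a minimal sketch of that bookkeeping:

# Minimal sketch of client-side reference counting for presence.
from collections import Counter

class UniqueUsers:
    def __init__(self):
        self.counts = Counter()

    def on_join(self, user_key):
        self.counts[user_key] += 1
        return self.counts[user_key] == 1    # True -> user is newly present

    def on_part(self, user_key):
        self.counts[user_key] -= 1
        if self.counts[user_key] <= 0:
            del self.counts[user_key]
            return True                      # True -> user has fully left
        return False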

shellrun

shellrun combines the run2 service with bash. Initially you will be in the stopped state and all input will be sent to bash. Once you send runMain (entering the running state) input will instead go to the main run process and the shell will be closed.

output:

run

use run2 instead, this is deprecated.

run2

Running will execute the language's configured entrypoint. It will also run the compile command if one is available. If a Makefile is present, it will always just run make with no arguments.

Run's presentation logic should be implemented mostly by the consuming client. It has error and execInfo output which give the client hints about how to display things. Clearing, for example, must be handled by the client (although it will empty the scrollback history on run).

> .open run2
(run2) { state: Stopped }
> run2 { runMain: {} }
(run2) { state: Running }
(run2) { output: "what is your name?" }
> run2 { input: "faris" }
(run2) { output: "hi faris" }
(run2) { state: Stopped }

shell

This fires up bash under a pty. All input you send will go to bash and all output from bash will be sent as an output message. If bash ever exits, a { state: Stopped } message will be sent and bash will immediately restart.

> .open shell
(shell) { output: "...prompt..." }
> shell { input: "echo hi" }
(shell) { output: "...hi..." }
> shell { input: "exit" }
(shell) { state: "Stopped" }
(shell) { output: "...prompt..." }

socket

Opens up a TCP socket in the container. Mostly unused.

unittest

this is deprecated, only used in the classroom

Middleware

clippy

Clippy wraps another service and then injects additional messages whenever it sees certain regex matches in the wrapped service's outgoing messages. This is used to give hints about common error messages. We have configured run, eval, and interp to use clippy, for example.

With LANGUAGE=nodejs:

> file { write: { path: 'index.js', content: 'window.alert("Hello!")' } }
upserted channel named file
(file) { ok: {} }
> run2 { runMain: {} }
upserted channel named run2
(run2) { state: "Stopped" }
(run2) { state: "Running" }
(run2) { execInfo: { command: [ "node", "index.js" ], reason: "run" } }
(run2) { output: "/home/runner/index.js:1\ow.alert<...SNIP...>ootstrap/node.js:622:3)\n" }
(run2) { hint: { text: "Looks like you're trying to use the window browser API in Nodejs. Try using an HTML repl instead." } }
(run2) { error: "exit status 1" }
(run2) { state: "Stopped" }
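
Conceptually the middleware scans each outgoing output message against a table of regexes and injects a hint message after any match. A rough sketch of that idea (the pattern and hint text below are made up, not the real configuration):

# Rough sketch of clippy-style hint injection (pattern and hint are made up).
import re

HINTS = [
    (re.compile(r"window is not defined"),
     "Looks like you're trying to use a browser API in Node.js."),
]

def wrap_output(messages):
    """Yield the wrapped service's messages, injecting hints after any match."""
    for msg in messages:
        yield msg
        text = msg.get("output", "")
        for pattern, hint in HINTS:
            if pattern.search(text):
                yield {"hint": {"text": hint}}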

termed

This gives a service a pty. Sending the messages resizeTerm or saneTerm will poke the pty.

rateLimit

RateLimit is a middleware to reduce the maximum number of messages sent by a service per second. It is typically placed in front of a service sending output from a user process to avoid overwhelming clients with too many messages.

It rate limits on a per-message basis, causing back pressure which ideally slows down the sending program and causes output to be coalesced into fewer messages.

To improve interactivity, the service receives a burst of extra allowed outgoing messages whenever a user sends a message (like input). This way interactive terminal games stay fast.

There's also an initial burst allowing many messages to be sent instantly until the burst is used up. The burst recovers when messages are sent slower than the max output rate.
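
The behavior described above is essentially a token bucket with a burst allowance and an extra top-up on user input. A rough sketch (the rate, burst, and bonus numbers are made up, not the real configuration):

# Rough sketch of a token-bucket rate limit with bursts (all numbers made up).
import time

class RateLimit:
    def __init__(self, per_second=10.0, burst=64, input_bonus=16):
        self.rate = per_second             # steady-state messages per second
        self.burst = burst                 # maximum stored tokens
        self.input_bonus = input_bonus     # extra allowance after user input
        self.tokens = float(burst)         # start with a full initial burst
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def on_user_input(self):
        # Keep interactive programs responsive right after the user types.
        self._refill()
        self.tokens = min(self.burst, self.tokens + self.input_bonus)

    def allow_send(self):
        # Blocks (back pressure) until a token is available, then spends it.
        self._refill()
        while self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self._refill()
        self.tokens -= 1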

scrollback

scrollback will store a service's output and send it to any new connections.

The output is handled on a per-line basis (denoted by a raw \n). This is an attempt to prevent issues where output is split unexpectedly on escape sequences.

A maximum of 1000 lines is stored, with each line having unlimited length. New lines will push out the oldest lines when the limit is reached.
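
A minimal sketch of that line-based buffer (illustrative only; the real implementation lives in pid1):

# Minimal sketch of a 1000-line scrollback buffer (illustrative only).
from collections import deque

class Scrollback:
    def __init__(self, max_lines=1000):
        self.lines = deque(maxlen=max_lines)  # oldest lines fall off the front
        self.partial = ""                     # output since the last raw \n

    def on_output(self, chunk: str):
        self.partial += chunk
        *complete, self.partial = self.partial.split("\n")
        for line in complete:
            self.lines.append(line + "\n")

    def replay(self) -> str:
        # What a newly attached client is sent.
        return "".join(self.lines) + self.partial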