This documentation is deprecated
Please see the new LoopBack documentation site.

Control StrongLoop Process Manager.  This command enables you to:

  • Start, stop, and restart applications that are under management of StrongLoop PM.
  • Start and stop application clusters; change number of workers in a cluster.
  • Start and stop CPU profiling, take heap snapshots, and perform object tracking.
  • Modify environment settings while an application is running.


This command works only with applications running under control of StrongLoop Process Manager. For local use, that means you must run the application with slc start, not node . or slc run (which do not run applications under Process Manager).

SYNOPSIS

slc ctl [options] [sub-command]

OPTIONS

 -C, --control <ctl>      
Control endpoint for Process Manager.  For a remote Process Manager, this must specify the URL on which the Process Manager is listening.

If Process Manager is using HTTP authentication, then you must include valid credentials in the URL, in the form http://username:password@example.com:7654.

To tunnel over SSH using an HTTP URL, use the protocol http+ssh, for example http+ssh://example.com:7654.

    • The SSH username defaults to your current user.  Override the default with the SSH_USER environment variable. 
    • Authentication defaults to your current ssh-agent.  Override the default with the SSH_KEY environment variable specifying the path of an existing private key to use. 
    • SSH port defaults to 22.  Override the default by setting the SSH_PORT environment variable.
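
For example, to check status through an SSH tunnel as a specific user (the host and user names here are placeholders):

SSH_USER=deploy slc ctl -C http+ssh://example.com:7654 status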

Use the STRONGLOOP_PM environment variable to set a default value for the --control option; this eliminates the need to supply the option every time.

If you don't specify a channel with this option, the tool uses the following in this order of precedence:

  1. STRONGLOOP_PM environment variable, which can specify a local domain socket path or an HTTP URL.  Use an HTTP URL to specify a remote Process Manager; use localhost for a local Process Manager.  The URL must specify at least the Process Manager's listen port, such as http://example.com:7654 (the default port is 8701).
  2. ./pmctl: Process Manager running in the current working directory, if any.
  3. ~/.strong-pm/pmctl: Process Manager running in the user's home directory.
  4. /var/lib/strong-pm/pmctl: Process Manager installed by slc pm-install.
  5. http://localhost:8701: Process Manager running on localhost.
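
For example, to point all subsequent slc ctl commands at a remote Process Manager (the host name is a placeholder):

export STRONGLOOP_PM=http://example.com:7654
slc ctl status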

STANDARD OPTIONS

-h, --help
Display help information.

-v, --version
Display version number.

SUB-COMMANDS

The default sub-command is status.

This command has three types of sub-commands:

  • Global commands that apply to Process Manager itself.
  • Commands that apply to a specific service.
  • Commands that apply to a specific worker process.

When you deploy an application to Process Manager, you give the deployed application instance a name, referred to as the service name and indicated in command arguments as <service>. By default, it is the name property from the application's package.json.

Process Manager also automatically generates an integer ID for each application it's managing. Typically, the IDs start with one (1) and are incremented with each application deployed; however, the ID value is not guaranteed. Always determine it with slc ctl status once you've deployed an app.

A service becomes available over the network at http://hostname:port where:

  • hostname is the name of the host running Process Manager
  • port is 3000 + service ID.

For example, if Process Manager is running on my.host.com, then service ID 1 is available at http://my.host.com:3001, service ID 2 at http://my.host.com:3002, and so on.
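
To confirm each service's ID and port after deploying, run the default status sub-command (my.host.com is the placeholder host from the example above):

slc ctl -C http://my.host.com:8701 status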

Each entry below gives the command syntax, its description, and its arguments.
Global sub-commands

info

Display information about Process Manager. 

ls

List services under management. 
shutdown

Stop the Process Manager and all applications under management.

Service sub-commands (apply to a specific service)

The argument <service> is the name or ID of a service.

create <service>

Create application instance <service>.

<service>, name or ID of the service to create.

cluster-restart <service>

Restart the current application's cluster workers.

<service>, name of target service.

env[-get] <service> [env...]

List specified environment variables for <service>. If none are given, list all variables.

<service>, name or ID of target service.

<env>, one or more environment variables.
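
For example, to list the NODE_ENV and PORT variables of a service named myapp (a hypothetical service name used in these examples):

slc ctl env-get myapp NODE_ENV PORT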

env-set <service> <env>=<val>...

Set one or more environment variables for <service> and hard restart it with new environment.

<service>, name or ID of target service.

One or more environment variables, <env>, and corresponding value <val>.
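
For example, to put the hypothetical myapp service into production mode and hard restart it:

slc ctl env-set myapp NODE_ENV=production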

env-set <service> PORT=<n>

Run the service on the specified port instead of the automatically-generated port.

Normally, Process Manager sets the port to a value guaranteed to be different for each app; use this sub-command to override that behavior.

Do not specify a port already in use. Doing so will cause the app to crash.

<service>, name or ID of target service.

<n>, integer port number to use.
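
For example, to run the hypothetical myapp service on port 8080 instead of its assigned port:

slc ctl env-set myapp PORT=8080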

env-unset <service> <env>...

Unset one or more environment variables for <service> and hard restart it with the new environment.

<service>, name or ID of target service.

<env>, one or more environment variables.

log-dump <service> [--follow] 

Empty the log buffer, dumping the contents to stdout.

Use --follow to continuously dump the log buffer to stdout.

<service>, name or ID of target service.
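
For example, to continuously stream the hypothetical myapp service's log to stdout:

slc ctl log-dump myapp --follow
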
npmls <service> [depth]

List dependencies of <service>.

<service>, name or ID of target service.

depth, an integer limit of levels for which to list dependencies; default is no limit.
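
For example, to list only the first level of the hypothetical myapp service's dependency tree:

slc ctl npmls myapp 1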

remove <service>

Remove <service>.

<service>, name or ID of target service.
restart <service>

Hard stop the current application: kill the supervisor and its workers with SIGTERM; then restart the current application with new configuration.

<service>, name or ID of target service.
set-size <service> <n>

Set cluster size for <service> to <n> workers.

<service>, name or ID of target service.

<n>, positive integer.
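
For example, to scale the hypothetical myapp service to four workers:

slc ctl set-size myapp 4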

start <service>

Start <service>.

<service>, name or ID of target service.

status [service]

Report status. This is the default sub-command.

service, optional name of target service. Default is to show status for all services.
stop <service>

Hard stop <service>: Kill the supervisor and its workers with SIGTERM.

<service>, name or ID of target service.
soft-stop <service>

Notify workers they are being disconnected, give them a grace period to close existing connections, then stop the current application.

<service>, name or ID of target service.
soft-restart <service>

Notify workers they are being disconnected, give them a grace period to close existing connections, then restart the current application with new configuration.

<service>, name or ID of target service.
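
For example, to restart the hypothetical myapp service without abruptly dropping existing connections:

slc ctl soft-restart myapp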

tracing-start <service>

Restart all workers with tracing on.

<service>, name or ID of target service.

tracing-stop <service>

Restart all workers with tracing off.

<service>, name or ID of target service.

Worker process sub-commands (apply to a specific worker process)

The argument <worker> is a worker specification; either:

  • <service_id>.1.<worker_id>, where <service_id> is the service ID and <worker_id> is the worker ID.
  • <service_id>.1.<process_id>, where <service_id> is the service ID and <process_id> is the worker process ID (PID).
cpu-start <worker> [timeout [stalls] ]

Start CPU profiling on worker <worker>. Use cpu-stop to save the profile data.

NOTE: Requires Node version 0.11 or higher.

Saves profiling data to a file you can view with Chrome Dev Tools.  See CPU profiling for more information.

<worker>, a worker specification (see above).

Linux only:

[timeout], timeout period (ms) for Smart profiling. Start CPU profiling when the specified process's Node event loop stalls for more than the specified timeout period.

[stalls], number of event loop stalls after which the profiler will be stopped automatically (default is 0, never auto-stop).

For more information, see Smart profiling with slc.

cpu-stop <worker> [filename]

Stop CPU profiling on worker <worker> and save the results to <filename>.cpuprofile.

<worker>, a worker specification (see above).

filename, optional base file name; default is node.<id>.cpuprofile.
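
For example, to profile the first worker of service 1 and then save the data to my-profile.cpuprofile (the worker specification 1.1.1 is illustrative):

slc ctl cpu-start 1.1.1
slc ctl cpu-stop 1.1.1 my-profile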

heap-snapshot <worker> [filename]

Take a heap snapshot of worker <worker> and save the results to <filename>.heapsnapshot.

Saves profiling data to a file you can view with Chrome Dev Tools.  See Heap memory profiling for more information.

<worker>, a worker specification (see above).

filename, optional base file name; default is node.<id>.heapsnapshot.
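
For example, to save a heap snapshot of the same illustrative worker to my-snapshot.heapsnapshot:

slc ctl heap-snapshot 1.1.1 my-snapshot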

objects-start <worker>

Start tracking objects on worker <worker>.

<worker>, a worker specification (see above).

objects-stop <worker>

Stop tracking objects on worker <worker>.

<worker>, a worker specification (see above).
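
For example, to start and later stop object tracking on the same illustrative worker:

slc ctl objects-start 1.1.1
slc ctl objects-stop 1.1.1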

patch <worker> <file>

Apply patch <file> to worker <worker> to get custom metrics.

<worker>, a worker specification (see above).

<file>, file name.

