StrongLoop Arc and slc are no longer under active development and will soon be deprecated. Arc's features are being included in the IBM API Connect Developer Toolkit; please use it instead.


Splunk searches, monitors, analyzes and visualizes machine-generated big data from websites, applications, servers, networks, sensors and mobile devices. 

You can run Splunk locally, log to a local file or syslog, and configure Splunk to read the logs from there. Alternatively, run Splunk on a remote server and configure the Splunk universal forwarder to securely forward log events to the Splunk server.

Configure application logging

First, install a logging middleware module; for example:
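A minimal sketch, assuming you choose morgan (a popular Express-compatible HTTP request logger; the alternatives below install the same way):

```
$ npm install morgan --save
```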

Alternatively, you can use Bunyan, Winston, strong-logger, or another popular Node logging framework.

Then, add the logging middleware to your application.
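For example, assuming morgan as the logging middleware, a plain Express application would wire it up roughly like this (a sketch, not the exact StrongLoop sample):

```javascript
// Hypothetical sketch: attach an HTTP request logger to an Express app.
var express = require('express');
var morgan = require('morgan');

var app = express();

// Log every request in Apache "combined" format to stdout.
app.use(morgan('combined'));

// Example route; the path is illustrative only.
app.get('/api/Cars', function (req, res) {
  res.json([]);
});

app.listen(3000);
```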

Adding logging middleware to a LoopBack application

For a LoopBack application, add logging middleware as described in Defining middleware:

Edit server/middleware.json and add the logging middleware in the initial phase:
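The entry would look something like this (a sketch assuming morgan; each key in a phase is a middleware module name, with arguments passed via `params`):

```json
{
  "initial": {
    "morgan": {
      "params": ["combined"]
    }
  }
}
```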

Running the application

This example is for a LoopBack application.

Start the application locally:
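For example, using the slc CLI from the application's root directory:

```
$ slc run
```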

If you're running on a remote host, then build and deploy to Process Manager running on your remote host.

Then dump the log to the console as follows:
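A sketch using the Process Manager control CLI; the exact service ID depends on your deployment (1 is assumed here):

```
$ slc ctl log-dump 1
```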

This dumps the last MB of logs to the console.

Now execute a GET request from the API explorer and check the generated log output on the console.


The default middleware logger logs to console output. However, since the application is running in a cluster, it logs to a managed log file called supervisor.log under <application root>.

2014-09-15T22:50:40.109Z pid:1431 worker:1 GET /api/Cars 304 60.964 ms - -

Here you see the logged entry for the API call, with the timestamp, process ID (1431), worker ID (1), response code (304), and response time (60.964 ms).
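If you later want to post-process these managed logs outside Splunk, an entry can be split into its fields with a small Node script (a sketch; the field layout is assumed from the sample line above):

```javascript
// Hypothetical sketch: parse a supervisor.log entry into its fields.
// Format assumed: timestamp, pid, worker, then an Express-style access-log line.
var line = '2014-09-15T22:50:40.109Z pid:1431 worker:1 GET /api/Cars 304 60.964 ms - -';

var m = line.match(/^(\S+) pid:(\d+) worker:(\d+) (\S+) (\S+) (\d{3}) ([\d.]+) ms/);

var entry = {
  timestamp: m[1],              // ISO-8601 timestamp
  pid: Number(m[2]),            // worker process ID
  worker: Number(m[3]),         // cluster worker number
  method: m[4],                 // HTTP method
  url: m[5],                    // request path
  status: Number(m[6]),         // HTTP response code
  responseTimeMs: Number(m[7])  // response time in milliseconds
};

console.log(entry);
```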

Execute a few other API endpoint calls from the Loopback API explorer and check associated log entries.

The log shows two GET operations, /api/Cars/count and /api/Locations, followed by a POST operation in which the application encountered an assertion error.



This error caused the running worker process to crash. However, because the cluster is being managed by StrongLoop Controller, another worker process (1516) is spun up and added to the master. Subsequent API calls are automatically routed to the new worker.

Logging to Syslog

When running an application in production, the StrongLoop Process Manager service logs directly to syslog.

For example, tailing syslog (/var/log/system.log) shows the two worker processes, 1553 and 1554, logging to syslog, whereas the master process, 1551, logs to supervisor.log as usual.

Invoking a GET operation from the API explorer routes the call to worker process 1, as indicated by the log entry in syslog.

Using Splunk

If you don't already have it, download Splunk Enterprise and install the version for your operating system.

Once installed, boot the splunkd daemon process with the Splunk start scripts. For example, if you installed Splunk to /opt/splunk then start Splunk as follows:
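For example (assuming /opt/splunk as the install directory and the default admin credentials, which are placeholders here):

```
$ /opt/splunk/bin/splunk start
$ /opt/splunk/bin/splunk enable listen 9997 -auth admin:changeme
```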

The last command configures the listener port as 9997.

After startup, the Splunk server provides a web-based admin console (by default at http://localhost:8000) and the TCP listener port configured above (9997).

Log in to Splunk using the admin console.


Configure the data input

Splunk can receive log data either through a direct file input or via a listening TCP or UDP port. It has pre-built parsers for standard logs such as syslog. If the Node application logs to a syslog file on the Splunk server itself, you can simply set up that syslog file as a managed data input in Splunk. The steps below set up file-system monitoring of syslog.

This workflow is triggered by selecting the “manage data input” menu item on the Splunk console.


Using the Universal Forwarder

Alternatively, you can use the Splunk Universal Forwarder to forward logs from the server hosting the Node application to a remote Splunk server.

Download the Universal Forwarder and install the version for your operating system.

You can write a small shell script to configure the Universal Forwarder on the Node server as shown below. For example, if the Universal Forwarder is installed under /Applications, the script would look like:
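A sketch of such a script, assuming the forwarder is installed at /Applications/splunkforwarder and the Splunk server's hostname is splunk.example.com (the install path, hostname, and credentials are placeholders):

```
#!/bin/sh
# Point the Universal Forwarder at the Splunk server's listener port (9997)
# and monitor syslog. Replace the host, path, and credentials with your own.
cd /Applications/splunkforwarder/bin
./splunk add forward-server splunk.example.com:9997 -auth admin:changeme
./splunk add monitor /var/log/system.log
./splunk restart
```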

Executing this shell script creates a monitor that automatically forwards syslog events to Splunk. You can use this feature to read any log, including supervisor.log, which stores the aggregated Node logs managed by StrongLoop Controller. If you prefer not to log to syslog, simply point the monitor at the path to supervisor.log instead.

The confirmation output indicates that the monitor for reading syslog has been set up successfully.

View and configure Node.js event data

Execute API calls from the LoopBack API explorer, then log in to Splunk and search for the API calls of interest.

For example, to check all location-related API calls, search for “api/locations”.


You can see the timestamp, process ID, and worker ID, as well as the specific API endpoint calls, along with response times, host, and source. Some calls are simple count operations, while others are findOne() calls for locations, and so on.

Similarly, search for “api/cars” to find all the API endpoints executed for the Cars model, along with associated metrics and server/process/app information.


Splunk provides multiple ways to aggregate the reported event data into dashboards and reports. For more information, see Splunk Dashboards and Visualization.


