The StrongLoop Arc Tracing module enables you to analyze performance and execution of Node applications to discover bottlenecks and trace code execution paths. You can display up to five hours of data to discover how applications perform over time.
You can drill down into specific function calls and execution paths for HTTP and database requests, to see how and where your application is spending time. Tracing provides powerful "flame graph" visualization of an application's function call stack and the corresponding execution times to help you track down where the application spends its time.
The Tracing module displays data at several different levels:
- Initially, you see the resource consumption timeline, a chart of CPU use and memory consumption over time.
Click anywhere on the chart to display information on the selected interval.
- You'll then see the Trace sequences view with detailed information on the selected time interval.
Click on one of the trace sequences to drill down into its execution path.
- You'll then see Waterfalls and Synchronous vs. asynchronous waterfall views for that trace sequence.
- Finally, you'll see the Synchronous code flame graph and Synchronous raw data, powerful tools to enable you to pinpoint exactly where your app is spending its time.
The following table lists the backends for which tracing has been tested. It may work with other versions of the stated Node drivers, but has not been tested with them.
| Backend | Node driver |
| --- | --- |
| MongoDB | mongodb 1.x - 2.x |
Contact StrongLoop for a license key to use tracing.
To view tracing data for an app running locally on the same system where you're running Arc, you don't need to do anything further.
To view tracing data for an application running on a remote server, you must push the license key to the remote server running Process Manager. For instructions, see Setting your license key on a remote host.
Procedure to view application tracing
To view tracing for an application:
1. Start StrongLoop Arc.
2. Start an instance of StrongLoop Process Manager (PM), either locally or on a remote server. For example, the easiest way to run an app locally is with slc start in the application's root directory.
   For more information, see Using Process Manager.
3. If you ran Process Manager with slc start, as shown in the example in step 2, skip this step. Otherwise, for Process Manager on a remote server, deploy your application using either Arc or slc. For more information, see Building and deploying.
4. In Arc, choose Process Manager in the module selector and connect to the running PM instance, as described in Connecting to Process Manager from Arc. For example, to connect to the PM running locally as shown in step 2:
   - For Strong PM, enter
   - For Port, enter 8701.
5. In Arc, choose Tracing.
6. Click On to turn on tracing:
You'll see a message like "starting tracing... 0 of 4 processes." It may take a few seconds to load all the processes; once they are loaded, you will see tracing data.
The Host drop-down selector lists Process Managers to which you've connected; by default the first one is selected. Click the Host drop-down selector to choose a different Process Manager.
By default, the first application process ID is selected. Click on another process ID to view data for it.
You will see the application resource consumption timeline.
Resource consumption timeline
This chart shows:
- Blue line: Total CPU use (standard Linux load average over 1 minute for all CPU cores).
- Orange line: Process memory consumption (in MBytes).
An orange triangle at the bottom of the chart indicates a statistical anomaly beyond a three-sigma variation.
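As a point of reference, the three-sigma rule flags samples that fall more than three standard deviations from the mean. Here is a minimal sketch of that idea (illustrative only, not StrongLoop's actual anomaly-detection code):

```javascript
// Sketch of three-sigma anomaly detection: flag samples that deviate
// from the mean by more than three standard deviations.
function threeSigmaAnomalies(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / samples.length;
  const sigma = Math.sqrt(variance);
  // Return the indexes of samples outside mean ± 3 * sigma.
  return samples
    .map((x, i) => (Math.abs(x - mean) > 3 * sigma ? i : -1))
    .filter((i) => i !== -1);
}
```

For a series of mostly-steady memory readings, a single extreme spike would be returned as an anomaly, which is what the orange triangle marks on the chart.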
StrongLoop collects data at approximately 20-second intervals, retaining up to five hours of history. Use this chart to pick the time interval you want to investigate.
Mouse over the chart to display the values at each point in time, and click on a point to display details of that interval.
Click a point on the resource consumption chart to display the trace sequences at that point; for example:
This table lists all the trace sequences active in the application during the selected interval, including HTTP endpoints, database operations, and other external calls such as memcache. The number in a green circle is the number of distinct code paths in the sequence.
Click on a row in the table to display details about that trace sequence.
Click the green arrows at the upper right to move to the next and previous intervals.
The table displays for each trace sequence:
- Total: Time spent in this trace sequence.
- Async: Percentage of time spent waiting for a callback.
- Sync: Percentage of time spent executing synchronous code.
Click on one of the routes to display the code path executed when responding to HTTP requests to that route; for example:
This view displays details for the trace sequence (HTTP route, database operation, or other operation) and how much time the application spent in it.
Click the right arrow to display the next trace sequence. Click the left arrow to display the previous trace sequence. If there is only one execution path for the selected trace sequence, then the arrows are disabled.
Synchronous versus asynchronous waterfall
The top bar is the time spent initiating the original function call. The lower bar is the time spent executing the callback function, and the line connecting the two represents the time spent waiting for a response; for example, for a database server to return a value.
Asynchronous callbacks are represented by lines connecting the time of the callback being scheduled to when it was invoked.
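In code terms, the two bars and the connecting line correspond to the three phases of an asynchronous call. The sketch below is hypothetical (tracedAsyncCall and its timing fields are illustrative, not part of any StrongLoop API):

```javascript
// Hypothetical sketch of the phases the waterfall visualizes:
// synchronous initiation (top bar), the asynchronous wait for a
// response (connecting line), and callback execution (lower bar).
function tracedAsyncCall(done) {
  const started = Date.now();
  // ...synchronous setup work happens here (top bar)...
  const initiated = Date.now();
  setTimeout(function callback() {
    const callbackStarted = Date.now(); // the wait ends (connecting line)
    // ...work inside the callback runs here (lower bar)...
    done({
      initiationMs: initiated - started,   // top bar
      waitMs: callbackStarted - initiated, // connecting line
    });
  }, 20);
}
```

In a real trace the wait would typically be dominated by I/O, such as a database server returning a value.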
Click the bar to display the top costs (in terms of time spent) in the Inspector.
Synchronous code flame graph
The flame graph provides rich information about the function call stack, including application code and the third-party modules it uses, enabling you to see where an app spends the most time.
The flame graph shows each module in a different colored block; for example,
express, loopback, and async would each appear in their own color. Blocks show function names when available. Unnamed callback functions are labeled "anonymous". The selected call is always shown in yellow.
Read the flame graph from the bottom up. The bottom rectangle is the root of the call-stack tree, the application's initial function call, functions it calls are above that, and so on. The horizontal axis is percentage of time executing a function.
Mouse over a bar to display details of that particular call in the Inspector at right.
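Conceptually, a flame graph collapses many sampled call stacks into a tree whose block widths are proportional to sample counts. A rough sketch of that aggregation (illustrative only, not Arc's implementation):

```javascript
// Collapse repeated call stacks into a tree; each node's count determines
// the width of its block in a flame graph.
function buildFlameTree(stacks) {
  // stacks: arrays of function names from the root (index 0) upward.
  const root = { name: 'root', count: 0, children: {} };
  for (const stack of stacks) {
    root.count++;
    let node = root;
    for (const fn of stack) {
      node.children[fn] =
        node.children[fn] || { name: fn, count: 0, children: {} };
      node = node.children[fn];
      node.count++;
    }
  }
  return root;
}

const tree = buildFlameTree([
  ['main', 'handleRequest', 'render'],
  ['main', 'handleRequest', 'render'],
  ['main', 'handleRequest', 'queryDb'],
  ['main', 'gc'],
]);
// main appears in all 4 samples, handleRequest in 3 of 4, render in 2 of 4,
// so their blocks would span 100%, 75%, and 50% of the width respectively.
```

Reading the tree bottom-up mirrors reading the flame graph bottom-up: main sits at the base, with its callees stacked above it.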
Synchronous raw data
Mouse over a bar to display details of that particular call in the Inspector at right. Click on a line to keep the display at that point.
The Inspector displays different information, depending on whether you clicked on a bar in Synchronous vs. Asynchronous Waterfall or Synchronous Code Flame Graph / Raw Data.
Synchronous vs. Asynchronous Waterfall (Event Loop) view
- Name: Module and function with the largest self cost in this waterfall, that is, the function with the longest execution time, not counting time spent executing child functions.
- Cost: Execution time in microseconds.
- Top Costs: Percentage of time spent executing each child function.
Synchronous Code Flame Graph or Raw Data view
- Name: Module and function names.
- Version: Version of the module being used.
- Line and Column: Line and column in the source file where the function call occurs.
- Total/self/child cost: Time spent executing this call stack (total), the function (self), and functions it calls (child). In the flame graph, child calls are shown above the calling function (or "self" when you select that function call).
- Total/self/child percentage: Percentage of time spent in this call stack (total), in the function itself (self), and in the functions it calls (child).
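The relationship among the three costs can be stated simply: a function's total cost is its self cost plus the total costs of its children. A small illustrative sketch (the call tree shown is hypothetical):

```javascript
// total = self + sum of children's totals, for any node in the call tree.
function totalCost(node) {
  const childCost = node.children.reduce((sum, c) => sum + totalCost(c), 0);
  return node.self + childCost;
}

// Hypothetical call stack: handleRequest spends 2 ms in its own code,
// plus 1 ms in queryDb and 3 ms in render.
const stack = {
  name: 'handleRequest',
  self: 2,
  children: [
    { name: 'queryDb', self: 1, children: [] },
    { name: 'render', self: 3, children: [] },
  ],
};
// totalCost(stack) → 6 (self cost 2 + child cost 4)
```

In the flame graph, those child calls (queryDb and render) would be drawn above handleRequest, together filling the portion of its width not accounted for by its self cost.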