LTTng-UST Analyses
Userspace traces are taken at the application level. With kernel traces, the set of possible events is known in advance because the domain is fixed and well-defined; userspace traces, on the other hand, can contain pretty much anything. Some analyses are offered only if certain events are enabled.
The Call Stack view allows the user to visualize the call stack per thread over time, if the application and trace provide this information.
To open this view, go to Window -> Show View -> Other... and select Tracing/Call Stack in the list. The view shows the call stack information for the currently selected trace. Alternatively, you can select the trace in the Project Explorer, expand it (the trace must be loaded), expand LTTng-UST CallStack Analysis and open Call Stack.
The table on the left-hand side of the view shows the threads and call stack. The function name, depth, entry and exit time and duration are shown for the call stack at the selected time.
Double-clicking on a function entry in the table will zoom the time graph to the selected function's range of execution.
The time graph on the right-hand side of the view shows the call stack state graphically over time. The function name is visible on each call stack event if size permits. The color of each call stack event is randomly assigned based on the function name, allowing for easy identification of repeated calls to the same function.
Clicking on the time graph will set the current time and consequently update the table with the current call stack information.
Shift-clicking on the time graph will select a time range. When the selection is a time range, the begin time is used to update the stack information.
Double-clicking on a call stack event will zoom the time graph to the selected function's range of execution.
Clicking the Select Next Event or Select Previous Event buttons, or using the left and right arrow keys, will navigate to the next or previous call stack event and select the function currently at the top of the call stack. Note that pressing the Shift key at the same time will update the end time of the current selection instead.
Clicking the Import Mapping File icon will open a file selection dialog, allowing you to import a text file containing mappings from function addresses to function names. If the call stack provider for the current trace type only provides function addresses, a mapping file is required to get the function names in the view. See the following sections for an example with LTTng-UST traces.
There is support in the LTTng-UST integration plugin to display the call stack of applications traced with the liblttng-ust-cyg-profile.so library (see the liblttng-ust-cyg-profile man page for additional information). To do so, you need to enable the userspace events, add the vpid, vtid and procname contexts, and preload the library when running your program:
lttng enable-event -u -a
lttng add-context -u -t vpid -t vtid -t procname
LD_PRELOAD=/usr/lib/liblttng-ust-cyg-profile.so ./myprogram
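For reference, here is a minimal end-to-end session sketch built from the commands above; the session name is arbitrary and a running lttng-sessiond is assumed:
lttng create cyg-profile-demo   # hypothetical session name
lttng enable-event -u -a
lttng add-context -u -t vpid -t vtid -t procname
lttng start
LD_PRELOAD=/usr/lib/liblttng-ust-cyg-profile.so ./myprogram
lttng stop
lttng destroy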
Once you load the resulting trace, the Callstack View should be populated with the relevant information.
Note that for non-trivial applications, liblttng-ust-cyg-profile generates a lot of events! You may need to increase the channel's subbuffer size to avoid lost events. Refer to the LTTng documentation.
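As a sketch of one way to do this (the channel name and sizes below are arbitrary, not taken from the LTTng documentation), a dedicated channel with larger sub-buffers can be created before enabling the events in it:
lttng enable-channel -u --subbuf-size 2M --num-subbuf 8 big-channel   # hypothetical channel name and sizes
lttng enable-event -u -a -c big-channel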
For traces taken with LTTng-UST 2.8 or later, the Callstack View should show the function names automatically, since it will make use of the debug information statedump events (which are enabled when using enable-event -u -a).
For traces taken with prior versions of UST, you would need to set the path to the binary file or mapping manually:
If you followed the steps in the previous section, you should have a Callstack View populated with function entries and exits. However, the view will display the function addresses instead of names in the intervals, which are not very useful by themselves. To get the actual function names, either point the symbol configuration (the Configure how addresses are mapped to function names button, described below) to the binary that was traced, or generate a mapping file from that binary and import it with the Import Mapping File button:
nm myprogram > mapping.txt
(If you are dealing with C++ executables, you may want to use nm --demangle instead to get readable function names.)
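For illustration only, a mapping file is simply the text output of nm, one symbol per line (address, symbol type, symbol name); the function names below are made up:
0000000000400a60 T main
0000000000400b20 T process_input
0000000000400c40 t helper_function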
The view should now update to display the function names instead. Make sure the binary used for taking the trace is the one used for this step too (otherwise, there is a good chance of the addresses not being the same).
See Control Flow View's Using the mouse, Using the keyboard and Zoom region.
See Control Flow View's Marker Axis.
The Memory Usage view allows the user to visualize the active memory usage per thread over time, if the application and trace provide this information.
The view shows the memory consumption for the currently selected trace.
The time chart plots heap memory usage graphically over time. There is one line per process; unassigned memory usage is mapped to "Other".
To use this view, the application must be traced with the liblttng-ust-libc-wrapper library preloaded, by running LD_PRELOAD=liblttng-ust-libc-wrapper.so <exename>. This adds tracepoints to heap memory allocation and freeing, NOT to shared memory or stack usage. If the vtid and procname contexts are enabled, the view will associate the heap usage with individual processes. To enable these contexts, see the Adding Contexts to Channels and Events of a Domain section, or use the command line:
lttng add-context -u -t vtid -t procname
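The heap tracepoints themselves must also be enabled in the session. A minimal sketch, assuming the event names provided by liblttng-ust-libc-wrapper (such as lttng_ust_libc:malloc and lttng_ust_libc:free):
lttng enable-event -u 'lttng_ust_libc:*'   # quoted so the shell does not expand the wildcard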
If thread information is available, the view shows the heap usage broken down per thread; if it is not available, all usage is grouped under "Other".
The time axis is aligned with other views that support automatic time axis alignment (see Automatic Time Axis Alignment).
Please note this view will not show shared memory or stack memory usage.
The Memory Usage chart can be manipulated using the mouse.
The Memory Usage View toolbar, located at the top right of the view, has shortcut buttons to perform common actions:
Align Views: disables or enables the automatic time axis alignment of time-based views. Disabling the alignment in this view disables the feature across all views, because it is a workspace preference.
Starting with LTTng 2.8, the tracer can now provide enough information to associate trace events with their location in the original source code.
To make use of this feature, first make sure your binaries are compiled with debug information (-g), so that the instruction pointers can be mapped to source code locations. The lookup is done with the addr2line command-line utility, which needs to be installed and on the $PATH of the system running Trace Compass. addr2line is available in most Linux distributions, on Mac OS X, on Windows (using Cygwin) and on other platforms.
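To illustrate what this lookup amounts to, the following command is roughly equivalent to what is performed for each address (the binary path and address are hypothetical):
addr2line -e ./myprogram -f -C 0x4005f0   # prints the function name and the file:line it maps to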
The debug information statedump events (lttng_ust_statedump:*) need to be present in the trace, as well as the vpid and ip contexts.
For ease of use, you can simply enable all the UST events and add the required contexts when setting up your session:
lttng enable-event -u -a
lttng add-context -u -t vpid -t ip
Note that you can also create and configure your session using the Control View.
If you want to track source locations in shared libraries loaded by the application, you also need to enable the "lttng_ust_dl:*" events, as well as preload the UST library providing them when running your program:
LD_PRELOAD=/path/to/liblttng-ust-dl.so ./myprogram
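The corresponding enable-event command is simply the following sketch (the wildcard is quoted so the shell does not expand it):
lttng enable-event -u 'lttng_ust_dl:*'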
If all the required information is present, then the Source Location column of the Event Table should be populated accordingly, and the Open Source Code action should be available. Refer to the section Event Source Lookup for more details.
The Binary Location information should be present even if the original binaries are not available, since it only makes use of information found in the trace. A + denotes a relative address (i.e. an offset within the object itself), whereas a @ denotes an absolute address, for non-position-independent objects.
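As a made-up illustration of this notation (the file names and offsets below are hypothetical), the two forms look like:
/usr/lib/libmylib.so+0x2f40 (relative: an offset within the shared object)
/usr/bin/myprogram@0x400d2f (absolute: an address in a non-position-independent executable)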
Example of a trace with debug info and corresponding Source Lookup information, showing a tracepoint originating from a shared library
To resolve addresses to function names and source code locations, the analysis makes use of the binary files (executables or shared libraries) present on the system. By default, it looks for the file paths as they are recorded in the trace, which means that it should work out-of-the-box if the trace was taken on the same machine on which Trace Compass is running.
It is possible to configure a root directory that will be used as a prefix for all file path resolutions. The button to open the configuration dialog is called Configure how addresses are mapped to function names and is currently located in the Call Stack View. Note that the Call Stack View will also make use of this configuration to resolve its function names.
The symbol configuration dialog for LTTng-UST 2.8+ traces
This can be useful if a trace was taken on a remote target, and an image of that target is available locally.
If a binary file is being traced on a target, the paths in the trace will refer to paths on the target, for example:
/usr/bin/myprogram
/usr/lib/libmylib.so
If an image of that target is copied locally on the system at /home/user/project/image, the binaries above end up at:
/home/user/project/image/usr/bin/myprogram
/home/user/project/image/usr/lib/libmylib.so
Selecting the /home/user/project/image directory in the configuration dialog above will then allow Trace Compass to read the debug symbols correctly.
Note that this path prefix will apply to both binary file and source file locations, which may or may not be desirable.