author    Zeno Albisser <zeno.albisser@digia.com>  2013-08-15 21:46:11 +0200
committer Zeno Albisser <zeno.albisser@digia.com>  2013-08-15 21:46:11 +0200
commit    679147eead574d186ebf3069647b4c23e8ccace6 (patch)
tree      fc247a0ac8ff119f7c8550879ebb6d3dd8d1ff69 /chromium/third_party/lcov-1.9
download  qtwebengine-chromium-679147eead574d186ebf3069647b4c23e8ccace6.tar.gz
Initial import.
Diffstat (limited to 'chromium/third_party/lcov-1.9')
-rw-r--r--  chromium/third_party/lcov-1.9/CHANGES                              419
-rw-r--r--  chromium/third_party/lcov-1.9/COPYING                              339
-rw-r--r--  chromium/third_party/lcov-1.9/Makefile                              99
-rw-r--r--  chromium/third_party/lcov-1.9/README                               137
-rw-r--r--  chromium/third_party/lcov-1.9/README.chromium                       14
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/gendesc                          226
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/genhtml                         5648
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/geninfo                         3068
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/genpng                           384
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/install.sh                        71
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/lcov                            4175
-rwxr-xr-x  chromium/third_party/lcov-1.9/bin/updateversion.pl                 146
-rw-r--r--  chromium/third_party/lcov-1.9/contrib/galaxy/CHANGES                 1
-rw-r--r--  chromium/third_party/lcov-1.9/contrib/galaxy/README                 48
-rwxr-xr-x  chromium/third_party/lcov-1.9/contrib/galaxy/conglomerate_functions.pl  195
-rwxr-xr-x  chromium/third_party/lcov-1.9/contrib/galaxy/gen_makefile.sh       129
-rwxr-xr-x  chromium/third_party/lcov-1.9/contrib/galaxy/genflat.pl           1238
-rwxr-xr-x  chromium/third_party/lcov-1.9/contrib/galaxy/posterize.pl          312
-rw-r--r--  chromium/third_party/lcov-1.9/descriptions.tests                  2990
-rw-r--r--  chromium/third_party/lcov-1.9/example/Makefile                      98
-rw-r--r--  chromium/third_party/lcov-1.9/example/README                         6
-rw-r--r--  chromium/third_party/lcov-1.9/example/descriptions.txt              10
-rw-r--r--  chromium/third_party/lcov-1.9/example/example.c                     60
-rw-r--r--  chromium/third_party/lcov-1.9/example/gauss.h                        6
-rw-r--r--  chromium/third_party/lcov-1.9/example/iterate.h                      6
-rw-r--r--  chromium/third_party/lcov-1.9/example/methods/gauss.c               48
-rw-r--r--  chromium/third_party/lcov-1.9/example/methods/iterate.c             45
-rw-r--r--  chromium/third_party/lcov-1.9/lcovrc                               130
-rw-r--r--  chromium/third_party/lcov-1.9/man/gendesc.1                         78
-rw-r--r--  chromium/third_party/lcov-1.9/man/genhtml.1                        502
-rw-r--r--  chromium/third_party/lcov-1.9/man/geninfo.1                        366
-rw-r--r--  chromium/third_party/lcov-1.9/man/genpng.1                         101
-rw-r--r--  chromium/third_party/lcov-1.9/man/lcov.1                           707
-rw-r--r--  chromium/third_party/lcov-1.9/man/lcovrc.5                         608
-rw-r--r--  chromium/third_party/lcov-1.9/rpm/lcov.spec                         48
35 files changed, 22458 insertions, 0 deletions
diff --git a/chromium/third_party/lcov-1.9/CHANGES b/chromium/third_party/lcov-1.9/CHANGES
new file mode 100644
index 00000000000..1ff82400fa1
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/CHANGES
@@ -0,0 +1,419 @@
+Version 1.9
+===========
+
+genhtml:
+- Improved wording for branch representation tooltip text
+- Fixed vertical alignment of HTML branch representation
+
+geninfo:
+- Improved warning message about --initial not generating branch coverage data
+- Debugging messages are now printed to STDERR instead of STDOUT
+- Fixed problem with some .gcno files. Reported by gui@futarque.com.
+ (file.gcno: reached unexpected end of file)
+- Fixed problem with relative build paths. Reported by zhanbiao2000@gmail.com.
+ (cannot find an entry for ^#src#test.c.gcov in .gcno file, skipping file!)
+- Fixed problem where coverage data is missing for some files. Reported by
+ weston_schmidt@open-roadster.com
+- Fixed problem where exclusion markers are ignored when gathering
+ initial coverage data. Reported by ahmed_osman@mentor.com.
+- Fixed large execution counts showing as negative numbers in HTML output.
+ Reported by kkyriako@yahoo.com.
+- Fixed problem that incorrectly associated branches outside of a block with
+ branches inside the first block
+
+lcov:
+- Fixed problem that made lcov ignore --kernel-directory parameters when
+ specifying --initial. Reported by hjia@redhat.com.
+- Added --list-full-path option to prevent lcov from truncating paths in list
+ output
+- Added lcov_list_width and lcov_list_truncate_max directives to the
+ lcov configuration file to allow for list output customization
+- Improved list output
+
+COPYING:
+- Added license text to better comply with GPL recommendations
+
+
+Version 1.8
+===========
+
+gendesc:
+- Fixed problem with single word descriptions
+
+genhtml:
+- Added support for branch coverage measurements
+- Added --demangle-cpp option to convert C++ function names to human readable
+ format. Based on a patch by slava.semushin@gmail.com.
+- Improved color legend: legend display takes up less space in HTML output
+- Improved coverage rate limits: all coverage types use the same limits
+ unless specified otherwise
+- Fixed CRLF line breaks in source code when generating html output. Based
+ on patch by michael.knigge@set-software.de.
+- Fixed warning when $HOME is not set
+- Fixed problem with --baseline-file option. Reported by sixarm@gmail.com.
+ (Undefined subroutine &main::add_fnccounts called at genhtml line 4560.)
+- Fixed problem with --baseline-file option and files without function
+ coverage data (Can't use an undefined value as a HASH reference at genhtml
+ line 4441.)
+- Fixed short-name option ambiguities
+- Fixed --highlight option not showing line data from converted test data
+- Fixed warnings about undefined value used. Reported by nikita@zhuk.fi.
+- Fixed error when processing tracefiles without function data. Reported
+ by richard.corden@gmail.com (Can't use an undefined value as a HASH
+ reference at genhtml line 1506.)
+
+geninfo:
+- Added support for branch coverage measurements
+- Added support for exclusion markers: Users can exclude lines of code from
+ coverage reports by adding keywords to the source code.
+- Added --derive-func-data option
+- Added --debug option to better debug problems with graph files
+- Fixed CRLF line breaks in source code when generating tracefiles. Based on
+ patch by michael.knigge@set-software.de.
+- Fixed problems with unnamed source files
+- Fixed warning when $HOME is not set. Reported by acalando@free.fr.
+- Fixed errors when processing unnamed source files
+- Fixed help text typo
+- Fixed errors when processing incomplete function names in .bb files
+- Fixed filename prefix detection
+- Fixed problem with matching filename
+- Fixed problem when LANG is set to non-english locale. Reported by
+ benoit_belbezet@yahoo.fr.
+- Fixed short-name option ambiguities
+
+genpng:
+- Fixed runtime-warning
+
+lcov:
+- Added support for branch coverage measurements
+- Added support for the linux-2.6.31 upstream gcov kernel support
+- Added --from-package and --to-package options
+- Added --derive-func-data option
+- Added overall coverage result output for more operations
+- Improved output of lcov --list
+- Improved gcov-kernel handling
+- Fixed minor problem with --diff
+- Fixed double-counting of function data
+- Fixed warning when $HOME is not set. Reported by acalando@free.fr.
+- Fixed error when combining tracefiles without function data. Reported by
+ richard.corden@gmail.com. (Can't use an undefined value as a HASH reference
+ at lcov line 1341.)
+- Fixed help text typo
+- Fixed filename prefix detection
+- Fixed lcov ignoring information about converted test data
+
+README:
+- Added note to mention required -lgcov switch during linking
+
+
+Version 1.7:
+============
+
+gendesc:
+- Updated error and warning messages
+- Updated man page
+
+genhtml:
+- Added function coverage data display patch by tomzo@nefkom.net (default is on)
+- Added --function-coverage to enable function coverage display
+- Added --no-function-coverage to disable function coverage display
+- Added sorting option in HTML output (default is on)
+- Added --sort to enable sorting
+- Added --no-sort to disable sorting
+- Added --html-gzip to create gzip-compressed HTML output (patch by
+ dnozay@vmware.com)
+- Fixed problem when using --baseline-file on coverage data files that
+ contain data for files not found in the baseline file
+- Updated error and warning messages
+- Updated man page
+
+geninfo:
+- Added function coverage data collection patch by tomzo@nefkom.net
+- Added more verbose output when an "ERROR: reading string" error occurs
+  (patch by scott.heavner@philips.com)
+- Fixed geninfo not working with directory names containing spaces (reported
+ by jeffconnelly@users.sourceforge.net)
+- Fixed "ERROR: reading string" problem with gcc 4.1
+- Fixed problem with function names that contain non-alphanumerical characters
+- Fixed problem with gcc versions before 3.3
+- Updated error and warning messages
+- Updated man page
+
+genpng:
+- Updated error and warning messages
+- Updated man page
+
+lcov:
+- Added support for function coverage data for adding/diffing tracefiles
+- Added --no-recursion option to disable recursion into sub-directories
+ while scanning for gcov data files
+- Fixed lcov -z not working with directory names containing spaces (reported
+ by Jeff Connelly)
+- Updated error and warning messages
+- Updated man page
+
+lcov.spec:
+- Updated description and title information
+
+lcovrc:
+- Added genhtml_function_hi_limit
+- Added genhtml_function_med_limit
+- Added genhtml_function_coverage
+- Added genhtml_sort
+- Updated man page
+
+Makefile:
+- Updated info text
+
+
+Version 1.6:
+============
+
+geninfo:
+- Added libtool compatibility patch by thomas@apestaart.org (default is on)
+- Added --compat-libtool option to enable libtool compatibility mode
+- Added --no-compat-libtool option to disable libtool compatibility mode
+- Changed default for line checksumming to off
+- Added --checksum option to enable line checksumming
+- Added --gcov-tool option
+- Added --ignore-errors option
+- Added --initial option to generate zero coverage from graph files
+- Removed automatic test name modification on s390
+- Added --checksum option
+- Updated man page
+
+lcov:
+- Added libtool compatibility patch by thomas@apestaart.org
+- Added --compat-libtool option to enable libtool compatibility mode
+- Added --no-compat-libtool option to disable libtool compatibility mode
+- Added --checksum option to enable line checksumming
+- Added --gcov-tool option
+- Added --ignore-errors option
+- Added --initial option to generate zero coverage from graph files
+- Updated help text
+- Updated man page
+- Fixed lcov not working when -k is specified more than once
+- Fixed lcov not deleting .gcda files when specifying -z and -d
+
+lcovrc:
+- Added geninfo_compat_libtool option
+- Added geninfo_checksum option
+- Removed geninfo_no_checksum option from example lcovrc
+- Updated man page
+
+README:
+- Added description of lcovrc file
+
+
+Version 1.5:
+============
+
+genhtml:
+- Added check for invalid characters in test names
+- Added --legend option
+- Added --html-prolog option
+- Added --html-epilog option
+- Added --html-extension option
+- Added warning when specifying --no-prefix and --prefix
+- Reworked help text to make it more readable
+
+geninfo:
+- Renamed 'sles9' compatibility mode to 'hammer' compatibility mode
+- Added support for mandrake gcc 3.3.2
+- Fixed bbg file reading in hammer compatibility mode
+- Added check for invalid characters in test names
+- Added --base-directory option
+
+lcov:
+- Added check for invalid characters in test names
+- Added --base-directory option
+
+
+Version 1.4:
+============
+
+All:
+- Added configuration file support
+
+genhtml:
+- Fixed help text message
+- Fixed handling of special characters in file- and directory names
+- Added description of --css-file option to man page
+
+geninfo:
+- Added support for GCOV file format as used by GCC 3.3.3 on SUSE SLES9
+- Fixed error text message
+- Added check to abort processing if no source code file is available
+- Added workaround for a problem where geninfo could not find source code
+ files for a C++ project
+- Fixed 'branch'-statement parsing for GCC>=3.3 .gcov files
+- Fixed exec count-statement parsing for GCC>=3.3 .gcov files
+- Fixed .gcno-file parser (some lines were not counted as being instrumented)
+
+lcov:
+- Modified path for temporary files from '.' to '/tmp'
+- Fixed comments
+- Removed unused function 'escape_shell'
+
+lcovrc:
+- Introduced sample configuration file
+
+Makefile:
+- Added rule to install configuration file
+- Fixed installation path for man pages
+
+
+Version 1.3:
+============
+
+All:
+- Added compatibility for gcc-3.4
+
+lcov:
+- Modified --diff function to better cope with ambiguous entries in patch files
+- Modified --capture option to use modprobe before insmod (needed for 2.6)
+- Added --path option required for --diff function
+
+
+Version 1.2:
+============
+
+All:
+- Added compatibility for gcc-3.3
+- Adjusted LCOV-URL (http://ltp.sourceforge.net/coverage/lcov.php)
+- Minor changes to whitespaces/line breaks/spelling
+- Modified call mechanism so that parameters for external commands are not
+ parsed by the shell mechanism anymore (no more problems with special
+ characters in paths/filenames)
+- Added checksumming mechanism: each tracefile now contains a checksum for
+ each instrumented line to detect incompatible data
+
+Makefile:
+- Added rule to build source RPM
+- Changed install path for executables (/usr/local/bin -> /usr/bin)
+
+lcov.spec:
+- Modified to support building source rpms
+
+updateversion.pl:
+- Modified to include update of release number in spec file
+
+genhtml:
+- Fixed bug which would not correctly associate data sets with an empty
+ test name (only necessary when using --show-details in genhtml)
+- Implemented new command line option '--nochecksum' to suppress generation
+ of checksums
+- Implemented new command line option '--highlight' which highlights lines of
+ code which were only covered in converted tracefiles (see '--diff' option of
+ lcov)
+
+geninfo:
+- Added workaround for a bug in gcov shipped with gcc-3.2 which aborts when
+ encountering empty .da files
+- Fixed geninfo so that it does not abort after encountering empty .bb files
+- Added function to collect branch coverage data
+- Added check for gcov tool
+- Added check for the '--preserve-paths' option of gcov; if available, this
+ will prevent losing .gcov files in some special cases (include files with
+ same name in different subdirectories)
+- Implemented new command line option '--follow' to control whether or not
+ links should be followed while searching for .da files.
+- Implemented new command line option '--nochecksum' to suppress generation
+ of checksums
+
+lcov:
+- Fixed bug which would not correctly associate data sets with an empty
+ test name (only necessary when using --show-details in genhtml)
+- Cleaned up internal command line option check
+- Files are now processed in alphabetical order
+- Added message when reading tracefiles
+- Implemented new command line option '--nochecksum' to suppress generation
+ of checksums
+- Implemented new command line option '--diff' which allows converting
+ coverage data from an older source code version by using a diff file
+ to map line numbers
+- Implemented new command line option '--follow' to control whether or not
+ links should be followed while searching for .da files.
+
+genpng:
+- Added support for the highlighting option of genhtml
+- Corrected tab to spaces conversion
+- Modified genpng to take number of spaces to use in place of tab as input
+ instead of replacement string
+
+
+Version 1.1:
+============
+
+- Added CHANGES file
+- Added Makefile implementing the following targets:
+ * install : install LCOV scripts and man pages
+ * uninstall : revert previous installation
+ * dist : create lcov.tar.gz file and lcov.rpm file
+ * clean : clean up example directory, remove .tar and .rpm files
+- Added man pages for all scripts
+- Added example program to demonstrate the use of LCOV with a userspace
+ application
+- Implemented RPM build process
+- New directory structure:
+ * bin : contains all executables
+ * example : contains a userspace example for LCOV
+ * man : contains man pages
+ * rpm : contains files required for the RPM build process
+- LCOV-scripts are now in bin/
+- Removed .pl-extension from LCOV-script files
+- Renamed readme.txt to README
+
+README:
+- Adjusted mailing list address to ltp-coverage@lists.sourceforge.net
+- Fixed incorrect parameter '--output-filename' in example LCOV call
+- Removed tool descriptions and turned them into man pages
+- Installation instructions now refer to RPM and tarball
+
+descriptions.tests:
+- Fixed some spelling errors
+
+genhtml:
+- Fixed bug which resulted in an error when trying to combine .info files
+ containing data without a test name
+- Fixed bug which would not correctly handle data files in directories
+ with names containing some special characters ('+', etc.)
+- Added check for empty tracefiles to prevent division by zero
+- Implemented new command line option '--num-spaces': the number of spaces
+  that replace a tab in source code view is now user defined
+- Fixed tab expansion so that in source code view, a tab doesn't produce a
+ fixed number of spaces, but as many spaces as are needed to advance to the
+ next tab position
+- Output directory is now created if it doesn't exist
+- Renamed "overview page" to "directory view page"
+- HTML output pages are now titled "LCOV" instead of "GCOV"
+- Information messages are now printed to STDERR instead of STDOUT
+
+geninfo:
+- Fixed bug which would not allow .info files to be generated in directories
+ with names containing some special characters
+- Information messages are now printed to STDERR instead of STDOUT
+
+lcov:
+- Fixed bug which would cause lcov to fail when the tool is installed in
+ a path with a name containing some special characters
+- Implemented new command line option '--add-tracefile' which allows the
+ combination of data from several tracefiles
+- Implemented new command line option '--list' which lists the contents
+ of a tracefile
+- Implemented new command line option '--extract' which allows extracting
+ data for a particular set of files from a tracefile
+- Implemented new command line option '--remove' which allows removing
+ data for a particular set of files from a tracefile
+- Renamed '--reset' to '--zerocounters' to avoid a naming ambiguity with
+ '--remove'
+- Changed name of gcov kernel directory from /proc/gcov to a global constant
+ so that it may be changed easily when required in future versions
+- Information messages are now printed to STDERR instead of STDOUT
+
+
+Version 1.0 (2002-09-05):
+=========================
+
+- Initial version
+
diff --git a/chromium/third_party/lcov-1.9/COPYING b/chromium/third_party/lcov-1.9/COPYING
new file mode 100644
index 00000000000..d511905c164
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/COPYING
@@ -0,0 +1,339 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
diff --git a/chromium/third_party/lcov-1.9/Makefile b/chromium/third_party/lcov-1.9/Makefile
new file mode 100644
index 00000000000..93fc4192073
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/Makefile
@@ -0,0 +1,99 @@
+#
+# Makefile for LCOV
+#
+# Make targets:
+# - install: install LCOV tools and man pages on the system
+# - uninstall: remove tools and man pages from the system
+# - dist: create files required for distribution, i.e. the lcov.tar.gz
+# and the lcov.rpm file. Just make sure to adjust the VERSION
+# and RELEASE variables below - both version and date strings
+# will be updated in all necessary files.
+# - clean: remove all generated files
+#
+
+VERSION := 1.9
+RELEASE := 1
+
+CFG_DIR := $(PREFIX)/etc
+BIN_DIR := $(PREFIX)/usr/bin
+MAN_DIR := $(PREFIX)/usr/share/man
+TMP_DIR := /tmp/lcov-tmp.$(shell echo $$$$)
+FILES := $(wildcard bin/*) $(wildcard man/*) README CHANGES Makefile \
+ $(wildcard rpm/*) lcovrc
+
+.PHONY: all info clean install uninstall rpms
+
+all: info
+
+info:
+ @echo "Available make targets:"
+ @echo " install : install binaries and man pages in PREFIX (default /)"
+ @echo " uninstall : delete binaries and man pages from PREFIX (default /)"
+ @echo " dist : create packages (RPM, tarball) ready for distribution"
+
+clean:
+ rm -f lcov-*.tar.gz
+ rm -f lcov-*.rpm
+ make -C example clean
+
+install:
+ bin/install.sh bin/lcov $(BIN_DIR)/lcov -m 755
+ bin/install.sh bin/genhtml $(BIN_DIR)/genhtml -m 755
+ bin/install.sh bin/geninfo $(BIN_DIR)/geninfo -m 755
+ bin/install.sh bin/genpng $(BIN_DIR)/genpng -m 755
+ bin/install.sh bin/gendesc $(BIN_DIR)/gendesc -m 755
+ bin/install.sh man/lcov.1 $(MAN_DIR)/man1/lcov.1 -m 644
+ bin/install.sh man/genhtml.1 $(MAN_DIR)/man1/genhtml.1 -m 644
+ bin/install.sh man/geninfo.1 $(MAN_DIR)/man1/geninfo.1 -m 644
+ bin/install.sh man/genpng.1 $(MAN_DIR)/man1/genpng.1 -m 644
+ bin/install.sh man/gendesc.1 $(MAN_DIR)/man1/gendesc.1 -m 644
+ bin/install.sh man/lcovrc.5 $(MAN_DIR)/man5/lcovrc.5 -m 644
+ bin/install.sh lcovrc $(CFG_DIR)/lcovrc -m 644
+
+uninstall:
+ bin/install.sh --uninstall bin/lcov $(BIN_DIR)/lcov
+ bin/install.sh --uninstall bin/genhtml $(BIN_DIR)/genhtml
+ bin/install.sh --uninstall bin/geninfo $(BIN_DIR)/geninfo
+ bin/install.sh --uninstall bin/genpng $(BIN_DIR)/genpng
+ bin/install.sh --uninstall bin/gendesc $(BIN_DIR)/gendesc
+ bin/install.sh --uninstall man/lcov.1 $(MAN_DIR)/man1/lcov.1
+ bin/install.sh --uninstall man/genhtml.1 $(MAN_DIR)/man1/genhtml.1
+ bin/install.sh --uninstall man/geninfo.1 $(MAN_DIR)/man1/geninfo.1
+ bin/install.sh --uninstall man/genpng.1 $(MAN_DIR)/man1/genpng.1
+ bin/install.sh --uninstall man/gendesc.1 $(MAN_DIR)/man1/gendesc.1
+ bin/install.sh --uninstall man/lcovrc.5 $(MAN_DIR)/man5/lcovrc.5
+ bin/install.sh --uninstall lcovrc $(CFG_DIR)/lcovrc
+
+dist: lcov-$(VERSION).tar.gz lcov-$(VERSION)-$(RELEASE).noarch.rpm \
+ lcov-$(VERSION)-$(RELEASE).src.rpm
+
+lcov-$(VERSION).tar.gz: $(FILES)
+ mkdir $(TMP_DIR)
+ mkdir $(TMP_DIR)/lcov-$(VERSION)
+ cp -r * $(TMP_DIR)/lcov-$(VERSION)
+ find $(TMP_DIR)/lcov-$(VERSION) -name CVS -type d | xargs rm -rf
+ make -C $(TMP_DIR)/lcov-$(VERSION) clean
+ bin/updateversion.pl $(TMP_DIR)/lcov-$(VERSION) $(VERSION) $(RELEASE)
+ cd $(TMP_DIR) ; \
+ tar cfz $(TMP_DIR)/lcov-$(VERSION).tar.gz lcov-$(VERSION)
+ mv $(TMP_DIR)/lcov-$(VERSION).tar.gz .
+ rm -rf $(TMP_DIR)
+
+lcov-$(VERSION)-$(RELEASE).noarch.rpm: rpms
+lcov-$(VERSION)-$(RELEASE).src.rpm: rpms
+
+rpms: lcov-$(VERSION).tar.gz
+ mkdir $(TMP_DIR)
+ mkdir $(TMP_DIR)/BUILD
+ mkdir $(TMP_DIR)/RPMS
+ mkdir $(TMP_DIR)/SOURCES
+ mkdir $(TMP_DIR)/SRPMS
+ cp lcov-$(VERSION).tar.gz $(TMP_DIR)/SOURCES
+ cd $(TMP_DIR)/BUILD ; \
+ tar xfz $(TMP_DIR)/SOURCES/lcov-$(VERSION).tar.gz \
+ lcov-$(VERSION)/rpm/lcov.spec
+ rpmbuild --define '_topdir $(TMP_DIR)' \
+ -ba $(TMP_DIR)/BUILD/lcov-$(VERSION)/rpm/lcov.spec
+ mv $(TMP_DIR)/RPMS/noarch/lcov-$(VERSION)-$(RELEASE).noarch.rpm .
+ mv $(TMP_DIR)/SRPMS/lcov-$(VERSION)-$(RELEASE).src.rpm .
+ rm -rf $(TMP_DIR)
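One subtlety in the Makefile above is the TMP_DIR definition: in `$(shell echo $$$$)`, Make first collapses each `$$` to a literal `$`, so the shell runs `echo $$` and prints its own PID, yielding a unique per-run scratch directory. A minimal sketch of the same idiom (demo.mk is a throwaway name, not part of lcov):

```shell
# Make collapses "$$$$" to "$$"; the shell then expands "$$" to its PID,
# so every make invocation gets a distinct /tmp/lcov-tmp.<pid> name.
cat > demo.mk <<'EOF'
TMP_DIR := /tmp/lcov-tmp.$(shell echo $$$$)
$(info $(TMP_DIR))
all: ;
EOF
make -f demo.mk
```

Because the PID differs per invocation, repeated `dist` or `rpms` runs cannot collide in /tmp.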
diff --git a/chromium/third_party/lcov-1.9/README b/chromium/third_party/lcov-1.9/README
new file mode 100644
index 00000000000..654216295ca
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/README
@@ -0,0 +1,137 @@
+-------------------------------------------------
+- README file for the LTP GCOV extension (LCOV) -
+- Last changes: 2010-08-06 -
+-------------------------------------------------
+
+Description
+-----------
+ LCOV is an extension of GCOV, a GNU tool which provides information about
+ what parts of a program are actually executed (i.e. "covered") while running
+ a particular test case. The extension consists of a set of Perl scripts
+ which build on the textual GCOV output to implement the following enhanced
+ functionality:
+
+ * HTML based output: coverage rates are additionally indicated using bar
+ graphs and specific colors.
+
+ * Support for large projects: overview pages allow quick browsing of
+ coverage data by providing three levels of detail: directory view,
+ file view and source code view.
+
+ LCOV was initially designed to support Linux kernel coverage measurements,
+ but works as well for coverage measurements on standard user space
+ applications.
+
+
+Further README contents
+-----------------------
+ 1. Important files
+ 2. Installing LCOV
+ 3. An example of how to access kernel coverage data
+ 4. An example of how to access coverage data for a user space program
+ 5. Questions and Comments
+
+
+
+1. Important files
+------------------
+ README - This README file
+ CHANGES - List of changes between releases
+ bin/lcov - Tool for capturing LCOV coverage data
+ bin/genhtml - Tool for creating HTML output from LCOV data
+ bin/gendesc - Tool for creating description files as used by genhtml
+ bin/geninfo - Internal tool (creates LCOV data files)
+ bin/genpng - Internal tool (creates png overviews of source files)
+ bin/install.sh - Internal tool (takes care of un-/installing)
+ descriptions.tests - Test descriptions for the LTP suite, use with gendesc
+ man - Directory containing man pages for included tools
+ example - Directory containing an example to demonstrate LCOV
+ lcovrc - LCOV configuration file
+ Makefile - Makefile providing 'install' and 'uninstall' targets
+
+
+2. Installing LCOV
+------------------
+The LCOV package is available as either RPM or tarball from:
+
+ http://ltp.sourceforge.net/coverage/lcov.php
+
+To install the tarball, unpack it to a directory and run:
+
+ make install
+
+Use anonymous CVS for the most recent (but possibly unstable) version:
+
+ cvs -d:pserver:anonymous@ltp.cvs.sourceforge.net:/cvsroot/ltp login
+
+(simply press the ENTER key when asked for a password)
+
+ cvs -z3 -d:pserver:anonymous@ltp.cvs.sourceforge.net:/cvsroot/ltp export -D now utils
+
+Change to the utils/analysis/lcov directory and type:
+
+ make install
+
+
+3. An example of how to access kernel coverage data
+---------------------------------------------------
+Requirements: get and install the gcov-kernel package from
+
+ http://sourceforge.net/projects/ltp
+
+Copy the resulting gcov kernel module file to either the system wide modules
+directory or the same directory as the Perl scripts. As root, do the following:
+
+ a) Resetting counters
+
+ lcov --zerocounters
+
+ b) Capturing the current coverage state to a file
+
+ lcov --capture --output-file kernel.info
+
+ c) Getting HTML output
+
+ genhtml kernel.info
+
+Point the web browser of your choice to the resulting index.html file.
+
+
+4. An example of how to access coverage data for a user space program
+---------------------------------------------------------------------
+Requirements: compile the program in question using GCC with the options
+-fprofile-arcs and -ftest-coverage. During linking, make sure to specify
+-lgcov or -coverage.
+
+Assuming the compile directory is called "appdir", do the following:
+
+ a) Resetting counters
+
+ lcov --directory appdir --zerocounters
+
+ b) Capturing the current coverage state to a file (works only after the
+ application has been started and stopped at least once)
+
+ lcov --directory appdir --capture --output-file app.info
+
+ c) Getting HTML output
+
+ genhtml app.info
+
+Point the web browser of your choice to the resulting index.html file.
+
+Please note that regardless of where the application is installed or from
+which directory it is run, the --directory option needs to point to the
+directory in which the application was compiled.
+
+For further information on the gcc profiling mechanism, please also
+consult the gcov man page.
+
+
+5. Questions and comments
+-------------------------
+See the included man pages for more information on how to use the LCOV tools.
+
+Please email further questions or comments regarding this tool to the
+LTP Mailing list at ltp-coverage@lists.sourceforge.net
+
diff --git a/chromium/third_party/lcov-1.9/README.chromium b/chromium/third_party/lcov-1.9/README.chromium
new file mode 100644
index 00000000000..d0daef7849b
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/README.chromium
@@ -0,0 +1,14 @@
+Name: LCOV - the LTP GCOV extension
+Short Name: lcov
+URL: http://ltp.sourceforge.net/coverage/lcov.php
+Version: 1.9
+License: GPLv2 or later
+License File: COPYING
+Security Critical: no
+Local Modifications: None
+Description:
+This directory contains a stock lcov-1.9. The existing lcov is v1.6 and needs
+to stay here to prevent breaking existing coverage schemes.
+
+N.B.: This code will not be shipped with Chrome.
+ It is only part of the build/test infrastructure.
diff --git a/chromium/third_party/lcov-1.9/bin/gendesc b/chromium/third_party/lcov-1.9/bin/gendesc
new file mode 100755
index 00000000000..522ef69a6db
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/gendesc
@@ -0,0 +1,226 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# gendesc
+#
+# This script creates a description file as understood by genhtml.
+# Input file format:
+#
+# For each test case:
+# <test name><optional whitespace>
+# <at least one whitespace character (blank/tab)><test description>
+#
+# Actual description may consist of several lines. By default, output is
+# written to stdout. Test names consist of alphanumeric characters
+# including _ and -.
+#
+#
+# History:
+# 2002-09-02: created by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+#
+
+use strict;
+use File::Basename;
+use Getopt::Long;
+
+
+# Constants
+our $lcov_version = 'LCOV version 1.9';
+our $lcov_url = "http://ltp.sourceforge.net/coverage/lcov.php";
+our $tool_name = basename($0);
+
+
+# Prototypes
+sub print_usage(*);
+sub gen_desc();
+sub warn_handler($);
+sub die_handler($);
+
+
+# Global variables
+our $help;
+our $version;
+our $output_filename;
+our $input_filename;
+
+
+#
+# Code entry point
+#
+
+$SIG{__WARN__} = \&warn_handler;
+$SIG{__DIE__} = \&die_handler;
+
+# Prettify version string
+$lcov_version =~ s/\$\s*Revision\s*:?\s*(\S+)\s*\$/$1/;
+
+# Parse command line options
+if (!GetOptions("output-filename=s" => \$output_filename,
+ "version" =>\$version,
+ "help|?" => \$help
+ ))
+{
+ print(STDERR "Use $tool_name --help to get usage information\n");
+ exit(1);
+}
+
+$input_filename = $ARGV[0];
+
+# Check for help option
+if ($help)
+{
+ print_usage(*STDOUT);
+ exit(0);
+}
+
+# Check for version option
+if ($version)
+{
+ print("$tool_name: $lcov_version\n");
+ exit(0);
+}
+
+
+# Check for input filename
+if (!$input_filename)
+{
+ die("No input filename specified\n".
+ "Use $tool_name --help to get usage information\n");
+}
+
+# Do something
+gen_desc();
+
+
+#
+# print_usage(handle)
+#
+# Write out command line usage information to given filehandle.
+#
+
+sub print_usage(*)
+{
+ local *HANDLE = $_[0];
+
+ print(HANDLE <<END_OF_USAGE)
+Usage: $tool_name [OPTIONS] INPUTFILE
+
+Convert a test case description file into a format as understood by genhtml.
+
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -o, --output-filename FILENAME Write description to FILENAME
+
+For more information see: $lcov_url
+END_OF_USAGE
+ ;
+}
+
+
+#
+# gen_desc()
+#
+# Read text file INPUT_FILENAME and convert the contained description to a
+# format as understood by genhtml, i.e.
+#
+# TN:<test name>
+# TD:<test description>
+#
+# If defined, write output to OUTPUT_FILENAME, otherwise to stdout.
+#
+# Die on error.
+#
+
+sub gen_desc()
+{
+ local *INPUT_HANDLE;
+ local *OUTPUT_HANDLE;
+ my $empty_line = "ignore";
+
+ open(INPUT_HANDLE, $input_filename)
+ or die("ERROR: cannot open $input_filename!\n");
+
+ # Open output file for writing
+ if ($output_filename)
+ {
+ open(OUTPUT_HANDLE, ">$output_filename")
+ or die("ERROR: cannot create $output_filename!\n");
+ }
+ else
+ {
+ *OUTPUT_HANDLE = *STDOUT;
+ }
+
+ # Process all lines in input file
+ while (<INPUT_HANDLE>)
+ {
+ chomp($_);
+
+ if (/^(\w[\w-]*)(\s*)$/)
+ {
+ # Matched test name
+ # Name starts with alphanum or _, continues with
+ # alphanum, _ or -
+ print(OUTPUT_HANDLE "TN: $1\n");
+ $empty_line = "ignore";
+ }
+ elsif (/^(\s+)(\S.*?)\s*$/)
+ {
+ # Matched test description
+ if ($empty_line eq "insert")
+ {
+ # Write preserved empty line
+ print(OUTPUT_HANDLE "TD: \n");
+ }
+ print(OUTPUT_HANDLE "TD: $2\n");
+ $empty_line = "observe";
+ }
+ elsif (/^\s*$/)
+ {
+ # Matched empty line to preserve paragraph separation
+ # inside description text
+ if ($empty_line eq "observe")
+ {
+ $empty_line = "insert";
+ }
+ }
+ }
+
+ # Close output file if defined
+ if ($output_filename)
+ {
+ close(OUTPUT_HANDLE);
+ }
+
+ close(INPUT_HANDLE);
+}
+
+sub warn_handler($)
+{
+ my ($msg) = @_;
+
+ warn("$tool_name: $msg");
+}
+
+sub die_handler($)
+{
+ my ($msg) = @_;
+
+ die("$tool_name: $msg");
+}
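The TN:/TD: conversion performed by gen_desc() above can be illustrated with a small standalone sketch (an awk re-implementation of the two main regex cases, for illustration only; it omits the blank-line "insert" bookkeeping):

```shell
# Sample gendesc input: a test name line, then indented description lines.
cat > desc.txt <<'EOF'
test-1
    Checks the first code path.
    Also checks error handling.
EOF

# Case 1 mirrors /^(\w[\w-]*)(\s*)$/   (test name   -> "TN: <name>"),
# case 2 mirrors /^(\s+)(\S.*?)\s*$/   (description -> "TD: <text>").
awk '
/^[A-Za-z0-9_][A-Za-z0-9_-]*[ \t]*$/ { sub(/[ \t]*$/, ""); print "TN: " $0; next }
/^[ \t]+[^ \t]/ { sub(/^[ \t]+/, ""); sub(/[ \t]*$/, ""); print "TD: " $0 }
' desc.txt
```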
diff --git a/chromium/third_party/lcov-1.9/bin/genhtml b/chromium/third_party/lcov-1.9/bin/genhtml
new file mode 100755
index 00000000000..d74063a4fdb
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/genhtml
@@ -0,0 +1,5648 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002,2010
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# genhtml
+#
+# This script generates HTML output from .info files as created by the
+# geninfo script. Call it with --help and refer to the genhtml man page
+# to get information on usage and available options.
+#
+#
+# History:
+# 2002-08-23 created by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+# IBM Lab Boeblingen
+# based on code by Manoj Iyer <manjo@mail.utexas.edu> and
+# Megan Bock <mbock@us.ibm.com>
+# IBM Austin
+# 2002-08-27 / Peter Oberparleiter: implemented frame view
+# 2002-08-29 / Peter Oberparleiter: implemented test description filtering
+# so that by default only descriptions for test cases which
+# actually hit some source lines are kept
+# 2002-09-05 / Peter Oberparleiter: implemented --no-sourceview
+# 2002-09-05 / Mike Kobler: One of my source file paths includes a "+" in
+# the directory name. I found that genhtml.pl died when it
+# encountered it. I was able to fix the problem by modifying
+# the string with the escape character before parsing it.
+# 2002-10-26 / Peter Oberparleiter: implemented --num-spaces
+# 2003-04-07 / Peter Oberparleiter: fixed bug which resulted in an error
+# when trying to combine .info files containing data without
+# a test name
+# 2003-04-10 / Peter Oberparleiter: extended fix by Mike to also cover
+# other special characters
+# 2003-04-30 / Peter Oberparleiter: made info write to STDERR, not STDOUT
+# 2003-07-10 / Peter Oberparleiter: added line checksum support
+# 2004-08-09 / Peter Oberparleiter: added configuration file support
+# 2005-03-04 / Cal Pierog: added legend to HTML output, fixed coloring of
+# "good coverage" background
+# 2006-03-18 / Marcus Boerger: added --custom-intro, --custom-outro and
+# overwrite --no-prefix if --prefix is present
+# 2006-03-20 / Peter Oberparleiter: changes to custom_* function (rename
+# to html_prolog/_epilog, minor modifications to implementation),
+# changed prefix/noprefix handling to be consistent with current
+# logic
+# 2006-03-20 / Peter Oberparleiter: added --html-extension option
+# 2008-07-14 / Tom Zoerner: added --function-coverage command line option;
+# added function table to source file page
+# 2008-08-13 / Peter Oberparleiter: modified function coverage
+# implementation (now enabled per default),
+# introduced sorting option (enabled per default)
+#
+
+use strict;
+use File::Basename;
+use Getopt::Long;
+use Digest::MD5 qw(md5_base64);
+
+
+# Global constants
+our $title = "LCOV - code coverage report";
+our $lcov_version = 'LCOV version 1.9';
+our $lcov_url = "http://ltp.sourceforge.net/coverage/lcov.php";
+our $tool_name = basename($0);
+
+# Specify coverage rate limits (in %) for classifying file entries
+# HI: $hi_limit <= rate <= 100 graph color: green
+# MED: $med_limit <= rate < $hi_limit graph color: orange
+# LO: 0 <= rate < $med_limit graph color: red
+
+# For line coverage/all coverage types if not specified
+our $hi_limit = 90;
+our $med_limit = 75;
+
+# For function coverage
+our $fn_hi_limit;
+our $fn_med_limit;
+
+# For branch coverage
+our $br_hi_limit;
+our $br_med_limit;
+
+# Width of overview image
+our $overview_width = 80;
+
+# Resolution of overview navigation: this number specifies the maximum
+# difference in lines between the position a user selected from the overview
+# and the position the source code window is scrolled to.
+our $nav_resolution = 4;
+
+# Clicking a line in the overview image should show the source code view at
+# a position a bit further up so that the requested line is not the first
+# line in the window. This number specifies that offset in lines.
+our $nav_offset = 10;
+
+# Clicking on a function name should show the source code at a position a
+# few lines before the first line of code of that function. This number
+# specifies that offset in lines.
+our $func_offset = 2;
+
+our $overview_title = "top level";
+
+# Width for line coverage information in the source code view
+our $line_field_width = 12;
+
+# Width for branch coverage information in the source code view
+our $br_field_width = 16;
+
+# Internal Constants
+
+# Header types
+our $HDR_DIR = 0;
+our $HDR_FILE = 1;
+our $HDR_SOURCE = 2;
+our $HDR_TESTDESC = 3;
+our $HDR_FUNC = 4;
+
+# Sort types
+our $SORT_FILE = 0;
+our $SORT_LINE = 1;
+our $SORT_FUNC = 2;
+our $SORT_BRANCH = 3;
+
+# Fileview heading types
+our $HEAD_NO_DETAIL = 1;
+our $HEAD_DETAIL_HIDDEN = 2;
+our $HEAD_DETAIL_SHOWN = 3;
+
+# Offsets for storing branch coverage data in vectors
+our $BR_BLOCK = 0;
+our $BR_BRANCH = 1;
+our $BR_TAKEN = 2;
+our $BR_VEC_ENTRIES = 3;
+our $BR_VEC_WIDTH = 32;
+
+# Additional offsets used when converting branch coverage data to HTML
+our $BR_LEN = 3;
+our $BR_OPEN = 4;
+our $BR_CLOSE = 5;
+
+# Branch data combination types
+our $BR_SUB = 0;
+our $BR_ADD = 1;
+
+# Data related prototypes
+sub print_usage(*);
+sub gen_html();
+sub html_create($$);
+sub process_dir($);
+sub process_file($$$);
+sub info(@);
+sub read_info_file($);
+sub get_info_entry($);
+sub set_info_entry($$$$$$$$$;$$$$$$);
+sub get_prefix(@);
+sub shorten_prefix($);
+sub get_dir_list(@);
+sub get_relative_base_path($);
+sub read_testfile($);
+sub get_date_string();
+sub create_sub_dir($);
+sub subtract_counts($$);
+sub add_counts($$);
+sub apply_baseline($$);
+sub remove_unused_descriptions();
+sub get_found_and_hit($);
+sub get_affecting_tests($$$);
+sub combine_info_files($$);
+sub merge_checksums($$$);
+sub combine_info_entries($$$);
+sub apply_prefix($$);
+sub system_no_output($@);
+sub read_config($);
+sub apply_config($);
+sub get_html_prolog($);
+sub get_html_epilog($);
+sub write_dir_page($$$$$$$$$$$$$$$$$);
+sub classify_rate($$$$);
+sub br_taken_add($$);
+sub br_taken_sub($$);
+sub br_ivec_len($);
+sub br_ivec_get($$);
+sub br_ivec_push($$$$);
+sub combine_brcount($$$);
+sub get_br_found_and_hit($);
+sub warn_handler($);
+sub die_handler($);
+
+
+# HTML related prototypes
+sub escape_html($);
+sub get_bar_graph_code($$$);
+
+sub write_png_files();
+sub write_htaccess_file();
+sub write_css_file();
+sub write_description_file($$$$$$$);
+sub write_function_table(*$$$$$$$$$$);
+
+sub write_html(*$);
+sub write_html_prolog(*$$);
+sub write_html_epilog(*$;$);
+
+sub write_header(*$$$$$$$$$$);
+sub write_header_prolog(*$);
+sub write_header_line(*@);
+sub write_header_epilog(*$);
+
+sub write_file_table(*$$$$$$$);
+sub write_file_table_prolog(*$@);
+sub write_file_table_entry(*$$$@);
+sub write_file_table_detail_entry(*$@);
+sub write_file_table_epilog(*);
+
+sub write_test_table_prolog(*$);
+sub write_test_table_entry(*$$);
+sub write_test_table_epilog(*);
+
+sub write_source($$$$$$$);
+sub write_source_prolog(*);
+sub write_source_line(*$$$$$$);
+sub write_source_epilog(*);
+
+sub write_frameset(*$$$);
+sub write_overview_line(*$$$);
+sub write_overview(*$$$$);
+
+# External prototype (defined in genpng)
+sub gen_png($$$@);
+
+
+# Global variables & initialization
+our %info_data; # Hash containing all data from .info file
+our $dir_prefix; # Prefix to remove from all sub directories
+our %test_description; # Hash containing test descriptions if available
+our $date = get_date_string();
+
+our @info_filenames; # List of .info files to use as data source
+our $test_title; # Title for output as written to each page header
+our $output_directory; # Name of directory in which to store output
+our $base_filename; # Optional name of file containing baseline data
+our $desc_filename; # Name of file containing test descriptions
+our $css_filename; # Optional name of external stylesheet file to use
+our $quiet; # If set, suppress information messages
+our $help; # Help option flag
+our $version; # Version option flag
+our $show_details; # If set, generate detailed directory view
+our $no_prefix; # If set, do not remove filename prefix
+our $func_coverage = 1; # If set, generate function coverage statistics
+our $no_func_coverage; # Disable func_coverage
+our $br_coverage = 1; # If set, generate branch coverage statistics
+our $no_br_coverage; # Disable br_coverage
+our $sort = 1; # If set, provide directory listings with sorted entries
+our $no_sort; # Disable sort
+our $frames; # If set, use frames for source code view
+our $keep_descriptions; # If set, do not remove unused test case descriptions
+our $no_sourceview; # If set, do not create a source code view for each file
+our $highlight; # If set, highlight lines covered by converted data only
+our $legend; # If set, include legend in output
+our $tab_size = 8; # Number of spaces to use in place of tab
+our $config; # Configuration file contents
+our $html_prolog_file; # Custom HTML prolog file (up to and including <body>)
+our $html_epilog_file; # Custom HTML epilog file (from </body> onwards)
+our $html_prolog; # Actual HTML prolog
+our $html_epilog; # Actual HTML epilog
+our $html_ext = "html"; # Extension for generated HTML files
+our $html_gzip = 0; # Compress with gzip
+our $demangle_cpp = 0; # Demangle C++ function names
+our @fileview_sortlist;
+our @fileview_sortname = ("", "-sort-l", "-sort-f", "-sort-b");
+our @funcview_sortlist;
+our @rate_name = ("Lo", "Med", "Hi");
+our @rate_png = ("ruby.png", "amber.png", "emerald.png");
+
+our $cwd = `pwd`; # Current working directory
+chomp($cwd);
+our $tool_dir = dirname($0); # Directory where genhtml tool is installed
+
+
+#
+# Code entry point
+#
+
+$SIG{__WARN__} = \&warn_handler;
+$SIG{__DIE__} = \&die_handler;
+
+# Prettify version string
+$lcov_version =~ s/\$\s*Revision\s*:?\s*(\S+)\s*\$/$1/;
+
+# Add current working directory if $tool_dir is not already an absolute path
+if (! ($tool_dir =~ /^\/(.*)$/))
+{
+ $tool_dir = "$cwd/$tool_dir";
+}
+
+# Read configuration file if available
+if (defined($ENV{"HOME"}) && (-r $ENV{"HOME"}."/.lcovrc"))
+{
+ $config = read_config($ENV{"HOME"}."/.lcovrc");
+}
+elsif (-r "/etc/lcovrc")
+{
+ $config = read_config("/etc/lcovrc");
+}
+
+if ($config)
+{
+ # Copy configuration file values to variables
+ apply_config({
+ "genhtml_css_file" => \$css_filename,
+ "genhtml_hi_limit" => \$hi_limit,
+ "genhtml_med_limit" => \$med_limit,
+ "genhtml_line_field_width" => \$line_field_width,
+ "genhtml_overview_width" => \$overview_width,
+ "genhtml_nav_resolution" => \$nav_resolution,
+ "genhtml_nav_offset" => \$nav_offset,
+ "genhtml_keep_descriptions" => \$keep_descriptions,
+ "genhtml_no_prefix" => \$no_prefix,
+ "genhtml_no_source" => \$no_sourceview,
+ "genhtml_num_spaces" => \$tab_size,
+ "genhtml_highlight" => \$highlight,
+ "genhtml_legend" => \$legend,
+ "genhtml_html_prolog" => \$html_prolog_file,
+ "genhtml_html_epilog" => \$html_epilog_file,
+ "genhtml_html_extension" => \$html_ext,
+ "genhtml_html_gzip" => \$html_gzip,
+ "genhtml_function_hi_limit" => \$fn_hi_limit,
+ "genhtml_function_med_limit" => \$fn_med_limit,
+ "genhtml_function_coverage" => \$func_coverage,
+ "genhtml_branch_hi_limit" => \$br_hi_limit,
+ "genhtml_branch_med_limit" => \$br_med_limit,
+ "genhtml_branch_coverage" => \$br_coverage,
+ "genhtml_branch_field_width" => \$br_field_width,
+ "genhtml_sort" => \$sort,
+ });
+}
+
+# Copy limit values if not specified
+$fn_hi_limit = $hi_limit if (!defined($fn_hi_limit));
+$fn_med_limit = $med_limit if (!defined($fn_med_limit));
+$br_hi_limit = $hi_limit if (!defined($br_hi_limit));
+$br_med_limit = $med_limit if (!defined($br_med_limit));
+
+# Parse command line options
+if (!GetOptions("output-directory|o=s" => \$output_directory,
+ "title|t=s" => \$test_title,
+ "description-file|d=s" => \$desc_filename,
+ "keep-descriptions|k" => \$keep_descriptions,
+ "css-file|c=s" => \$css_filename,
+ "baseline-file|b=s" => \$base_filename,
+ "prefix|p=s" => \$dir_prefix,
+ "num-spaces=i" => \$tab_size,
+ "no-prefix" => \$no_prefix,
+ "no-sourceview" => \$no_sourceview,
+ "show-details|s" => \$show_details,
+ "frames|f" => \$frames,
+ "highlight" => \$highlight,
+ "legend" => \$legend,
+ "quiet|q" => \$quiet,
+ "help|h|?" => \$help,
+ "version|v" => \$version,
+ "html-prolog=s" => \$html_prolog_file,
+ "html-epilog=s" => \$html_epilog_file,
+ "html-extension=s" => \$html_ext,
+ "html-gzip" => \$html_gzip,
+ "function-coverage" => \$func_coverage,
+ "no-function-coverage" => \$no_func_coverage,
+ "branch-coverage" => \$br_coverage,
+ "no-branch-coverage" => \$no_br_coverage,
+ "sort" => \$sort,
+ "no-sort" => \$no_sort,
+ "demangle-cpp" => \$demangle_cpp,
+ ))
+{
+ print(STDERR "Use $tool_name --help to get usage information\n");
+ exit(1);
+} else {
+ # Merge options
+ if ($no_func_coverage) {
+ $func_coverage = 0;
+ }
+ if ($no_br_coverage) {
+ $br_coverage = 0;
+ }
+
+ # Merge sort options
+ if ($no_sort) {
+ $sort = 0;
+ }
+}
+
+@info_filenames = @ARGV;
+
+# Check for help option
+if ($help)
+{
+ print_usage(*STDOUT);
+ exit(0);
+}
+
+# Check for version option
+if ($version)
+{
+ print("$tool_name: $lcov_version\n");
+ exit(0);
+}
+
+# Check for info filename
+if (!@info_filenames)
+{
+ die("No filename specified\n".
+ "Use $tool_name --help to get usage information\n");
+}
+
+# Generate a title if none is specified
+if (!$test_title)
+{
+ if (scalar(@info_filenames) == 1)
+ {
+ # Only one filename specified, use it as title
+ $test_title = basename($info_filenames[0]);
+ }
+ else
+ {
+		# More than one filename specified, use default title
+ $test_title = "unnamed";
+ }
+}
+
+# Make sure css_filename is an absolute path (in case we're changing
+# directories)
+if ($css_filename)
+{
+ if (!($css_filename =~ /^\/(.*)$/))
+ {
+ $css_filename = $cwd."/".$css_filename;
+ }
+}
+
+# Make sure tab_size is within valid range
+if ($tab_size < 1)
+{
+ print(STDERR "ERROR: invalid number of spaces specified: ".
+ "$tab_size!\n");
+ exit(1);
+}
+
+# Get HTML prolog and epilog
+$html_prolog = get_html_prolog($html_prolog_file);
+$html_epilog = get_html_epilog($html_epilog_file);
+
+# Issue a warning if --no-sourceview is enabled together with --frames
+if ($no_sourceview && defined($frames))
+{
+ warn("WARNING: option --frames disabled because --no-sourceview ".
+ "was specified!\n");
+ $frames = undef;
+}
+
+# Issue a warning if --no-prefix is enabled together with --prefix
+if ($no_prefix && defined($dir_prefix))
+{
+ warn("WARNING: option --prefix disabled because --no-prefix was ".
+ "specified!\n");
+ $dir_prefix = undef;
+}
+
+@fileview_sortlist = ($SORT_FILE);
+@funcview_sortlist = ($SORT_FILE);
+
+if ($sort) {
+ push(@fileview_sortlist, $SORT_LINE);
+ push(@fileview_sortlist, $SORT_FUNC) if ($func_coverage);
+ push(@fileview_sortlist, $SORT_BRANCH) if ($br_coverage);
+ push(@funcview_sortlist, $SORT_LINE);
+}
+
+if ($frames)
+{
+ # Include genpng code needed for overview image generation
+ do("$tool_dir/genpng");
+}
+
+# Ensure that the c++filt tool is available when using --demangle-cpp
+if ($demangle_cpp)
+{
+ if (system_no_output(3, "c++filt", "--version")) {
+ die("ERROR: could not find c++filt tool needed for ".
+ "--demangle-cpp\n");
+ }
+}
+
+# Make sure output_directory exists, create it if necessary
+if ($output_directory)
+{
+ stat($output_directory);
+
+ if (! -e _)
+ {
+ create_sub_dir($output_directory);
+ }
+}
+
+# Generate HTML output
+gen_html();
+
+exit(0);
+
+
+
+#
+# print_usage(handle)
+#
+# Print usage information.
+#
+
+sub print_usage(*)
+{
+ local *HANDLE = $_[0];
+
+ print(HANDLE <<END_OF_USAGE);
+Usage: $tool_name [OPTIONS] INFOFILE(S)
+
+Create HTML output for coverage data found in INFOFILE. Note that INFOFILE
+may also be a list of filenames.
+
+Misc:
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -q, --quiet Do not print progress messages
+
+Operation:
+ -o, --output-directory OUTDIR Write HTML output to OUTDIR
+ -s, --show-details Generate detailed directory view
+ -d, --description-file DESCFILE Read test case descriptions from DESCFILE
+ -k, --keep-descriptions Do not remove unused test descriptions
+ -b, --baseline-file BASEFILE Use BASEFILE as baseline file
+ -p, --prefix PREFIX Remove PREFIX from all directory names
+ --no-prefix Do not remove prefix from directory names
+ --(no-)function-coverage Enable (disable) function coverage display
+ --(no-)branch-coverage Enable (disable) branch coverage display
+
+HTML output:
+ -f, --frames Use HTML frames for source code view
+ -t, --title TITLE Display TITLE in header of all pages
+ -c, --css-file CSSFILE Use external style sheet file CSSFILE
+      --no-sourceview               Do not create source code view
+ --num-spaces NUM Replace tabs with NUM spaces in source view
+ --highlight Highlight lines with converted-only data
+ --legend Include color legend in HTML output
+ --html-prolog FILE Use FILE as HTML prolog for generated pages
+ --html-epilog FILE Use FILE as HTML epilog for generated pages
+ --html-extension EXT Use EXT as filename extension for pages
+ --html-gzip Use gzip to compress HTML
+ --(no-)sort Enable (disable) sorted coverage views
+ --demangle-cpp Demangle C++ function names
+
+For more information see: $lcov_url
+END_OF_USAGE
+ ;
+}
+
+
+#
+# get_rate(found, hit)
+#
+# Return a relative value for the specified found/hit values
+# which is used for sorting the corresponding entries in a
+# file list.
+#
+
+sub get_rate($$)
+{
+ my ($found, $hit) = @_;
+
+ if ($found == 0) {
+ return 10000;
+ }
+ return int($hit * 1000 / $found) * 10 + 2 - (1 / $found);
+}
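The tie-breaking arithmetic in get_rate() is easy to miss: the primary key is the coverage rate in 0.1% steps, and the `2 - 1/found` term orders entries with equal rates so that larger files sort higher. A Python sketch of the same computation (names are mine, not from the script):

```python
def get_rate(found, hit):
    """Sort key mirroring genhtml's get_rate(): coverage rate in
    0.1% steps, plus a tie-breaker favoring larger files."""
    if found == 0:
        return 10000  # "no data" entries sort above any real rate
    # int() truncates, as Perl's int() does for positive values
    return int(hit * 1000 / found) * 10 + 2 - (1 / found)

# Same 50% rate, but the larger file gets the larger key
assert get_rate(10, 5) > get_rate(2, 1)
assert get_rate(0, 0) == 10000
```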
+
+
+#
+# get_overall_line(found, hit, name_singular, name_plural)
+#
+# Return a string containing overall information for the specified
+# found/hit data.
+#
+
+sub get_overall_line($$$$)
+{
+ my ($found, $hit, $name_sn, $name_pl) = @_;
+ my $name;
+
+ return "no data found" if (!defined($found) || $found == 0);
+ $name = ($found == 1) ? $name_sn : $name_pl;
+ return sprintf("%.1f%% (%d of %d %s)", $hit * 100 / $found, $hit,
+ $found, $name);
+}
+
+
+#
+# print_overall_rate(ln_do, ln_found, ln_hit, fn_do, fn_found, fn_hit, br_do,
+# br_found, br_hit)
+#
+# Print overall coverage rates for the specified coverage types.
+#
+
+sub print_overall_rate($$$$$$$$$)
+{
+ my ($ln_do, $ln_found, $ln_hit, $fn_do, $fn_found, $fn_hit,
+ $br_do, $br_found, $br_hit) = @_;
+
+ info("Overall coverage rate:\n");
+ info(" lines......: %s\n",
+ get_overall_line($ln_found, $ln_hit, "line", "lines"))
+ if ($ln_do);
+ info(" functions..: %s\n",
+ get_overall_line($fn_found, $fn_hit, "function", "functions"))
+ if ($fn_do);
+ info(" branches...: %s\n",
+ get_overall_line($br_found, $br_hit, "branch", "branches"))
+ if ($br_do);
+}
+
+
+#
+# gen_html()
+#
+# Generate a set of HTML pages from contents of .info file INFO_FILENAME.
+# Files will be written to the current directory. If provided, test case
+# descriptions will be read from .tests file TEST_FILENAME and included
+# in output.
+#
+# Die on error.
+#
+
+sub gen_html()
+{
+ local *HTML_HANDLE;
+ my %overview;
+ my %base_data;
+ my $lines_found;
+ my $lines_hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+ my $overall_found = 0;
+ my $overall_hit = 0;
+ my $total_fn_found = 0;
+ my $total_fn_hit = 0;
+ my $total_br_found = 0;
+ my $total_br_hit = 0;
+ my $dir_name;
+ my $link_name;
+ my @dir_list;
+ my %new_info;
+
+ # Read in all specified .info files
+ foreach (@info_filenames)
+ {
+ %new_info = %{read_info_file($_)};
+
+ # Combine %new_info with %info_data
+ %info_data = %{combine_info_files(\%info_data, \%new_info)};
+ }
+
+ info("Found %d entries.\n", scalar(keys(%info_data)));
+
+ # Read and apply baseline data if specified
+ if ($base_filename)
+ {
+ # Read baseline file
+ info("Reading baseline file $base_filename\n");
+ %base_data = %{read_info_file($base_filename)};
+ info("Found %d entries.\n", scalar(keys(%base_data)));
+
+ # Apply baseline
+ info("Subtracting baseline data.\n");
+ %info_data = %{apply_baseline(\%info_data, \%base_data)};
+ }
+
+ @dir_list = get_dir_list(keys(%info_data));
+
+ if ($no_prefix)
+ {
+ # User requested that we leave filenames alone
+ info("User asked not to remove filename prefix\n");
+ }
+ elsif (!defined($dir_prefix))
+ {
+ # Get prefix common to most directories in list
+ $dir_prefix = get_prefix(@dir_list);
+
+ if ($dir_prefix)
+ {
+ info("Found common filename prefix \"$dir_prefix\"\n");
+ }
+ else
+ {
+ info("No common filename prefix found!\n");
+ $no_prefix=1;
+ }
+ }
+ else
+ {
+ info("Using user-specified filename prefix \"".
+ "$dir_prefix\"\n");
+ }
+
+ # Read in test description file if specified
+ if ($desc_filename)
+ {
+ info("Reading test description file $desc_filename\n");
+ %test_description = %{read_testfile($desc_filename)};
+
+ # Remove test descriptions which are not referenced
+ # from %info_data if user didn't tell us otherwise
+ if (!$keep_descriptions)
+ {
+ remove_unused_descriptions();
+ }
+ }
+
+ # Change to output directory if specified
+ if ($output_directory)
+ {
+ chdir($output_directory)
+ or die("ERROR: cannot change to directory ".
+ "$output_directory!\n");
+ }
+
+ info("Writing .css and .png files.\n");
+ write_css_file();
+ write_png_files();
+
+ if ($html_gzip)
+ {
+ info("Writing .htaccess file.\n");
+ write_htaccess_file();
+ }
+
+ info("Generating output.\n");
+
+ # Process each subdirectory and collect overview information
+ foreach $dir_name (@dir_list)
+ {
+ ($lines_found, $lines_hit, $fn_found, $fn_hit,
+ $br_found, $br_hit)
+ = process_dir($dir_name);
+
+ # Remove prefix if applicable
+ if (!$no_prefix && $dir_prefix)
+ {
+ # Match directory names beginning with $dir_prefix
+ $dir_name = apply_prefix($dir_name, $dir_prefix);
+ }
+
+ # Generate name for directory overview HTML page
+ if ($dir_name =~ /^\/(.*)$/)
+ {
+ $link_name = substr($dir_name, 1)."/index.$html_ext";
+ }
+ else
+ {
+ $link_name = $dir_name."/index.$html_ext";
+ }
+
+ $overview{$dir_name} = [$lines_found, $lines_hit, $fn_found,
+ $fn_hit, $br_found, $br_hit, $link_name,
+ get_rate($lines_found, $lines_hit),
+ get_rate($fn_found, $fn_hit),
+ get_rate($br_found, $br_hit)];
+ $overall_found += $lines_found;
+ $overall_hit += $lines_hit;
+ $total_fn_found += $fn_found;
+ $total_fn_hit += $fn_hit;
+ $total_br_found += $br_found;
+ $total_br_hit += $br_hit;
+ }
+
+ # Generate overview page
+ info("Writing directory view page.\n");
+
+ # Create sorted pages
+ foreach (@fileview_sortlist) {
+ write_dir_page($fileview_sortname[$_], ".", "", $test_title,
+ undef, $overall_found, $overall_hit,
+ $total_fn_found, $total_fn_hit, $total_br_found,
+ $total_br_hit, \%overview, {}, {}, {}, 0, $_);
+ }
+
+ # Check if there are any test case descriptions to write out
+ if (%test_description)
+ {
+ info("Writing test case description file.\n");
+ write_description_file( \%test_description,
+ $overall_found, $overall_hit,
+ $total_fn_found, $total_fn_hit,
+ $total_br_found, $total_br_hit);
+ }
+
+ print_overall_rate(1, $overall_found, $overall_hit,
+ $func_coverage, $total_fn_found, $total_fn_hit,
+ $br_coverage, $total_br_found, $total_br_hit);
+
+ chdir($cwd);
+}
+
+#
+# html_create(handle, filename)
+#
+
+sub html_create($$)
+{
+ my $handle = $_[0];
+ my $filename = $_[1];
+
+ if ($html_gzip)
+ {
+ open($handle, "|gzip -c >$filename")
+ or die("ERROR: cannot open $filename for writing ".
+ "(gzip)!\n");
+ }
+ else
+ {
+ open($handle, ">$filename")
+ or die("ERROR: cannot open $filename for writing!\n");
+ }
+}
+
+sub write_dir_page($$$$$$$$$$$$$$$$$)
+{
+ my ($name, $rel_dir, $base_dir, $title, $trunc_dir, $overall_found,
+ $overall_hit, $total_fn_found, $total_fn_hit, $total_br_found,
+ $total_br_hit, $overview, $testhash, $testfnchash, $testbrhash,
+ $view_type, $sort_type) = @_;
+
+ # Generate directory overview page including details
+ html_create(*HTML_HANDLE, "$rel_dir/index$name.$html_ext");
+ if (!defined($trunc_dir)) {
+ $trunc_dir = "";
+ }
+ write_html_prolog(*HTML_HANDLE, $base_dir, "LCOV - $title$trunc_dir");
+ write_header(*HTML_HANDLE, $view_type, $trunc_dir, $rel_dir,
+ $overall_found, $overall_hit, $total_fn_found,
+ $total_fn_hit, $total_br_found, $total_br_hit, $sort_type);
+ write_file_table(*HTML_HANDLE, $base_dir, $overview, $testhash,
+ $testfnchash, $testbrhash, $view_type, $sort_type);
+ write_html_epilog(*HTML_HANDLE, $base_dir);
+ close(*HTML_HANDLE);
+}
+
+
+#
+# process_dir(dir_name)
+#
+
+sub process_dir($)
+{
+ my $abs_dir = $_[0];
+ my $trunc_dir;
+ my $rel_dir = $abs_dir;
+ my $base_dir;
+ my $filename;
+ my %overview;
+ my $lines_found;
+ my $lines_hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+ my $overall_found=0;
+ my $overall_hit=0;
+ my $total_fn_found=0;
+ my $total_fn_hit=0;
+ my $total_br_found = 0;
+ my $total_br_hit = 0;
+ my $base_name;
+ my $extension;
+ my $testdata;
+ my %testhash;
+ my $testfncdata;
+ my %testfnchash;
+ my $testbrdata;
+ my %testbrhash;
+ my @sort_list;
+ local *HTML_HANDLE;
+
+ # Remove prefix if applicable
+ if (!$no_prefix)
+ {
+ # Match directory name beginning with $dir_prefix
+ $rel_dir = apply_prefix($rel_dir, $dir_prefix);
+ }
+
+ $trunc_dir = $rel_dir;
+
+ # Remove leading /
+ if ($rel_dir =~ /^\/(.*)$/)
+ {
+ $rel_dir = substr($rel_dir, 1);
+ }
+
+ $base_dir = get_relative_base_path($rel_dir);
+
+ create_sub_dir($rel_dir);
+
+ # Match filenames which specify files in this directory, not including
+ # sub-directories
+ foreach $filename (grep(/^\Q$abs_dir\E\/[^\/]*$/,keys(%info_data)))
+ {
+ my $page_link;
+ my $func_link;
+
+ ($lines_found, $lines_hit, $fn_found, $fn_hit, $br_found,
+ $br_hit, $testdata, $testfncdata, $testbrdata) =
+ process_file($trunc_dir, $rel_dir, $filename);
+
+ $base_name = basename($filename);
+
+ if ($no_sourceview) {
+ $page_link = "";
+ } elsif ($frames) {
+ # Link to frameset page
+ $page_link = "$base_name.gcov.frameset.$html_ext";
+ } else {
+ # Link directory to source code view page
+ $page_link = "$base_name.gcov.$html_ext";
+ }
+ $overview{$base_name} = [$lines_found, $lines_hit, $fn_found,
+ $fn_hit, $br_found, $br_hit,
+ $page_link,
+ get_rate($lines_found, $lines_hit),
+ get_rate($fn_found, $fn_hit),
+ get_rate($br_found, $br_hit)];
+
+ $testhash{$base_name} = $testdata;
+ $testfnchash{$base_name} = $testfncdata;
+ $testbrhash{$base_name} = $testbrdata;
+
+ $overall_found += $lines_found;
+ $overall_hit += $lines_hit;
+
+ $total_fn_found += $fn_found;
+ $total_fn_hit += $fn_hit;
+
+ $total_br_found += $br_found;
+ $total_br_hit += $br_hit;
+ }
+
+ # Create sorted pages
+ foreach (@fileview_sortlist) {
+ # Generate directory overview page (without details)
+ write_dir_page($fileview_sortname[$_], $rel_dir, $base_dir,
+ $test_title, $trunc_dir, $overall_found,
+ $overall_hit, $total_fn_found, $total_fn_hit,
+ $total_br_found, $total_br_hit, \%overview, {},
+ {}, {}, 1, $_);
+ if (!$show_details) {
+ next;
+ }
+ # Generate directory overview page including details
+ write_dir_page("-detail".$fileview_sortname[$_], $rel_dir,
+ $base_dir, $test_title, $trunc_dir,
+ $overall_found, $overall_hit, $total_fn_found,
+ $total_fn_hit, $total_br_found, $total_br_hit,
+ \%overview, \%testhash, \%testfnchash,
+ \%testbrhash, 1, $_);
+ }
+
+ # Calculate resulting line counts
+	# Return resulting overall counts
+ $total_br_found, $total_br_hit);
+}
+
+
+#
+# get_converted_lines(testdata)
+#
+# Return hash of line numbers of those lines which were only covered in
+# converted data sets.
+#
+
+sub get_converted_lines($)
+{
+ my $testdata = $_[0];
+ my $testcount;
+ my %converted;
+ my %nonconverted;
+ my $hash;
+ my $testcase;
+ my $line;
+ my %result;
+
+
+ # Get a hash containing line numbers with positive counts both for
+ # converted and original data sets
+ foreach $testcase (keys(%{$testdata}))
+ {
+ # Check to see if this is a converted data set
+ if ($testcase =~ /,diff$/)
+ {
+ $hash = \%converted;
+ }
+ else
+ {
+ $hash = \%nonconverted;
+ }
+
+ $testcount = $testdata->{$testcase};
+ # Add lines with a positive count to hash
+		foreach $line (keys(%{$testcount}))
+ {
+ if ($testcount->{$line} > 0)
+ {
+ $hash->{$line} = 1;
+ }
+ }
+ }
+
+ # Combine both hashes to resulting list
+ foreach $line (keys(%converted))
+ {
+ if (!defined($nonconverted{$line}))
+ {
+ $result{$line} = 1;
+ }
+ }
+
+ return \%result;
+}
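get_converted_lines() is effectively a set difference between lines hit by converted (",diff"-suffixed) test cases and lines hit by any original test case. A Python sketch of the same logic, with hash-of-hashes replaced by sets (names are mine):

```python
def get_converted_lines(testdata):
    """Lines covered only by ",diff" (converted) data sets.
    testdata maps test name -> {line number: execution count}."""
    converted, nonconverted = set(), set()
    for testcase, counts in testdata.items():
        # ",diff" suffix marks a converted data set
        target = converted if testcase.endswith(",diff") else nonconverted
        target.update(line for line, count in counts.items() if count > 0)
    # Keep only lines with no coverage from original data sets
    return converted - nonconverted

data = {"boot,diff": {10: 1, 11: 1}, "boot": {11: 3}}
assert get_converted_lines(data) == {10}
```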
+
+
+sub write_function_page($$$$$$$$$$$$$$$$$$)
+{
+ my ($base_dir, $rel_dir, $trunc_dir, $base_name, $title,
+ $lines_found, $lines_hit, $fn_found, $fn_hit, $br_found, $br_hit,
+ $sumcount, $funcdata, $sumfnccount, $testfncdata, $sumbrcount,
+ $testbrdata, $sort_type) = @_;
+ my $pagetitle;
+ my $filename;
+
+ # Generate function table for this file
+ if ($sort_type == 0) {
+ $filename = "$rel_dir/$base_name.func.$html_ext";
+ } else {
+ $filename = "$rel_dir/$base_name.func-sort-c.$html_ext";
+ }
+ html_create(*HTML_HANDLE, $filename);
+ $pagetitle = "LCOV - $title - $trunc_dir/$base_name - functions";
+ write_html_prolog(*HTML_HANDLE, $base_dir, $pagetitle);
+ write_header(*HTML_HANDLE, 4, "$trunc_dir/$base_name",
+ "$rel_dir/$base_name", $lines_found, $lines_hit,
+ $fn_found, $fn_hit, $br_found, $br_hit, $sort_type);
+ write_function_table(*HTML_HANDLE, "$base_name.gcov.$html_ext",
+ $sumcount, $funcdata,
+ $sumfnccount, $testfncdata, $sumbrcount,
+ $testbrdata, $base_name,
+ $base_dir, $sort_type);
+ write_html_epilog(*HTML_HANDLE, $base_dir, 1);
+ close(*HTML_HANDLE);
+}
+
+
+#
+# process_file(trunc_dir, rel_dir, filename)
+#
+
+sub process_file($$$)
+{
+ info("Processing file ".apply_prefix($_[2], $dir_prefix)."\n");
+
+ my $trunc_dir = $_[0];
+ my $rel_dir = $_[1];
+ my $filename = $_[2];
+ my $base_name = basename($filename);
+ my $base_dir = get_relative_base_path($rel_dir);
+ my $testdata;
+ my $testcount;
+ my $sumcount;
+ my $funcdata;
+ my $checkdata;
+ my $testfncdata;
+ my $sumfnccount;
+ my $testbrdata;
+ my $sumbrcount;
+ my $lines_found;
+ my $lines_hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+ my $converted;
+ my @source;
+ my $pagetitle;
+ local *HTML_HANDLE;
+
+ ($testdata, $sumcount, $funcdata, $checkdata, $testfncdata,
+ $sumfnccount, $testbrdata, $sumbrcount, $lines_found, $lines_hit,
+ $fn_found, $fn_hit, $br_found, $br_hit)
+ = get_info_entry($info_data{$filename});
+
+ # Return after this point in case user asked us not to generate
+ # source code view
+ if ($no_sourceview)
+ {
+ return ($lines_found, $lines_hit, $fn_found, $fn_hit,
+ $br_found, $br_hit, $testdata, $testfncdata,
+ $testbrdata);
+ }
+
+ $converted = get_converted_lines($testdata);
+ # Generate source code view for this file
+ html_create(*HTML_HANDLE, "$rel_dir/$base_name.gcov.$html_ext");
+ $pagetitle = "LCOV - $test_title - $trunc_dir/$base_name";
+ write_html_prolog(*HTML_HANDLE, $base_dir, $pagetitle);
+ write_header(*HTML_HANDLE, 2, "$trunc_dir/$base_name",
+ "$rel_dir/$base_name", $lines_found, $lines_hit,
+ $fn_found, $fn_hit, $br_found, $br_hit, 0);
+ @source = write_source(*HTML_HANDLE, $filename, $sumcount, $checkdata,
+ $converted, $funcdata, $sumbrcount);
+
+ write_html_epilog(*HTML_HANDLE, $base_dir, 1);
+ close(*HTML_HANDLE);
+
+ if ($func_coverage) {
+ # Create function tables
+ foreach (@funcview_sortlist) {
+ write_function_page($base_dir, $rel_dir, $trunc_dir,
+ $base_name, $test_title,
+ $lines_found, $lines_hit,
+ $fn_found, $fn_hit, $br_found,
+ $br_hit, $sumcount,
+ $funcdata, $sumfnccount,
+ $testfncdata, $sumbrcount,
+ $testbrdata, $_);
+ }
+ }
+
+ # Additional files are needed in case of frame output
+ if (!$frames)
+ {
+ return ($lines_found, $lines_hit, $fn_found, $fn_hit,
+ $br_found, $br_hit, $testdata, $testfncdata,
+ $testbrdata);
+ }
+
+ # Create overview png file
+ gen_png("$rel_dir/$base_name.gcov.png", $overview_width, $tab_size,
+ @source);
+
+ # Create frameset page
+ html_create(*HTML_HANDLE,
+ "$rel_dir/$base_name.gcov.frameset.$html_ext");
+ write_frameset(*HTML_HANDLE, $base_dir, $base_name, $pagetitle);
+ close(*HTML_HANDLE);
+
+ # Write overview frame
+ html_create(*HTML_HANDLE,
+ "$rel_dir/$base_name.gcov.overview.$html_ext");
+ write_overview(*HTML_HANDLE, $base_dir, $base_name, $pagetitle,
+ scalar(@source));
+ close(*HTML_HANDLE);
+
+ return ($lines_found, $lines_hit, $fn_found, $fn_hit, $br_found,
+ $br_hit, $testdata, $testfncdata, $testbrdata);
+}
+
+
+#
+# read_info_file(info_filename)
+#
+# Read in the contents of the .info file specified by INFO_FILENAME. Data will
+# be returned as a reference to a hash containing the following mappings:
+#
+# %result: for each filename found in file -> \%data
+#
+# %data: "test" -> \%testdata
+# "sum" -> \%sumcount
+# "func" -> \%funcdata
+# "found" -> $lines_found (number of instrumented lines found in file)
+# "hit" -> $lines_hit (number of executed lines in file)
+# "check" -> \%checkdata
+# "testfnc" -> \%testfncdata
+# "sumfnc" -> \%sumfnccount
+# "testbr" -> \%testbrdata
+# "sumbr" -> \%sumbrcount
+#
+# %testdata : name of test affecting this file -> \%testcount
+# %testfncdata: name of test affecting this file -> \%testfnccount
+# %testbrdata: name of test affecting this file -> \%testbrcount
+#
+# %testcount : line number -> execution count for a single test
+# %testfnccount: function name -> execution count for a single test
+# %testbrcount : line number -> branch coverage data for a single test
+# %sumcount : line number -> execution count for all tests
+# %sumfnccount : function name -> execution count for all tests
+# %sumbrcount : line number -> branch coverage data for all tests
+# %funcdata : function name -> line number
+# %checkdata : line number -> checksum of source code line
+# $brdata : vector of items: block, branch, taken
+#
+# Note that .info file sections referring to the same file and test name
+# will automatically be combined by adding all execution counts.
+#
+# Note that if INFO_FILENAME ends with ".gz", it is assumed that the file
+# is compressed using GZIP. If available, GUNZIP will be used to decompress
+# this file.
+#
+# Die on error.
+#
+
+sub read_info_file($)
+{
+ my $tracefile = $_[0]; # Name of tracefile
+ my %result; # Resulting hash: file -> data
+ my $data; # Data handle for current entry
+ my $testdata; # " "
+ my $testcount; # " "
+ my $sumcount; # " "
+ my $funcdata; # " "
+ my $checkdata; # " "
+ my $testfncdata;
+ my $testfnccount;
+ my $sumfnccount;
+ my $testbrdata;
+ my $testbrcount;
+ my $sumbrcount;
+ my $line; # Current line read from .info file
+ my $testname; # Current test name
+ my $filename; # Current filename
+ my $hitcount; # Count for lines hit
+ my $count; # Execution count of current line
+ my $negative; # If set, warn about negative counts
+ my $changed_testname; # If set, warn about changed testname
+ my $line_checksum; # Checksum of current line
+ my $br_found;
+ my $br_hit;
+ local *INFO_HANDLE; # Filehandle for .info file
+
+ info("Reading data file $tracefile\n");
+
+ # Check if file exists and is readable
+ stat($_[0]);
+ if (!(-r _))
+ {
+ die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ # Check if this is really a plain file
+ if (!(-f _))
+ {
+ die("ERROR: not a plain file: $_[0]!\n");
+ }
+
+ # Check for .gz extension
+ if ($_[0] =~ /\.gz$/)
+ {
+ # Check for availability of GZIP tool
+ system_no_output(1, "gunzip" ,"-h")
+ and die("ERROR: gunzip command not available!\n");
+
+ # Check integrity of compressed file
+ system_no_output(1, "gunzip", "-t", $_[0])
+ and die("ERROR: integrity check failed for ".
+ "compressed file $_[0]!\n");
+
+ # Open compressed file
+ open(INFO_HANDLE, "gunzip -c $_[0]|")
+ or die("ERROR: cannot start gunzip to decompress ".
+ "file $_[0]!\n");
+ }
+ else
+ {
+ # Open decompressed file
+ open(INFO_HANDLE, $_[0])
+ or die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ $testname = "";
+ while (<INFO_HANDLE>)
+ {
+ chomp($_);
+ $line = $_;
+
+		# Simulated switch statement (foreach over a single value)
+ foreach ($line)
+ {
+ /^TN:([^,]*)(,diff)?/ && do
+ {
+ # Test name information found
+ $testname = defined($1) ? $1 : "";
+ if ($testname =~ s/\W/_/g)
+ {
+ $changed_testname = 1;
+ }
+ $testname .= $2 if (defined($2));
+ last;
+ };
+
+ /^[SK]F:(.*)/ && do
+ {
+ # Filename information found
+ # Retrieve data for new entry
+ $filename = $1;
+
+ $data = $result{$filename};
+ ($testdata, $sumcount, $funcdata, $checkdata,
+ $testfncdata, $sumfnccount, $testbrdata,
+ $sumbrcount) =
+ get_info_entry($data);
+
+ if (defined($testname))
+ {
+ $testcount = $testdata->{$testname};
+ $testfnccount = $testfncdata->{$testname};
+ $testbrcount = $testbrdata->{$testname};
+ }
+ else
+ {
+ $testcount = {};
+ $testfnccount = {};
+ $testbrcount = {};
+ }
+ last;
+ };
+
+ /^DA:(\d+),(-?\d+)(,[^,\s]+)?/ && do
+ {
+ # Fix negative counts
+ $count = $2 < 0 ? 0 : $2;
+ if ($2 < 0)
+ {
+ $negative = 1;
+ }
+ # Execution count found, add to structure
+ # Add summary counts
+ $sumcount->{$1} += $count;
+
+ # Add test-specific counts
+ if (defined($testname))
+ {
+ $testcount->{$1} += $count;
+ }
+
+ # Store line checksum if available
+ if (defined($3))
+ {
+ $line_checksum = substr($3, 1);
+
+				# Does it match a previous definition?
+ if (defined($checkdata->{$1}) &&
+ ($checkdata->{$1} ne
+ $line_checksum))
+ {
+ die("ERROR: checksum mismatch ".
+ "at $filename:$1\n");
+ }
+
+ $checkdata->{$1} = $line_checksum;
+ }
+ last;
+ };
+
+ /^FN:(\d+),([^,]+)/ && do
+ {
+ # Function data found, add to structure
+ $funcdata->{$2} = $1;
+
+ # Also initialize function call data
+ if (!defined($sumfnccount->{$2})) {
+ $sumfnccount->{$2} = 0;
+ }
+ if (defined($testname))
+ {
+ if (!defined($testfnccount->{$2})) {
+ $testfnccount->{$2} = 0;
+ }
+ }
+ last;
+ };
+
+ /^FNDA:(\d+),([^,]+)/ && do
+ {
+ # Function call count found, add to structure
+ # Add summary counts
+ $sumfnccount->{$2} += $1;
+
+ # Add test-specific counts
+ if (defined($testname))
+ {
+ $testfnccount->{$2} += $1;
+ }
+ last;
+ };
+
+ /^BRDA:(\d+),(\d+),(\d+),(\d+|-)/ && do {
+ # Branch coverage data found
+ my ($line, $block, $branch, $taken) =
+ ($1, $2, $3, $4);
+
+ $sumbrcount->{$line} =
+ br_ivec_push($sumbrcount->{$line},
+ $block, $branch, $taken);
+
+ # Add test-specific counts
+ if (defined($testname)) {
+ $testbrcount->{$line} =
+ br_ivec_push(
+ $testbrcount->{$line},
+ $block, $branch,
+ $taken);
+ }
+ last;
+ };
+
+ /^end_of_record/ && do
+ {
+ # Found end of section marker
+ if ($filename)
+ {
+ # Store current section data
+ if (defined($testname))
+ {
+ $testdata->{$testname} =
+ $testcount;
+ $testfncdata->{$testname} =
+ $testfnccount;
+ $testbrdata->{$testname} =
+ $testbrcount;
+ }
+
+ set_info_entry($data, $testdata,
+ $sumcount, $funcdata,
+ $checkdata, $testfncdata,
+ $sumfnccount,
+ $testbrdata,
+ $sumbrcount);
+ $result{$filename} = $data;
+ last;
+ }
+ };
+
+ # default
+ last;
+ }
+ }
+ close(INFO_HANDLE);
+
+ # Calculate lines_found and lines_hit for each file
+ foreach $filename (keys(%result))
+ {
+ $data = $result{$filename};
+
+ ($testdata, $sumcount, undef, undef, $testfncdata,
+ $sumfnccount, $testbrdata, $sumbrcount) =
+ get_info_entry($data);
+
+ # Filter out empty files
+ if (scalar(keys(%{$sumcount})) == 0)
+ {
+ delete($result{$filename});
+ next;
+ }
+ # Filter out empty test cases
+ foreach $testname (keys(%{$testdata}))
+ {
+ if (!defined($testdata->{$testname}) ||
+ scalar(keys(%{$testdata->{$testname}})) == 0)
+ {
+ delete($testdata->{$testname});
+ delete($testfncdata->{$testname});
+ }
+ }
+
+ $data->{"found"} = scalar(keys(%{$sumcount}));
+ $hitcount = 0;
+
+ foreach (keys(%{$sumcount}))
+ {
+ if ($sumcount->{$_} > 0) { $hitcount++; }
+ }
+
+ $data->{"hit"} = $hitcount;
+
+ # Get found/hit values for function call data
+ $data->{"f_found"} = scalar(keys(%{$sumfnccount}));
+ $hitcount = 0;
+
+ foreach (keys(%{$sumfnccount})) {
+ if ($sumfnccount->{$_} > 0) {
+ $hitcount++;
+ }
+ }
+ $data->{"f_hit"} = $hitcount;
+
+ # Get found/hit values for branch data
+ ($br_found, $br_hit) = get_br_found_and_hit($sumbrcount);
+
+ $data->{"b_found"} = $br_found;
+ $data->{"b_hit"} = $br_hit;
+ }
+
+ if (scalar(keys(%result)) == 0)
+ {
+ die("ERROR: no valid records found in tracefile $tracefile\n");
+ }
+ if ($negative)
+ {
+ warn("WARNING: negative counts found in tracefile ".
+ "$tracefile\n");
+ }
+ if ($changed_testname)
+ {
+ warn("WARNING: invalid characters removed from testname in ".
+ "tracefile $tracefile\n");
+ }
+
+ return(\%result);
+}
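The record grammar handled above is compact enough to sketch. This Python fragment parses only the TN/SF/DA/end_of_record subset and derives the same "found"/"hit" totals; the real read_info_file() additionally merges repeated sections, validates checksums, and handles FN/FNDA/BRDA records and gzip input:

```python
import re

def parse_info(text):
    """Minimal sketch of an lcov tracefile parser: per-file line
    execution counts plus found/hit totals."""
    result, sums, fname = {}, {}, None
    for line in text.splitlines():
        if m := re.match(r"^[SK]F:(.*)", line):
            fname, sums = m.group(1), {}          # start of a file section
        elif m := re.match(r"^DA:(\d+),(-?\d+)", line):
            lineno = int(m.group(1))
            count = max(0, int(m.group(2)))       # clamp negative counts to 0
            sums[lineno] = sums.get(lineno, 0) + count
        elif line == "end_of_record" and fname:
            result[fname] = {
                "sum": sums,
                "found": len(sums),               # instrumented lines
                "hit": sum(1 for c in sums.values() if c > 0),
            }
            fname = None
    return result

info = "TN:boot\nSF:/src/a.c\nDA:1,2\nDA:2,0\nend_of_record\n"
r = parse_info(info)
assert r["/src/a.c"]["found"] == 2 and r["/src/a.c"]["hit"] == 1
```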
+
+
+#
+# get_info_entry(hash_ref)
+#
+# Retrieve data from an entry of the structure generated by read_info_file().
+# Return a list of references to hashes:
+# (test data hash ref, sum count hash ref, funcdata hash ref, checkdata hash
+# ref, testfncdata hash ref, sumfnccount hash ref, testbrdata hash ref,
+# sumbrcount hash ref, lines found, lines hit, functions found, functions
+# hit, branches found, branches hit)
+#
+
+sub get_info_entry($)
+{
+ my $testdata_ref = $_[0]->{"test"};
+ my $sumcount_ref = $_[0]->{"sum"};
+ my $funcdata_ref = $_[0]->{"func"};
+ my $checkdata_ref = $_[0]->{"check"};
+ my $testfncdata = $_[0]->{"testfnc"};
+ my $sumfnccount = $_[0]->{"sumfnc"};
+ my $testbrdata = $_[0]->{"testbr"};
+ my $sumbrcount = $_[0]->{"sumbr"};
+ my $lines_found = $_[0]->{"found"};
+ my $lines_hit = $_[0]->{"hit"};
+ my $fn_found = $_[0]->{"f_found"};
+ my $fn_hit = $_[0]->{"f_hit"};
+ my $br_found = $_[0]->{"b_found"};
+ my $br_hit = $_[0]->{"b_hit"};
+
+ return ($testdata_ref, $sumcount_ref, $funcdata_ref, $checkdata_ref,
+ $testfncdata, $sumfnccount, $testbrdata, $sumbrcount,
+ $lines_found, $lines_hit, $fn_found, $fn_hit,
+ $br_found, $br_hit);
+}
+
+
+#
+# set_info_entry(hash_ref, testdata_ref, sumcount_ref, funcdata_ref,
+#                checkdata_ref, testfncdata_ref, sumfnccount_ref,
+#                testbrdata_ref, sumbrcount_ref[, lines_found,
+#                lines_hit, f_found, f_hit, b_found, b_hit])
+#
+# Update the hash referenced by HASH_REF with the provided data references.
+#
+
+sub set_info_entry($$$$$$$$$;$$$$$$)
+{
+ my $data_ref = $_[0];
+
+ $data_ref->{"test"} = $_[1];
+ $data_ref->{"sum"} = $_[2];
+ $data_ref->{"func"} = $_[3];
+ $data_ref->{"check"} = $_[4];
+ $data_ref->{"testfnc"} = $_[5];
+ $data_ref->{"sumfnc"} = $_[6];
+ $data_ref->{"testbr"} = $_[7];
+ $data_ref->{"sumbr"} = $_[8];
+
+ if (defined($_[9])) { $data_ref->{"found"} = $_[9]; }
+ if (defined($_[10])) { $data_ref->{"hit"} = $_[10]; }
+ if (defined($_[11])) { $data_ref->{"f_found"} = $_[11]; }
+ if (defined($_[12])) { $data_ref->{"f_hit"} = $_[12]; }
+ if (defined($_[13])) { $data_ref->{"b_found"} = $_[13]; }
+ if (defined($_[14])) { $data_ref->{"b_hit"} = $_[14]; }
+}
+
+
+#
+# add_counts(data1_ref, data2_ref)
+#
+# DATA1_REF and DATA2_REF are references to hashes containing a mapping
+#
+# line number -> execution count
+#
+# Return a list (RESULT_REF, LINES_FOUND, LINES_HIT) where RESULT_REF
+# is a reference to a hash containing the combined mapping in which
+# execution counts are added.
+#
+
+sub add_counts($$)
+{
+ my %data1 = %{$_[0]}; # Hash 1
+ my %data2 = %{$_[1]}; # Hash 2
+ my %result; # Resulting hash
+ my $line; # Current line iteration scalar
+ my $data1_count; # Count of line in hash1
+ my $data2_count; # Count of line in hash2
+ my $found = 0; # Total number of lines found
+ my $hit = 0; # Number of lines with a count > 0
+
+ foreach $line (keys(%data1))
+ {
+ $data1_count = $data1{$line};
+ $data2_count = $data2{$line};
+
+ # Add counts if present in both hashes
+ if (defined($data2_count)) { $data1_count += $data2_count; }
+
+ # Store sum in %result
+ $result{$line} = $data1_count;
+
+ $found++;
+ if ($data1_count > 0) { $hit++; }
+ }
+
+ # Add lines unique to data2
+ foreach $line (keys(%data2))
+ {
+ # Skip lines already in data1
+ if (defined($data1{$line})) { next; }
+
+ # Copy count from data2
+ $result{$line} = $data2{$line};
+
+ $found++;
+ if ($result{$line} > 0) { $hit++; }
+ }
+
+ return (\%result, $found, $hit);
+}
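The merge rule above — sum counts for shared lines, copy counts unique to either side, then recount found/hit — reads more directly as a Python sketch (names are mine):

```python
def add_counts(data1, data2):
    """Combine two {line: count} maps the way add_counts() does.
    Returns (merged, found, hit)."""
    merged = dict(data1)
    for line, count in data2.items():
        # Sum counts for shared lines; copy lines unique to data2
        merged[line] = merged.get(line, 0) + count
    found = len(merged)                              # total instrumented lines
    hit = sum(1 for c in merged.values() if c > 0)   # lines with count > 0
    return merged, found, hit

merged, found, hit = add_counts({1: 2, 2: 0}, {2: 1, 3: 0})
assert merged == {1: 2, 2: 1, 3: 0}
assert (found, hit) == (3, 2)
```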
+
+
+#
+# merge_checksums(ref1, ref2, filename)
+#
+# REF1 and REF2 are references to hashes containing a mapping
+#
+# line number -> checksum
+#
+# Merge checksum lists defined in REF1 and REF2 and return reference to
+# resulting hash. Die if a checksum for a line is defined in both hashes
+# but does not match.
+#
+
+sub merge_checksums($$$)
+{
+ my $ref1 = $_[0];
+ my $ref2 = $_[1];
+ my $filename = $_[2];
+ my %result;
+ my $line;
+
+ foreach $line (keys(%{$ref1}))
+ {
+ if (defined($ref2->{$line}) &&
+ ($ref1->{$line} ne $ref2->{$line}))
+ {
+ die("ERROR: checksum mismatch at $filename:$line\n");
+ }
+ $result{$line} = $ref1->{$line};
+ }
+
+ foreach $line (keys(%{$ref2}))
+ {
+ $result{$line} = $ref2->{$line};
+ }
+
+ return \%result;
+}
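
In Python the same merge-with-conflict-check might look like this (a sketch; the Perl version dies where this raises):

```python
def merge_checksums(ref1, ref2, filename):
    """Merge two {line: checksum} mappings; fail on conflicting entries."""
    result = dict(ref1)
    for line, checksum in ref2.items():
        if line in result and result[line] != checksum:
            raise ValueError(f"checksum mismatch at {filename}:{line}")
        result[line] = checksum
    return result
```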
+
+
+#
+# merge_func_data(funcdata1, funcdata2, filename)
+#
+# Merge two mappings of function name -> starting line number. If both data
+# sets define a function at different lines, warn and keep the first value.
+#
+
+sub merge_func_data($$$)
+{
+ my ($funcdata1, $funcdata2, $filename) = @_;
+ my %result;
+ my $func;
+
+ if (defined($funcdata1)) {
+ %result = %{$funcdata1};
+ }
+
+ foreach $func (keys(%{$funcdata2})) {
+ my $line1 = $result{$func};
+ my $line2 = $funcdata2->{$func};
+
+ if (defined($line1) && ($line1 != $line2)) {
+ warn("WARNING: function data mismatch at ".
+ "$filename:$line2\n");
+ next;
+ }
+ $result{$func} = $line2;
+ }
+
+ return \%result;
+}
+
+
+#
+# add_fnccount(fnccount1, fnccount2)
+#
+# Add function call count data. Return list (fnccount_added, fn_found, fn_hit)
+#
+
+sub add_fnccount($$)
+{
+ my ($fnccount1, $fnccount2) = @_;
+ my %result;
+ my $fn_found;
+ my $fn_hit;
+ my $function;
+
+ if (defined($fnccount1)) {
+ %result = %{$fnccount1};
+ }
+ foreach $function (keys(%{$fnccount2})) {
+ $result{$function} += $fnccount2->{$function};
+ }
+ $fn_found = scalar(keys(%result));
+ $fn_hit = 0;
+ foreach $function (keys(%result)) {
+ if ($result{$function} > 0) {
+ $fn_hit++;
+ }
+ }
+
+ return (\%result, $fn_found, $fn_hit);
+}
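
An equivalent Python sketch is compact with collections.Counter (illustrative only; zero-count functions still count toward fn_found, as above):

```python
from collections import Counter

def add_fnccount(fnccount1, fnccount2):
    """Sum per-function call counts; return (merged, fn_found, fn_hit)."""
    result = Counter(fnccount1)
    result.update(fnccount2)          # adds counts for shared functions
    fn_found = len(result)
    fn_hit = sum(1 for calls in result.values() if calls > 0)
    return dict(result), fn_found, fn_hit
```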
+
+#
+# add_testfncdata(testfncdata1, testfncdata2)
+#
+# Add function call count data for several tests. Return reference to
+# added_testfncdata.
+#
+
+sub add_testfncdata($$)
+{
+ my ($testfncdata1, $testfncdata2) = @_;
+ my %result;
+ my $testname;
+
+ foreach $testname (keys(%{$testfncdata1})) {
+ if (defined($testfncdata2->{$testname})) {
+ my $fnccount;
+
+ # Function call count data for this testname exists
+ # in both data sets: add
+ ($fnccount) = add_fnccount(
+ $testfncdata1->{$testname},
+ $testfncdata2->{$testname});
+ $result{$testname} = $fnccount;
+ next;
+ }
+ # Function call count data for this testname is unique to
+ # data set 1: copy
+ $result{$testname} = $testfncdata1->{$testname};
+ }
+
+ # Add count data for testnames unique to data set 2
+ foreach $testname (keys(%{$testfncdata2})) {
+ if (!defined($result{$testname})) {
+ $result{$testname} = $testfncdata2->{$testname};
+ }
+ }
+ return \%result;
+}
+
+
+#
+# brcount_to_db(brcount)
+#
+# Convert brcount data to the following format:
+#
+# db: line number -> block hash
+# block hash: block number -> branch hash
+# branch hash: branch number -> taken value
+#
+
+sub brcount_to_db($)
+{
+ my ($brcount) = @_;
+ my $line;
+ my $db;
+
+ # Add branches from first count to database
+ foreach $line (keys(%{$brcount})) {
+ my $brdata = $brcount->{$line};
+ my $i;
+ my $num = br_ivec_len($brdata);
+
+ for ($i = 0; $i < $num; $i++) {
+ my ($block, $branch, $taken) = br_ivec_get($brdata, $i);
+
+ $db->{$line}->{$block}->{$branch} = $taken;
+ }
+ }
+
+ return $db;
+}
+
+
+#
+# db_to_brcount(db)
+#
+# Convert branch coverage data in database format back to brcount format.
+# Return list (brcount, br_found, br_hit).
+#
+
+sub db_to_brcount($)
+{
+ my ($db) = @_;
+ my $line;
+ my $brcount = {};
+ my $br_found = 0;
+ my $br_hit = 0;
+
+ # Convert database back to brcount format
+ foreach $line (sort({$a <=> $b} keys(%{$db}))) {
+ my $ldata = $db->{$line};
+ my $brdata;
+ my $block;
+
+ foreach $block (sort({$a <=> $b} keys(%{$ldata}))) {
+ my $bdata = $ldata->{$block};
+ my $branch;
+
+ foreach $branch (sort({$a <=> $b} keys(%{$bdata}))) {
+ my $taken = $bdata->{$branch};
+
+ $br_found++;
+ $br_hit++ if ($taken ne "-" && $taken > 0);
+ $brdata = br_ivec_push($brdata, $block,
+ $branch, $taken);
+ }
+ }
+ $brcount->{$line} = $brdata;
+ }
+
+ return ($brcount, $br_found, $br_hit);
+}
+
+
+#
+# combine_brcount(brcount1, brcount2, type)
+#
+# If TYPE is $BR_ADD, add branch coverage data and return list (brcount_added,
+# br_found, br_hit). If TYPE is $BR_SUB, subtract the taken values of brcount2
+# from brcount1 and return (brcount_sub, br_found, br_hit).
+#
+
+sub combine_brcount($$$)
+{
+ my ($brcount1, $brcount2, $type) = @_;
+ my $line;
+ my $block;
+ my $branch;
+ my $taken;
+ my $db;
+ my $br_found = 0;
+ my $br_hit = 0;
+ my $result;
+
+ # Convert branches from first count to database
+ $db = brcount_to_db($brcount1);
+ # Combine values from database and second count
+ foreach $line (keys(%{$brcount2})) {
+ my $brdata = $brcount2->{$line};
+ my $num = br_ivec_len($brdata);
+ my $i;
+
+ for ($i = 0; $i < $num; $i++) {
+ ($block, $branch, $taken) = br_ivec_get($brdata, $i);
+ my $new_taken = $db->{$line}->{$block}->{$branch};
+
+ if ($type == $BR_ADD) {
+ $new_taken = br_taken_add($new_taken, $taken);
+ } elsif ($type == $BR_SUB) {
+ $new_taken = br_taken_sub($new_taken, $taken);
+ }
+ $db->{$line}->{$block}->{$branch} = $new_taken
+ if (defined($new_taken));
+ }
+ }
+ # Convert database back to brcount format
+ ($result, $br_found, $br_hit) = db_to_brcount($db);
+
+ return ($result, $br_found, $br_hit);
+}
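
combine_brcount() relies on br_taken_add() and br_taken_sub() (defined elsewhere in this script) to combine "taken" values, where "-" marks a branch whose enclosing block was never executed. A plausible Python sketch of that arithmetic — an assumption inferred from how the values are used here, not the script's actual helpers:

```python
def br_taken_add(t1, t2):
    # "-" means no data for this branch; the other value wins.
    if t1 == "-":
        return t2
    if t2 == "-":
        return t1
    return t1 + t2

def br_taken_sub(t1, t2):
    # Subtracting from "no data" leaves "no data"; clamp at zero.
    if t1 == "-" or t2 == "-":
        return t1
    return max(t1 - t2, 0)
```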
+
+
+#
+# add_testbrdata(testbrdata1, testbrdata2)
+#
+# Add branch coverage data for several tests. Return reference to
+# added_testbrdata.
+#
+
+sub add_testbrdata($$)
+{
+ my ($testbrdata1, $testbrdata2) = @_;
+ my %result;
+ my $testname;
+
+ foreach $testname (keys(%{$testbrdata1})) {
+ if (defined($testbrdata2->{$testname})) {
+ my $brcount;
+
+ # Branch coverage data for this testname exists
+ # in both data sets: add
+ ($brcount) = combine_brcount($testbrdata1->{$testname},
+ $testbrdata2->{$testname}, $BR_ADD);
+ $result{$testname} = $brcount;
+ next;
+ }
+ # Branch coverage data for this testname is unique to
+ # data set 1: copy
+ $result{$testname} = $testbrdata1->{$testname};
+ }
+
+ # Add count data for testnames unique to data set 2
+ foreach $testname (keys(%{$testbrdata2})) {
+ if (!defined($result{$testname})) {
+ $result{$testname} = $testbrdata2->{$testname};
+ }
+ }
+ return \%result;
+}
+
+
+#
+# combine_info_entries(entry_ref1, entry_ref2, filename)
+#
+# Combine .info data entry hashes referenced by ENTRY_REF1 and ENTRY_REF2.
+# Return reference to resulting hash.
+#
+
+sub combine_info_entries($$$)
+{
+ my $entry1 = $_[0]; # Reference to hash containing first entry
+ my $testdata1;
+ my $sumcount1;
+ my $funcdata1;
+ my $checkdata1;
+ my $testfncdata1;
+ my $sumfnccount1;
+ my $testbrdata1;
+ my $sumbrcount1;
+
+ my $entry2 = $_[1]; # Reference to hash containing second entry
+ my $testdata2;
+ my $sumcount2;
+ my $funcdata2;
+ my $checkdata2;
+ my $testfncdata2;
+ my $sumfnccount2;
+ my $testbrdata2;
+ my $sumbrcount2;
+
+ my %result; # Hash containing combined entry
+ my %result_testdata;
+ my $result_sumcount = {};
+ my $result_funcdata;
+ my $result_testfncdata;
+ my $result_sumfnccount;
+ my $result_testbrdata;
+ my $result_sumbrcount;
+ my $lines_found;
+ my $lines_hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+
+ my $testname;
+ my $filename = $_[2];
+
+ # Retrieve data
+ ($testdata1, $sumcount1, $funcdata1, $checkdata1, $testfncdata1,
+ $sumfnccount1, $testbrdata1, $sumbrcount1) = get_info_entry($entry1);
+ ($testdata2, $sumcount2, $funcdata2, $checkdata2, $testfncdata2,
+ $sumfnccount2, $testbrdata2, $sumbrcount2) = get_info_entry($entry2);
+
+ # Merge checksums
+ $checkdata1 = merge_checksums($checkdata1, $checkdata2, $filename);
+
+ # Combine funcdata
+ $result_funcdata = merge_func_data($funcdata1, $funcdata2, $filename);
+
+ # Combine function call count data
+ $result_testfncdata = add_testfncdata($testfncdata1, $testfncdata2);
+ ($result_sumfnccount, $fn_found, $fn_hit) =
+ add_fnccount($sumfnccount1, $sumfnccount2);
+
+ # Combine branch coverage data
+ $result_testbrdata = add_testbrdata($testbrdata1, $testbrdata2);
+ ($result_sumbrcount, $br_found, $br_hit) =
+ combine_brcount($sumbrcount1, $sumbrcount2, $BR_ADD);
+
+ # Combine testdata
+ foreach $testname (keys(%{$testdata1}))
+ {
+ if (defined($testdata2->{$testname}))
+ {
+ # testname is present in both entries, requires
+ # combination
+ ($result_testdata{$testname}) =
+ add_counts($testdata1->{$testname},
+ $testdata2->{$testname});
+ }
+ else
+ {
+ # testname only present in entry1, add to result
+ $result_testdata{$testname} = $testdata1->{$testname};
+ }
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ foreach $testname (keys(%{$testdata2}))
+ {
+ # Skip testnames already covered by previous iteration
+ if (defined($testdata1->{$testname})) { next; }
+
+ # testname only present in entry2, add to result hash
+ $result_testdata{$testname} = $testdata2->{$testname};
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ # Calculate resulting sumcount
+
+ # Store result
+ set_info_entry(\%result, \%result_testdata, $result_sumcount,
+ $result_funcdata, $checkdata1, $result_testfncdata,
+ $result_sumfnccount, $result_testbrdata,
+ $result_sumbrcount, $lines_found, $lines_hit,
+ $fn_found, $fn_hit, $br_found, $br_hit);
+
+ return(\%result);
+}
+
+
+#
+# combine_info_files(info_ref1, info_ref2)
+#
+# Combine .info data in hashes referenced by INFO_REF1 and INFO_REF2. Return
+# reference to resulting hash.
+#
+
+sub combine_info_files($$)
+{
+ my %hash1 = %{$_[0]};
+ my %hash2 = %{$_[1]};
+ my $filename;
+
+ foreach $filename (keys(%hash2))
+ {
+ if ($hash1{$filename})
+ {
+ # Entry already exists in hash1, combine them
+ $hash1{$filename} =
+ combine_info_entries($hash1{$filename},
+ $hash2{$filename},
+ $filename);
+ }
+ else
+ {
+			# Entry is unique to hash2, simply add to
+			# resulting hash
+ $hash1{$filename} = $hash2{$filename};
+ }
+ }
+
+ return(\%hash1);
+}
+
+
+#
+# get_prefix(filename_list)
+#
+# Search FILENAME_LIST for a directory prefix which is common to as many
+# list entries as possible, so that removing this prefix will minimize the
+# sum of the lengths of all resulting shortened filenames.
+#
+
+sub get_prefix(@)
+{
+ my @filename_list = @_; # provided list of filenames
+ my %prefix; # mapping: prefix -> sum of lengths
+ my $current; # Temporary iteration variable
+
+ # Find list of prefixes
+ foreach (@filename_list)
+ {
+ # Need explicit assignment to get a copy of $_ so that
+ # shortening the contained prefix does not affect the list
+ $current = shorten_prefix($_);
+ while ($current = shorten_prefix($current))
+ {
+ # Skip rest if the remaining prefix has already been
+ # added to hash
+ if ($prefix{$current}) { last; }
+
+ # Initialize with 0
+ $prefix{$current}="0";
+ }
+
+ }
+
+ # Calculate sum of lengths for all prefixes
+ foreach $current (keys(%prefix))
+ {
+ foreach (@filename_list)
+ {
+ # Add original length
+ $prefix{$current} += length($_);
+
+ # Check whether prefix matches
+ if (substr($_, 0, length($current)) eq $current)
+ {
+ # Subtract prefix length for this filename
+ $prefix{$current} -= length($current);
+ }
+ }
+ }
+
+ # Find and return prefix with minimal sum
+ $current = (keys(%prefix))[0];
+
+ foreach (keys(%prefix))
+ {
+ if ($prefix{$_} < $prefix{$current})
+ {
+ $current = $_;
+ }
+ }
+
+ return($current);
+}
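
The prefix search can be sketched in Python; as in the Perl code, candidate prefixes start two directory levels above each filename, and the candidate with the smallest total shortened length wins (illustrative translation):

```python
def shorten_prefix(prefix):
    """Drop the last path component ("a/b/c" -> "a/b")."""
    return "/".join(prefix.split("/")[:-1])

def get_prefix(filenames):
    """Pick the directory prefix minimizing the summed shortened lengths."""
    scores = {}
    # Collect candidate prefixes from every filename.
    for name in filenames:
        current = shorten_prefix(name)
        while current := shorten_prefix(current):
            if current in scores:
                break
            scores[current] = 0
    # Score each candidate: total filename length, minus the prefix
    # length for every filename the prefix matches.
    for prefix in scores:
        for name in filenames:
            scores[prefix] += len(name)
            if name.startswith(prefix):
                scores[prefix] -= len(prefix)
    return min(scores, key=scores.get) if scores else ""
```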
+
+
+#
+# shorten_prefix(prefix)
+#
+# Return PREFIX shortened by last directory component.
+#
+
+sub shorten_prefix($)
+{
+ my @list = split("/", $_[0]);
+
+ pop(@list);
+ return join("/", @list);
+}
+
+
+
+#
+# get_dir_list(filename_list)
+#
+# Return sorted list of directories for each entry in given FILENAME_LIST.
+#
+
+sub get_dir_list(@)
+{
+ my %result;
+
+ foreach (@_)
+ {
+ $result{shorten_prefix($_)} = "";
+ }
+
+ return(sort(keys(%result)));
+}
+
+
+#
+# get_relative_base_path(subdirectory)
+#
+# Return a relative path string which references the base path when applied
+# in SUBDIRECTORY.
+#
+# Example: get_relative_base_path("fs/mm") -> "../../"
+#
+
+sub get_relative_base_path($)
+{
+ my $result = "";
+ my $index;
+
+ # Make an empty directory path a special case
+ if (!$_[0]) { return(""); }
+
+ # Count number of /s in path
+ $index = ($_[0] =~ s/\//\//g);
+
+	# Add one "../" to $result for each "/" in the directory path, plus one
+ for (; $index>=0; $index--)
+ {
+ $result .= "../";
+ }
+
+ return $result;
+}
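
A Python sketch of the same computation (one "../" per path component):

```python
def get_relative_base_path(subdirectory):
    """Return a relative path that climbs from SUBDIRECTORY to the base."""
    if not subdirectory:
        return ""
    # One "../" for each "/" in the path, plus one for the final component.
    return "../" * (subdirectory.count("/") + 1)
```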
+
+
+#
+# read_testfile(test_filename)
+#
+# Read in file TEST_FILENAME which contains test descriptions in the format:
+#
+# TN:<whitespace><test name>
+# TD:<whitespace><test description>
+#
+# for each test case. Return a reference to a hash containing a mapping
+#
+# test name -> test description.
+#
+# Die on error.
+#
+
+sub read_testfile($)
+{
+ my %result;
+ my $test_name;
+ my $changed_testname;
+ local *TEST_HANDLE;
+
+ open(TEST_HANDLE, "<".$_[0])
+ or die("ERROR: cannot open $_[0]!\n");
+
+ while (<TEST_HANDLE>)
+ {
+ chomp($_);
+
+ # Match lines beginning with TN:<whitespace(s)>
+ if (/^TN:\s+(.*?)\s*$/)
+ {
+ # Store name for later use
+ $test_name = $1;
+ if ($test_name =~ s/\W/_/g)
+ {
+ $changed_testname = 1;
+ }
+ }
+
+ # Match lines beginning with TD:<whitespace(s)>
+ if (/^TD:\s+(.*?)\s*$/)
+ {
+ # Check for empty line
+ if ($1)
+ {
+ # Add description to hash
+ $result{$test_name} .= " $1";
+ }
+ else
+ {
+ # Add empty line
+ $result{$test_name} .= "\n\n";
+ }
+ }
+ }
+
+ close(TEST_HANDLE);
+
+ if ($changed_testname)
+ {
+ warn("WARNING: invalid characters removed from testname in ".
+ "descriptions file $_[0]\n");
+ }
+
+ return \%result;
+}
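
The TN:/TD: parsing above can be sketched in Python; this version reads from an iterable of lines rather than a file handle (illustrative only):

```python
import re

def read_testfile(lines):
    """Parse TN:/TD: description lines into {test_name: description}."""
    result = {}
    test_name = None
    for line in lines:
        if m := re.match(r"^TN:\s+(.*?)\s*$", line):
            # Non-word characters in test names are mapped to "_".
            test_name = re.sub(r"\W", "_", m.group(1))
        elif m := re.match(r"^TD:\s+(.*?)\s*$", line):
            text = m.group(1)
            result[test_name] = (result.get(test_name, "")
                                 + (f" {text}" if text else "\n\n"))
    return result
```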
+
+
+#
+# escape_html(STRING)
+#
+# Return a copy of STRING in which all occurrences of HTML special characters
+# are escaped.
+#
+
+sub escape_html($)
+{
+ my $string = $_[0];
+
+ if (!$string) { return ""; }
+
+ $string =~ s/&/&amp;/g; # & -> &amp;
+ $string =~ s/</&lt;/g; # < -> &lt;
+ $string =~ s/>/&gt;/g; # > -> &gt;
+ $string =~ s/\"/&quot;/g; # " -> &quot;
+
+ while ($string =~ /^([^\t]*)(\t)/)
+ {
+ my $replacement = " "x($tab_size - (length($1) % $tab_size));
+ $string =~ s/^([^\t]*)(\t)/$1$replacement/;
+ }
+
+ $string =~ s/\n/<br>/g; # \n -> <br>
+
+ return $string;
+}
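
A Python sketch of the escaping and tab expansion (the tab_size parameter stands in for the script's global $tab_size; note that "&" must be escaped first to avoid double-escaping):

```python
def escape_html(string, tab_size=8):
    """Escape HTML metacharacters, expand tabs, map newlines to <br>."""
    if not string:
        return ""
    for old, new in (("&", "&amp;"), ("<", "&lt;"),
                     (">", "&gt;"), ('"', "&quot;")):
        string = string.replace(old, new)
    # Expand each tab to the next tab stop, measured from the string start
    # (after escaping), as the Perl loop above does.
    out = ""
    for ch in string:
        if ch == "\t":
            out += " " * (tab_size - (len(out) % tab_size))
        else:
            out += ch
    return out.replace("\n", "<br>")
```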
+
+
+#
+# get_date_string()
+#
+# Return the current date in the form: yyyy-mm-dd
+#
+
+sub get_date_string()
+{
+ my $year;
+ my $month;
+ my $day;
+
+ ($year, $month, $day) = (localtime())[5, 4, 3];
+
+ return sprintf("%d-%02d-%02d", $year+1900, $month+1, $day);
+}
+
+
+#
+# create_sub_dir(dir_name)
+#
+# Create subdirectory DIR_NAME if it does not already exist, including all its
+# parent directories.
+#
+# Die on error.
+#
+
+sub create_sub_dir($)
+{
+ my ($dir) = @_;
+
+ system("mkdir", "-p" ,$dir)
+ and die("ERROR: cannot create directory $dir!\n");
+}
+
+
+#
+# write_description_file(descriptions, overall_found, overall_hit,
+# total_fn_found, total_fn_hit, total_br_found,
+# total_br_hit)
+#
+# Write HTML file containing all test case descriptions. DESCRIPTIONS is a
+# reference to a hash containing a mapping
+#
+# test case name -> test case description
+#
+# Die on error.
+#
+
+sub write_description_file($$$$$$$)
+{
+ my %description = %{$_[0]};
+ my $found = $_[1];
+ my $hit = $_[2];
+ my $fn_found = $_[3];
+ my $fn_hit = $_[4];
+ my $br_found = $_[5];
+ my $br_hit = $_[6];
+ my $test_name;
+ local *HTML_HANDLE;
+
+ html_create(*HTML_HANDLE,"descriptions.$html_ext");
+ write_html_prolog(*HTML_HANDLE, "", "LCOV - test case descriptions");
+ write_header(*HTML_HANDLE, 3, "", "", $found, $hit, $fn_found,
+ $fn_hit, $br_found, $br_hit, 0);
+
+ write_test_table_prolog(*HTML_HANDLE,
+ "Test case descriptions - alphabetical list");
+
+ foreach $test_name (sort(keys(%description)))
+ {
+ write_test_table_entry(*HTML_HANDLE, $test_name,
+ escape_html($description{$test_name}));
+ }
+
+ write_test_table_epilog(*HTML_HANDLE);
+ write_html_epilog(*HTML_HANDLE, "");
+
+ close(*HTML_HANDLE);
+}
+
+
+
+#
+# write_png_files()
+#
+# Create all necessary .png files for the HTML-output in the current
+# directory. .png-files are used as bar graphs.
+#
+# Die on error.
+#
+
+sub write_png_files()
+{
+ my %data;
+ local *PNG_HANDLE;
+
+ $data{"ruby.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x01,
+ 0x00, 0x00, 0x00, 0x01, 0x01, 0x03, 0x00, 0x00, 0x00, 0x25,
+ 0xdb, 0x56, 0xca, 0x00, 0x00, 0x00, 0x07, 0x74, 0x49, 0x4d,
+ 0x45, 0x07, 0xd2, 0x07, 0x11, 0x0f, 0x18, 0x10, 0x5d, 0x57,
+ 0x34, 0x6e, 0x00, 0x00, 0x00, 0x09, 0x70, 0x48, 0x59, 0x73,
+ 0x00, 0x00, 0x0b, 0x12, 0x00, 0x00, 0x0b, 0x12, 0x01, 0xd2,
+ 0xdd, 0x7e, 0xfc, 0x00, 0x00, 0x00, 0x04, 0x67, 0x41, 0x4d,
+ 0x41, 0x00, 0x00, 0xb1, 0x8f, 0x0b, 0xfc, 0x61, 0x05, 0x00,
+ 0x00, 0x00, 0x06, 0x50, 0x4c, 0x54, 0x45, 0xff, 0x35, 0x2f,
+ 0x00, 0x00, 0x00, 0xd0, 0x33, 0x9a, 0x9d, 0x00, 0x00, 0x00,
+ 0x0a, 0x49, 0x44, 0x41, 0x54, 0x78, 0xda, 0x63, 0x60, 0x00,
+ 0x00, 0x00, 0x02, 0x00, 0x01, 0xe5, 0x27, 0xde, 0xfc, 0x00,
+ 0x00, 0x00, 0x00, 0x49, 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60,
+ 0x82];
+ $data{"amber.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x01,
+ 0x00, 0x00, 0x00, 0x01, 0x01, 0x03, 0x00, 0x00, 0x00, 0x25,
+ 0xdb, 0x56, 0xca, 0x00, 0x00, 0x00, 0x07, 0x74, 0x49, 0x4d,
+ 0x45, 0x07, 0xd2, 0x07, 0x11, 0x0f, 0x28, 0x04, 0x98, 0xcb,
+ 0xd6, 0xe0, 0x00, 0x00, 0x00, 0x09, 0x70, 0x48, 0x59, 0x73,
+ 0x00, 0x00, 0x0b, 0x12, 0x00, 0x00, 0x0b, 0x12, 0x01, 0xd2,
+ 0xdd, 0x7e, 0xfc, 0x00, 0x00, 0x00, 0x04, 0x67, 0x41, 0x4d,
+ 0x41, 0x00, 0x00, 0xb1, 0x8f, 0x0b, 0xfc, 0x61, 0x05, 0x00,
+ 0x00, 0x00, 0x06, 0x50, 0x4c, 0x54, 0x45, 0xff, 0xe0, 0x50,
+ 0x00, 0x00, 0x00, 0xa2, 0x7a, 0xda, 0x7e, 0x00, 0x00, 0x00,
+ 0x0a, 0x49, 0x44, 0x41, 0x54, 0x78, 0xda, 0x63, 0x60, 0x00,
+ 0x00, 0x00, 0x02, 0x00, 0x01, 0xe5, 0x27, 0xde, 0xfc, 0x00,
+ 0x00, 0x00, 0x00, 0x49, 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60,
+ 0x82];
+ $data{"emerald.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x01,
+ 0x00, 0x00, 0x00, 0x01, 0x01, 0x03, 0x00, 0x00, 0x00, 0x25,
+ 0xdb, 0x56, 0xca, 0x00, 0x00, 0x00, 0x07, 0x74, 0x49, 0x4d,
+ 0x45, 0x07, 0xd2, 0x07, 0x11, 0x0f, 0x22, 0x2b, 0xc9, 0xf5,
+ 0x03, 0x33, 0x00, 0x00, 0x00, 0x09, 0x70, 0x48, 0x59, 0x73,
+ 0x00, 0x00, 0x0b, 0x12, 0x00, 0x00, 0x0b, 0x12, 0x01, 0xd2,
+ 0xdd, 0x7e, 0xfc, 0x00, 0x00, 0x00, 0x04, 0x67, 0x41, 0x4d,
+ 0x41, 0x00, 0x00, 0xb1, 0x8f, 0x0b, 0xfc, 0x61, 0x05, 0x00,
+ 0x00, 0x00, 0x06, 0x50, 0x4c, 0x54, 0x45, 0x1b, 0xea, 0x59,
+ 0x0a, 0x0a, 0x0a, 0x0f, 0xba, 0x50, 0x83, 0x00, 0x00, 0x00,
+ 0x0a, 0x49, 0x44, 0x41, 0x54, 0x78, 0xda, 0x63, 0x60, 0x00,
+ 0x00, 0x00, 0x02, 0x00, 0x01, 0xe5, 0x27, 0xde, 0xfc, 0x00,
+ 0x00, 0x00, 0x00, 0x49, 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60,
+ 0x82];
+ $data{"snow.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x01,
+ 0x00, 0x00, 0x00, 0x01, 0x01, 0x03, 0x00, 0x00, 0x00, 0x25,
+ 0xdb, 0x56, 0xca, 0x00, 0x00, 0x00, 0x07, 0x74, 0x49, 0x4d,
+ 0x45, 0x07, 0xd2, 0x07, 0x11, 0x0f, 0x1e, 0x1d, 0x75, 0xbc,
+ 0xef, 0x55, 0x00, 0x00, 0x00, 0x09, 0x70, 0x48, 0x59, 0x73,
+ 0x00, 0x00, 0x0b, 0x12, 0x00, 0x00, 0x0b, 0x12, 0x01, 0xd2,
+ 0xdd, 0x7e, 0xfc, 0x00, 0x00, 0x00, 0x04, 0x67, 0x41, 0x4d,
+ 0x41, 0x00, 0x00, 0xb1, 0x8f, 0x0b, 0xfc, 0x61, 0x05, 0x00,
+ 0x00, 0x00, 0x06, 0x50, 0x4c, 0x54, 0x45, 0xff, 0xff, 0xff,
+ 0x00, 0x00, 0x00, 0x55, 0xc2, 0xd3, 0x7e, 0x00, 0x00, 0x00,
+ 0x0a, 0x49, 0x44, 0x41, 0x54, 0x78, 0xda, 0x63, 0x60, 0x00,
+ 0x00, 0x00, 0x02, 0x00, 0x01, 0xe5, 0x27, 0xde, 0xfc, 0x00,
+ 0x00, 0x00, 0x00, 0x49, 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60,
+ 0x82];
+ $data{"glass.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x01,
+ 0x00, 0x00, 0x00, 0x01, 0x01, 0x03, 0x00, 0x00, 0x00, 0x25,
+ 0xdb, 0x56, 0xca, 0x00, 0x00, 0x00, 0x04, 0x67, 0x41, 0x4d,
+ 0x41, 0x00, 0x00, 0xb1, 0x8f, 0x0b, 0xfc, 0x61, 0x05, 0x00,
+ 0x00, 0x00, 0x06, 0x50, 0x4c, 0x54, 0x45, 0xff, 0xff, 0xff,
+ 0x00, 0x00, 0x00, 0x55, 0xc2, 0xd3, 0x7e, 0x00, 0x00, 0x00,
+ 0x01, 0x74, 0x52, 0x4e, 0x53, 0x00, 0x40, 0xe6, 0xd8, 0x66,
+ 0x00, 0x00, 0x00, 0x01, 0x62, 0x4b, 0x47, 0x44, 0x00, 0x88,
+ 0x05, 0x1d, 0x48, 0x00, 0x00, 0x00, 0x09, 0x70, 0x48, 0x59,
+ 0x73, 0x00, 0x00, 0x0b, 0x12, 0x00, 0x00, 0x0b, 0x12, 0x01,
+ 0xd2, 0xdd, 0x7e, 0xfc, 0x00, 0x00, 0x00, 0x07, 0x74, 0x49,
+ 0x4d, 0x45, 0x07, 0xd2, 0x07, 0x13, 0x0f, 0x08, 0x19, 0xc4,
+ 0x40, 0x56, 0x10, 0x00, 0x00, 0x00, 0x0a, 0x49, 0x44, 0x41,
+ 0x54, 0x78, 0x9c, 0x63, 0x60, 0x00, 0x00, 0x00, 0x02, 0x00,
+ 0x01, 0x48, 0xaf, 0xa4, 0x71, 0x00, 0x00, 0x00, 0x00, 0x49,
+ 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60, 0x82];
+ $data{"updown.png"} =
+ [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a, 0x00, 0x00,
+ 0x00, 0x0d, 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x00, 0x0a,
+ 0x00, 0x00, 0x00, 0x0e, 0x08, 0x06, 0x00, 0x00, 0x00, 0x16,
+ 0xa3, 0x8d, 0xab, 0x00, 0x00, 0x00, 0x3c, 0x49, 0x44, 0x41,
+ 0x54, 0x28, 0xcf, 0x63, 0x60, 0x40, 0x03, 0xff, 0xa1, 0x00,
+ 0x5d, 0x9c, 0x11, 0x5d, 0x11, 0x8a, 0x24, 0x23, 0x23, 0x23,
+ 0x86, 0x42, 0x6c, 0xa6, 0x20, 0x2b, 0x66, 0xc4, 0xa7, 0x08,
+ 0x59, 0x31, 0x23, 0x21, 0x45, 0x30, 0xc0, 0xc4, 0x30, 0x60,
+ 0x80, 0xfa, 0x6e, 0x24, 0x3e, 0x78, 0x48, 0x0a, 0x70, 0x62,
+ 0xa2, 0x90, 0x81, 0xd8, 0x44, 0x01, 0x00, 0xe9, 0x5c, 0x2f,
+ 0xf5, 0xe2, 0x9d, 0x0f, 0xf9, 0x00, 0x00, 0x00, 0x00, 0x49,
+ 0x45, 0x4e, 0x44, 0xae, 0x42, 0x60, 0x82] if ($sort);
+ foreach (keys(%data))
+ {
+ open(PNG_HANDLE, ">".$_)
+ or die("ERROR: cannot create $_!\n");
+ binmode(PNG_HANDLE);
+ print(PNG_HANDLE map(chr,@{$data{$_}}));
+ close(PNG_HANDLE);
+ }
+}
+
+
+#
+# write_htaccess_file()
+#
+# Write a .htaccess file to the current directory which marks .html files
+# as gzip-encoded.
+#
+
+sub write_htaccess_file()
+{
+ local *HTACCESS_HANDLE;
+ my $htaccess_data;
+
+ open(*HTACCESS_HANDLE, ">.htaccess")
+ or die("ERROR: cannot open .htaccess for writing!\n");
+
+ $htaccess_data = (<<"END_OF_HTACCESS")
+AddEncoding x-gzip .html
+END_OF_HTACCESS
+ ;
+
+ print(HTACCESS_HANDLE $htaccess_data);
+ close(*HTACCESS_HANDLE);
+}
+
+
+#
+# write_css_file()
+#
+# Write the cascading style sheet file gcov.css to the current directory.
+# This file defines basic layout attributes of all generated HTML pages.
+#
+
+sub write_css_file()
+{
+ local *CSS_HANDLE;
+
+ # Check for a specified external style sheet file
+ if ($css_filename)
+ {
+ # Simply copy that file
+ system("cp", $css_filename, "gcov.css")
+ and die("ERROR: cannot copy file $css_filename!\n");
+ return;
+ }
+
+ open(CSS_HANDLE, ">gcov.css")
+ or die ("ERROR: cannot open gcov.css for writing!\n");
+
+
+ # *************************************************************
+
+ my $css_data = ($_=<<"END_OF_CSS")
+ /* All views: initial background and text color */
+ body
+ {
+ color: #000000;
+ background-color: #FFFFFF;
+ }
+
+	/* All views: standard link format */
+ a:link
+ {
+ color: #284FA8;
+ text-decoration: underline;
+ }
+
+ /* All views: standard link - visited format */
+ a:visited
+ {
+ color: #00CB40;
+ text-decoration: underline;
+ }
+
+ /* All views: standard link - activated format */
+ a:active
+ {
+ color: #FF0040;
+ text-decoration: underline;
+ }
+
+ /* All views: main title format */
+ td.title
+ {
+ text-align: center;
+ padding-bottom: 10px;
+ font-family: sans-serif;
+ font-size: 20pt;
+ font-style: italic;
+ font-weight: bold;
+ }
+
+ /* All views: header item format */
+ td.headerItem
+ {
+ text-align: right;
+ padding-right: 6px;
+ font-family: sans-serif;
+ font-weight: bold;
+ vertical-align: top;
+ white-space: nowrap;
+ }
+
+ /* All views: header item value format */
+ td.headerValue
+ {
+ text-align: left;
+ color: #284FA8;
+ font-family: sans-serif;
+ font-weight: bold;
+ white-space: nowrap;
+ }
+
+ /* All views: header item coverage table heading */
+ td.headerCovTableHead
+ {
+ text-align: center;
+ padding-right: 6px;
+ padding-left: 6px;
+ padding-bottom: 0px;
+ font-family: sans-serif;
+ font-size: 80%;
+ white-space: nowrap;
+ }
+
+ /* All views: header item coverage table entry */
+ td.headerCovTableEntry
+ {
+ text-align: right;
+ color: #284FA8;
+ font-family: sans-serif;
+ font-weight: bold;
+ white-space: nowrap;
+ padding-left: 12px;
+ padding-right: 4px;
+ background-color: #DAE7FE;
+ }
+
+ /* All views: header item coverage table entry for high coverage rate */
+ td.headerCovTableEntryHi
+ {
+ text-align: right;
+ color: #000000;
+ font-family: sans-serif;
+ font-weight: bold;
+ white-space: nowrap;
+ padding-left: 12px;
+ padding-right: 4px;
+ background-color: #A7FC9D;
+ }
+
+ /* All views: header item coverage table entry for medium coverage rate */
+ td.headerCovTableEntryMed
+ {
+ text-align: right;
+ color: #000000;
+ font-family: sans-serif;
+ font-weight: bold;
+ white-space: nowrap;
+ padding-left: 12px;
+ padding-right: 4px;
+ background-color: #FFEA20;
+ }
+
+	/* All views: header item coverage table entry for low coverage rate */
+ td.headerCovTableEntryLo
+ {
+ text-align: right;
+ color: #000000;
+ font-family: sans-serif;
+ font-weight: bold;
+ white-space: nowrap;
+ padding-left: 12px;
+ padding-right: 4px;
+ background-color: #FF0000;
+ }
+
+ /* All views: header legend value for legend entry */
+ td.headerValueLeg
+ {
+ text-align: left;
+ color: #000000;
+ font-family: sans-serif;
+ font-size: 80%;
+ white-space: nowrap;
+ padding-top: 4px;
+ }
+
+ /* All views: color of horizontal ruler */
+ td.ruler
+ {
+ background-color: #6688D4;
+ }
+
+ /* All views: version string format */
+ td.versionInfo
+ {
+ text-align: center;
+ padding-top: 2px;
+ font-family: sans-serif;
+ font-style: italic;
+ }
+
+ /* Directory view/File view (all)/Test case descriptions:
+ table headline format */
+ td.tableHead
+ {
+ text-align: center;
+ color: #FFFFFF;
+ background-color: #6688D4;
+ font-family: sans-serif;
+ font-size: 120%;
+ font-weight: bold;
+ white-space: nowrap;
+ padding-left: 4px;
+ padding-right: 4px;
+ }
+
+ span.tableHeadSort
+ {
+ padding-right: 4px;
+ }
+
+ /* Directory view/File view (all): filename entry format */
+ td.coverFile
+ {
+ text-align: left;
+ padding-left: 10px;
+ padding-right: 20px;
+ color: #284FA8;
+ background-color: #DAE7FE;
+ font-family: monospace;
+ }
+
+	/* Directory view/File view (all): bar-graph entry format */
+ td.coverBar
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #DAE7FE;
+ }
+
+ /* Directory view/File view (all): bar-graph outline color */
+ td.coverBarOutline
+ {
+ background-color: #000000;
+ }
+
+ /* Directory view/File view (all): percentage entry for files with
+ high coverage rate */
+ td.coverPerHi
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #A7FC9D;
+ font-weight: bold;
+ font-family: sans-serif;
+ }
+
+ /* Directory view/File view (all): line count entry for files with
+ high coverage rate */
+ td.coverNumHi
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #A7FC9D;
+ white-space: nowrap;
+ font-family: sans-serif;
+ }
+
+ /* Directory view/File view (all): percentage entry for files with
+ medium coverage rate */
+ td.coverPerMed
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #FFEA20;
+ font-weight: bold;
+ font-family: sans-serif;
+ }
+
+ /* Directory view/File view (all): line count entry for files with
+ medium coverage rate */
+ td.coverNumMed
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #FFEA20;
+ white-space: nowrap;
+ font-family: sans-serif;
+ }
+
+ /* Directory view/File view (all): percentage entry for files with
+ low coverage rate */
+ td.coverPerLo
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #FF0000;
+ font-weight: bold;
+ font-family: sans-serif;
+ }
+
+ /* Directory view/File view (all): line count entry for files with
+ low coverage rate */
+ td.coverNumLo
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #FF0000;
+ white-space: nowrap;
+ font-family: sans-serif;
+ }
+
+ /* File view (all): "show/hide details" link format */
+ a.detail:link
+ {
+ color: #B8D0FF;
+ font-size:80%;
+ }
+
+ /* File view (all): "show/hide details" link - visited format */
+ a.detail:visited
+ {
+ color: #B8D0FF;
+ font-size:80%;
+ }
+
+ /* File view (all): "show/hide details" link - activated format */
+ a.detail:active
+ {
+ color: #FFFFFF;
+ font-size:80%;
+ }
+
+ /* File view (detail): test name entry */
+ td.testName
+ {
+ text-align: right;
+ padding-right: 10px;
+ background-color: #DAE7FE;
+ font-family: sans-serif;
+ }
+
+ /* File view (detail): test percentage entry */
+ td.testPer
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #DAE7FE;
+ font-family: sans-serif;
+ }
+
+ /* File view (detail): test lines count entry */
+ td.testNum
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #DAE7FE;
+ font-family: sans-serif;
+ }
+
+	/* Test case descriptions: test name format */
+ dt
+ {
+ font-family: sans-serif;
+ font-weight: bold;
+ }
+
+ /* Test case descriptions: description table body */
+ td.testDescription
+ {
+ padding-top: 10px;
+ padding-left: 30px;
+ padding-bottom: 10px;
+ padding-right: 30px;
+ background-color: #DAE7FE;
+ }
+
+ /* Source code view: function entry */
+ td.coverFn
+ {
+ text-align: left;
+ padding-left: 10px;
+ padding-right: 20px;
+ color: #284FA8;
+ background-color: #DAE7FE;
+ font-family: monospace;
+ }
+
+	/* Source code view: function entry zero count */
+ td.coverFnLo
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #FF0000;
+ font-weight: bold;
+ font-family: sans-serif;
+ }
+
+	/* Source code view: function entry nonzero count */
+ td.coverFnHi
+ {
+ text-align: right;
+ padding-left: 10px;
+ padding-right: 10px;
+ background-color: #DAE7FE;
+ font-weight: bold;
+ font-family: sans-serif;
+ }
+
+ /* Source code view: source code format */
+ pre.source
+ {
+ font-family: monospace;
+ white-space: pre;
+ margin-top: 2px;
+ }
+
+ /* Source code view: line number format */
+ span.lineNum
+ {
+ background-color: #EFE383;
+ }
+
+ /* Source code view: format for lines which were executed */
+ td.lineCov,
+ span.lineCov
+ {
+ background-color: #CAD7FE;
+ }
+
+ /* Source code view: format for Cov legend */
+ span.coverLegendCov
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ padding-bottom: 2px;
+ background-color: #CAD7FE;
+ }
+
+ /* Source code view: format for lines which were not executed */
+ td.lineNoCov,
+ span.lineNoCov
+ {
+ background-color: #FF6230;
+ }
+
+ /* Source code view: format for NoCov legend */
+ span.coverLegendNoCov
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ padding-bottom: 2px;
+ background-color: #FF6230;
+ }
+
+ /* Source code view (function table): standard link - visited format */
+ td.lineNoCov > a:visited,
+ td.lineCov > a:visited
+ {
+ color: black;
+ text-decoration: underline;
+ }
+
+ /* Source code view: format for lines which were executed only in a
+ previous version */
+ span.lineDiffCov
+ {
+ background-color: #B5F7AF;
+ }
+
+ /* Source code view: format for branches which were executed
+ * and taken */
+ span.branchCov
+ {
+ background-color: #CAD7FE;
+ }
+
+ /* Source code view: format for branches which were executed
+ * but not taken */
+ span.branchNoCov
+ {
+ background-color: #FF6230;
+ }
+
+ /* Source code view: format for branches which were not executed */
+ span.branchNoExec
+ {
+ background-color: #FF6230;
+ }
+
+ /* Source code view: format for the source code heading line */
+ pre.sourceHeading
+ {
+ white-space: pre;
+ font-family: monospace;
+ font-weight: bold;
+ margin: 0px;
+ }
+
+ /* All views: header legend value for low rate */
+ td.headerValueLegL
+ {
+ font-family: sans-serif;
+ text-align: center;
+ white-space: nowrap;
+ padding-left: 4px;
+ padding-right: 2px;
+ background-color: #FF0000;
+ font-size: 80%;
+ }
+
+ /* All views: header legend value for med rate */
+ td.headerValueLegM
+ {
+ font-family: sans-serif;
+ text-align: center;
+ white-space: nowrap;
+ padding-left: 2px;
+ padding-right: 2px;
+ background-color: #FFEA20;
+ font-size: 80%;
+ }
+
+ /* All views: header legend value for hi rate */
+ td.headerValueLegH
+ {
+ font-family: sans-serif;
+ text-align: center;
+ white-space: nowrap;
+ padding-left: 2px;
+ padding-right: 4px;
+ background-color: #A7FC9D;
+ font-size: 80%;
+ }
+
+ /* All views except source code view: legend format for low coverage */
+ span.coverLegendCovLo
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ padding-top: 2px;
+ background-color: #FF0000;
+ }
+
+ /* All views except source code view: legend format for med coverage */
+ span.coverLegendCovMed
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ padding-top: 2px;
+ background-color: #FFEA20;
+ }
+
+ /* All views except source code view: legend format for hi coverage */
+ span.coverLegendCovHi
+ {
+ padding-left: 10px;
+ padding-right: 10px;
+ padding-top: 2px;
+ background-color: #A7FC9D;
+ }
+END_OF_CSS
+ ;
+
+ # *************************************************************
+
+
+ # Remove leading tab from all lines
+ $css_data =~ s/^\t//gm;
+
+ print(CSS_HANDLE $css_data);
+
+ close(CSS_HANDLE);
+}
+
+
+#
+# get_bar_graph_code(base_dir, cover_found, cover_hit)
+#
+# Return a string containing HTML code which implements a bar graph display
+# for a coverage rate of cover_hit * 100 / cover_found.
+#
+
+sub get_bar_graph_code($$$)
+{
+ my $rate;
+ my $alt;
+ my $width;
+ my $remainder;
+ my $png_name;
+ my $graph_code;
+
+ # Check number of instrumented lines
+ if ($_[1] == 0) { return ""; }
+
+ $rate = $_[2] * 100 / $_[1];
+ $alt = sprintf("%.1f", $rate)."%";
+ $width = sprintf("%.0f", $rate);
+ $remainder = sprintf("%d", 100-$width);
+
+ # Decide which .png file to use
+ $png_name = $rate_png[classify_rate($_[1], $_[2], $med_limit,
+ $hi_limit)];
+
+ if ($width == 0)
+ {
+ # Zero coverage
+ $graph_code = (<<END_OF_HTML)
+ <table border=0 cellspacing=0 cellpadding=1><tr><td class="coverBarOutline"><img src="$_[0]snow.png" width=100 height=10 alt="$alt"></td></tr></table>
+END_OF_HTML
+ ;
+ }
+ elsif ($width == 100)
+ {
+ # Full coverage
+ $graph_code = (<<END_OF_HTML)
+ <table border=0 cellspacing=0 cellpadding=1><tr><td class="coverBarOutline"><img src="$_[0]$png_name" width=100 height=10 alt="$alt"></td></tr></table>
+END_OF_HTML
+ ;
+ }
+ else
+ {
+ # Positive coverage
+ $graph_code = (<<END_OF_HTML)
+ <table border=0 cellspacing=0 cellpadding=1><tr><td class="coverBarOutline"><img src="$_[0]$png_name" width=$width height=10 alt="$alt"><img src="$_[0]snow.png" width=$remainder height=10 alt="$alt"></td></tr></table>
+END_OF_HTML
+ ;
+ }
+
+ # Remove leading tabs from all lines
+ $graph_code =~ s/^\t+//gm;
+ chomp($graph_code);
+
+ return($graph_code);
+}
+
+#
+# classify_rate(found, hit, med_limit, high_limit)
+#
+# Return 0 for a low rate, 1 for a medium rate and 2 for a high rate.
+# A zero FOUND count is classified as high.
+#
+
+sub classify_rate($$$$)
+{
+ my ($found, $hit, $med, $hi) = @_;
+ my $rate;
+
+ if ($found == 0) {
+ return 2;
+ }
+ $rate = $hit * 100 / $found;
+ if ($rate < $med) {
+ return 0;
+ } elsif ($rate < $hi) {
+ return 1;
+ }
+ return 2;
+}
+
+
+#
+# write_html(filehandle, html_code)
+#
+# Write out HTML_CODE to FILEHANDLE while removing a leading tab character
+# from each line of HTML_CODE.
+#
+
+sub write_html(*$)
+{
+ local *HTML_HANDLE = $_[0];
+ my $html_code = $_[1];
+
+ # Remove leading tab from all lines
+ $html_code =~ s/^\t//gm;
+
+ print(HTML_HANDLE $html_code)
+ or die("ERROR: cannot write HTML data ($!)\n");
+}
+
+
+#
+# write_html_prolog(filehandle, base_dir, pagetitle)
+#
+# Write an HTML prolog common to all HTML files to FILEHANDLE. PAGETITLE will
+# be used as HTML page title. BASE_DIR contains a relative path which points
+# to the base directory.
+#
+
+sub write_html_prolog(*$$)
+{
+ my $basedir = $_[1];
+ my $pagetitle = $_[2];
+ my $prolog;
+
+ $prolog = $html_prolog;
+ $prolog =~ s/\@pagetitle\@/$pagetitle/g;
+ $prolog =~ s/\@basedir\@/$basedir/g;
+
+ write_html($_[0], $prolog);
+}
+
+
+#
+# write_header_prolog(filehandle, base_dir)
+#
+# Write beginning of page header HTML code.
+#
+
+sub write_header_prolog(*$)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <table width="100%" border=0 cellspacing=0 cellpadding=0>
+ <tr><td class="title">$title</td></tr>
+ <tr><td class="ruler"><img src="$_[1]glass.png" width=3 height=3 alt=""></td></tr>
+
+ <tr>
+ <td width="100%">
+ <table cellpadding=1 border=0 width="100%">
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_header_line(handle, content)
+#
+# Write a header line with the specified table contents.
+#
+
+sub write_header_line(*@)
+{
+ my ($handle, @content) = @_;
+ my $entry;
+
+ write_html($handle, " <tr>\n");
+ foreach $entry (@content) {
+ my ($width, $class, $text, $colspan) = @{$entry};
+
+ if (defined($width)) {
+ $width = " width=\"$width\"";
+ } else {
+ $width = "";
+ }
+ if (defined($class)) {
+ $class = " class=\"$class\"";
+ } else {
+ $class = "";
+ }
+ if (defined($colspan)) {
+ $colspan = " colspan=\"$colspan\"";
+ } else {
+ $colspan = "";
+ }
+ $text = "" if (!defined($text));
+ write_html($handle,
+ " <td$width$class$colspan>$text</td>\n");
+ }
+ write_html($handle, " </tr>\n");
+}
+
+
+#
+# write_header_epilog(filehandle, base_dir)
+#
+# Write end of page header HTML code.
+#
+
+sub write_header_epilog(*$)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <tr><td><img src="$_[1]glass.png" width=3 height=3 alt=""></td></tr>
+ </table>
+ </td>
+ </tr>
+
+ <tr><td class="ruler"><img src="$_[1]glass.png" width=3 height=3 alt=""></td></tr>
+ </table>
+
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_file_table_prolog(handle, file_heading, ([heading, num_cols], ...))
+#
+# Write heading for file table.
+#
+
+sub write_file_table_prolog(*$@)
+{
+ my ($handle, $file_heading, @columns) = @_;
+ my $num_columns = 0;
+ my $file_width;
+ my $col;
+ my $width;
+
+ $width = 20 if (scalar(@columns) == 1);
+ $width = 10 if (scalar(@columns) == 2);
+ $width = 8 if (scalar(@columns) > 2);
+
+ foreach $col (@columns) {
+ my ($heading, $cols) = @{$col};
+
+ $num_columns += $cols;
+ }
+ $file_width = 100 - $num_columns * $width;
+
+ # Table definition
+ write_html($handle, <<END_OF_HTML);
+ <center>
+ <table width="80%" cellpadding=1 cellspacing=1 border=0>
+
+ <tr>
+ <td width="$file_width%"><br></td>
+END_OF_HTML
+ # Empty first row
+ foreach $col (@columns) {
+ my ($heading, $cols) = @{$col};
+
+ while ($cols-- > 0) {
+ write_html($handle, <<END_OF_HTML);
+ <td width="$width%"></td>
+END_OF_HTML
+ }
+ }
+ # Next row
+ write_html($handle, <<END_OF_HTML);
+ </tr>
+
+ <tr>
+ <td class="tableHead">$file_heading</td>
+END_OF_HTML
+ # Heading row
+ foreach $col (@columns) {
+ my ($heading, $cols) = @{$col};
+ my $colspan = "";
+
+ $colspan = " colspan=$cols" if ($cols > 1);
+ write_html($handle, <<END_OF_HTML);
+ <td class="tableHead"$colspan>$heading</td>
+END_OF_HTML
+ }
+ write_html($handle, <<END_OF_HTML);
+ </tr>
+END_OF_HTML
+}
+
+
+#
+# write_file_table_entry(handle, base_dir, filename, page_link,
+#                        ([found, hit, med_limit, hi_limit, graph], ...))
+#
+# Write an entry of the file table.
+#
+
+sub write_file_table_entry(*$$$@)
+{
+ my ($handle, $base_dir, $filename, $page_link, @entries) = @_;
+ my $file_code;
+ my $entry;
+
+ # Add link to source if provided
+ if (defined($page_link) && $page_link ne "") {
+ $file_code = "<a href=\"$page_link\">$filename</a>";
+ } else {
+ $file_code = $filename;
+ }
+
+ # First column: filename
+ write_html($handle, <<END_OF_HTML);
+ <tr>
+ <td class="coverFile">$file_code</td>
+END_OF_HTML
+ # Columns as defined
+ foreach $entry (@entries) {
+ my ($found, $hit, $med, $hi, $graph) = @{$entry};
+ my $bar_graph;
+ my $class;
+ my $rate;
+
+ # Generate bar graph if requested
+ if ($graph) {
+ $bar_graph = get_bar_graph_code($base_dir, $found,
+ $hit);
+ write_html($handle, <<END_OF_HTML);
+ <td class="coverBar" align="center">
+ $bar_graph
+ </td>
+END_OF_HTML
+ }
+ # Get rate color and text
+ if ($found == 0) {
+ $rate = "-";
+ $class = "Hi";
+ } else {
+ $rate = sprintf("%.1f&nbsp;%%", $hit * 100 / $found);
+ $class = $rate_name[classify_rate($found, $hit,
+ $med, $hi)];
+ }
+ write_html($handle, <<END_OF_HTML);
+ <td class="coverPer$class">$rate</td>
+ <td class="coverNum$class">$hit / $found</td>
+END_OF_HTML
+ }
+ # End of row
+ write_html($handle, <<END_OF_HTML);
+ </tr>
+END_OF_HTML
+}
+
+
+#
+# write_file_table_detail_entry(filehandle, test_name, ([found, hit], ...))
+#
+# Write entry for detail section in file table.
+#
+
+sub write_file_table_detail_entry(*$@)
+{
+ my ($handle, $test, @entries) = @_;
+ my $entry;
+
+ if ($test eq "") {
+ $test = "<span style=\"font-style:italic\">&lt;unnamed&gt;</span>";
+ } elsif ($test =~ /^(.*),diff$/) {
+ $test = $1." (converted)";
+ }
+ # Testname
+ write_html($handle, <<END_OF_HTML);
+ <tr>
+ <td class="testName" colspan=2>$test</td>
+END_OF_HTML
+ # Test data
+ foreach $entry (@entries) {
+ my ($found, $hit) = @{$entry};
+ my $rate = "-";
+
+ if ($found > 0) {
+ $rate = sprintf("%.1f&nbsp;%%", $hit * 100 / $found);
+ }
+ write_html($handle, <<END_OF_HTML);
+ <td class="testPer">$rate</td>
+ <td class="testNum">$hit&nbsp;/&nbsp;$found</td>
+END_OF_HTML
+ }
+
+ write_html($handle, <<END_OF_HTML);
+ </tr>
+
+END_OF_HTML
+
+ # *************************************************************
+}
+
+
+#
+# write_file_table_epilog(filehandle)
+#
+# Write end of file table HTML code.
+#
+
+sub write_file_table_epilog(*)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ </table>
+ </center>
+ <br>
+
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_test_table_prolog(filehandle, table_heading)
+#
+# Write heading for test case description table.
+#
+
+sub write_test_table_prolog(*$)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <center>
+ <table width="80%" cellpadding=2 cellspacing=1 border=0>
+
+ <tr>
+ <td><br></td>
+ </tr>
+
+ <tr>
+ <td class="tableHead">$_[1]</td>
+ </tr>
+
+ <tr>
+ <td class="testDescription">
+ <dl>
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_test_table_entry(filehandle, test_name, test_description)
+#
+# Write entry for the test table.
+#
+
+sub write_test_table_entry(*$$)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <dt>$_[1]<a name="$_[1]">&nbsp;</a></dt>
+ <dd>$_[2]<br><br></dd>
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_test_table_epilog(filehandle)
+#
+# Write end of test description table HTML code.
+#
+
+sub write_test_table_epilog(*)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ </dl>
+ </td>
+ </tr>
+ </table>
+ </center>
+ <br>
+
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+sub fmt_centered($$)
+{
+ my ($width, $text) = @_;
+ my $w0 = length($text);
+ my $w1 = int(($width - $w0) / 2);
+ my $w2 = $width - $w0 - $w1;
+
+ return (" "x$w1).$text.(" "x$w2);
+}
+
+
+#
+# write_source_prolog(filehandle)
+#
+# Write start of source code table.
+#
+
+sub write_source_prolog(*)
+{
+ my $lineno_heading = " ";
+ my $branch_heading = "";
+ my $line_heading = fmt_centered($line_field_width, "Line data");
+ my $source_heading = " Source code";
+
+ if ($br_coverage) {
+ $branch_heading = fmt_centered($br_field_width, "Branch data").
+ " ";
+ }
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <table cellpadding=0 cellspacing=0 border=0>
+ <tr>
+ <td><br></td>
+ </tr>
+ <tr>
+ <td>
+<pre class="sourceHeading">${lineno_heading}${branch_heading}${line_heading} ${source_heading}</pre>
+<pre class="source">
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# get_branch_blocks(brdata)
+#
+# Group branches that belong to the same basic block.
+#
+# Returns: [block1, block2, ...]
+# block: [branch1, branch2, ...]
+# branch: [block_num, branch_num, taken_count, text_length, open, close]
+#
+
+sub get_branch_blocks($)
+{
+ my ($brdata) = @_;
+ my $last_block_num;
+ my $block = [];
+ my @blocks;
+ my $i;
+ my $num = br_ivec_len($brdata);
+
+ # Group branches
+ for ($i = 0; $i < $num; $i++) {
+ my ($block_num, $branch, $taken) = br_ivec_get($brdata, $i);
+ my $br;
+
+ if (defined($last_block_num) && $block_num != $last_block_num) {
+ push(@blocks, $block);
+ $block = [];
+ }
+ $br = [$block_num, $branch, $taken, 3, 0, 0];
+ push(@{$block}, $br);
+ $last_block_num = $block_num;
+ }
+ push(@blocks, $block) if (scalar(@{$block}) > 0);
+
+ # Add braces to first and last branch in group
+ foreach $block (@blocks) {
+ $block->[0]->[$BR_OPEN] = 1;
+ $block->[0]->[$BR_LEN]++;
+ $block->[scalar(@{$block}) - 1]->[$BR_CLOSE] = 1;
+ $block->[scalar(@{$block}) - 1]->[$BR_LEN]++;
+ }
+
+ return @blocks;
+}
+
+#
+# get_block_len(block)
+#
+# Calculate total text length of all branches in a block of branches.
+#
+
+sub get_block_len($)
+{
+ my ($block) = @_;
+ my $len = 0;
+ my $branch;
+
+ foreach $branch (@{$block}) {
+ $len += $branch->[$BR_LEN];
+ }
+
+ return $len;
+}
+
+
+#
+# get_branch_html(brdata)
+#
+# Return a list of HTML lines which represent the specified branch coverage
+# data in source code view.
+#
+
+sub get_branch_html($)
+{
+ my ($brdata) = @_;
+ my @blocks = get_branch_blocks($brdata);
+ my $block;
+ my $branch;
+ my $line_len = 0;
+ my $line = []; # [branch|" ", branch|" ", ...]
+ my @lines; # [line1, line2, ...]
+ my @result;
+
+ # Distribute blocks to lines
+ foreach $block (@blocks) {
+ my $block_len = get_block_len($block);
+
+ # Does this block fit into the current line?
+ if ($line_len + $block_len <= $br_field_width) {
+ # Add it
+ $line_len += $block_len;
+ push(@{$line}, @{$block});
+ next;
+ } elsif ($block_len <= $br_field_width) {
+ # It would fit if the line was empty - add it to new
+ # line
+ push(@lines, $line);
+ $line_len = $block_len;
+ $line = [ @{$block} ];
+ next;
+ }
+ # Split the block into several lines
+ foreach $branch (@{$block}) {
+ if ($line_len + $branch->[$BR_LEN] >= $br_field_width) {
+ # Start a new line
+ if (($line_len + 1 <= $br_field_width) &&
+ scalar(@{$line}) > 0 &&
+ !$line->[scalar(@$line) - 1]->[$BR_CLOSE]) {
+ # Try to align branch symbols to be in
+ # one row
+ push(@{$line}, " ");
+ }
+ push(@lines, $line);
+ $line_len = 0;
+ $line = [];
+ }
+ push(@{$line}, $branch);
+ $line_len += $branch->[$BR_LEN];
+ }
+ }
+ push(@lines, $line);
+
+ # Convert to HTML
+ foreach $line (@lines) {
+ my $current = "";
+ my $current_len = 0;
+
+ foreach $branch (@$line) {
+ # Skip alignment space
+ if ($branch eq " ") {
+ $current .= " ";
+ $current_len++;
+ next;
+ }
+
+ my ($block_num, $br_num, $taken, $len, $open, $close) =
+ @{$branch};
+ my $class;
+ my $title;
+ my $text;
+
+ if ($taken eq '-') {
+ $class = "branchNoExec";
+ $text = " # ";
+ $title = "Branch $br_num was not executed";
+ } elsif ($taken == 0) {
+ $class = "branchNoCov";
+ $text = " - ";
+ $title = "Branch $br_num was not taken";
+ } else {
+ $class = "branchCov";
+ $text = " + ";
+ $title = "Branch $br_num was taken $taken ".
+ "time";
+ $title .= "s" if ($taken > 1);
+ }
+ $current .= "[" if ($open);
+ $current .= "<span class=\"$class\" title=\"$title\">";
+ $current .= $text."</span>";
+ $current .= "]" if ($close);
+ $current_len += $len;
+ }
+
+ # Right-align result text
+ if ($current_len < $br_field_width) {
+ $current = (" "x($br_field_width - $current_len)).
+ $current;
+ }
+ push(@result, $current);
+ }
+
+ return @result;
+}
+
+
+#
+# format_count(count, width)
+#
+# Return a right-aligned representation of count that fits in width characters.
+#
+
+sub format_count($$)
+{
+ my ($count, $width) = @_;
+ my $result;
+ my $exp;
+
+ $result = sprintf("%*.0f", $width, $count);
+ while (length($result) > $width) {
+ last if ($count < 10);
+ $exp++;
+ $count = int($count/10);
+ $result = sprintf("%*s", $width, ">$count*10^$exp");
+ }
+ return $result;
+}
+
+#
+# write_source_line(filehandle, line_num, source, hit_count, converted,
+# brdata, add_anchor)
+#
+# Write a formatted source code line. Return the line in the format
+# needed by gen_png().
+#
+
+sub write_source_line(*$$$$$$)
+{
+ my ($handle, $line, $source, $count, $converted, $brdata,
+ $add_anchor) = @_;
+ my $source_format;
+ my $count_format;
+ my $result;
+ my $anchor_start = "";
+ my $anchor_end = "";
+ my $count_field_width = $line_field_width - 1;
+ my @br_html;
+ my $html;
+
+ # Get branch HTML data for this line
+ @br_html = get_branch_html($brdata) if ($br_coverage);
+
+ if (!defined($count)) {
+ $result = "";
+ $source_format = "";
+ $count_format = " "x$count_field_width;
+ }
+ elsif ($count == 0) {
+ $result = $count;
+ $source_format = '<span class="lineNoCov">';
+ $count_format = format_count($count, $count_field_width);
+ }
+ elsif ($converted && defined($highlight)) {
+ $result = "*".$count;
+ $source_format = '<span class="lineDiffCov">';
+ $count_format = format_count($count, $count_field_width);
+ }
+ else {
+ $result = $count;
+ $source_format = '<span class="lineCov">';
+ $count_format = format_count($count, $count_field_width);
+ }
+ $result .= ":".$source;
+
+ # Write out a line number navigation anchor every $nav_resolution
+ # lines if necessary
+ if ($add_anchor)
+ {
+ $anchor_start = "<a name=\"$_[1]\">";
+ $anchor_end = "</a>";
+ }
+
+
+ # *************************************************************
+
+ $html = $anchor_start;
+ $html .= "<span class=\"lineNum\">".sprintf("%8d", $line)." </span>";
+ $html .= shift(@br_html).":" if ($br_coverage);
+ $html .= "$source_format$count_format : ";
+ $html .= escape_html($source);
+ $html .= "</span>" if ($source_format);
+ $html .= $anchor_end."\n";
+
+ write_html($handle, $html);
+
+ if ($br_coverage) {
+ # Add lines for overlong branch information
+ foreach (@br_html) {
+ write_html($handle, "<span class=\"lineNum\">".
+ " </span>$_\n");
+ }
+ }
+ # *************************************************************
+
+ return($result);
+}
+
+
+#
+# write_source_epilog(filehandle)
+#
+# Write end of source code table.
+#
+
+sub write_source_epilog(*)
+{
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ </pre>
+ </td>
+ </tr>
+ </table>
+ <br>
+
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_html_epilog(filehandle, base_dir[, break_frames])
+#
+# Write HTML page footer to FILEHANDLE. BREAK_FRAMES should be set when
+# this page is embedded in a frameset; clicking the URL link will then
+# break out of this frameset.
+#
+
+sub write_html_epilog(*$;$)
+{
+ my $basedir = $_[1];
+ my $break_code = "";
+ my $epilog;
+
+ if (defined($_[2]))
+ {
+ $break_code = " target=\"_parent\"";
+ }
+
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <table width="100%" border=0 cellspacing=0 cellpadding=0>
+ <tr><td class="ruler"><img src="$_[1]glass.png" width=3 height=3 alt=""></td></tr>
+ <tr><td class="versionInfo">Generated by: <a href="$lcov_url"$break_code>$lcov_version</a></td></tr>
+ </table>
+ <br>
+END_OF_HTML
+ ;
+
+ $epilog = $html_epilog;
+ $epilog =~ s/\@basedir\@/$basedir/g;
+
+ write_html($_[0], $epilog);
+}
+
+
+#
+# write_frameset(filehandle, basedir, basename, pagetitle)
+#
+# Write an HTML frameset which splits the page into an overview frame
+# and a source code frame for BASENAME.
+#
+
+sub write_frameset(*$$$)
+{
+ my $frame_width = $overview_width + 40;
+
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN">
+
+ <html lang="en">
+
+ <head>
+ <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+ <title>$_[3]</title>
+ <link rel="stylesheet" type="text/css" href="$_[1]gcov.css">
+ </head>
+
+ <frameset cols="$frame_width,*">
+ <frame src="$_[2].gcov.overview.$html_ext" name="overview">
+ <frame src="$_[2].gcov.$html_ext" name="source">
+ <noframes>
+ <center>Frames not supported by your browser!<br></center>
+ </noframes>
+ </frameset>
+
+ </html>
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_overview_line(filehandle, basename, line, link)
+#
+# Write an image map entry which links one row of the overview image to
+# the corresponding line anchor in the source code view.
+#
+
+sub write_overview_line(*$$$)
+{
+ my $y1 = $_[2] - 1;
+ my $y2 = $y1 + $nav_resolution - 1;
+ my $x2 = $overview_width - 1;
+
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <area shape="rect" coords="0,$y1,$x2,$y2" href="$_[1].gcov.$html_ext#$_[3]" target="source" alt="overview">
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# write_overview(filehandle, basedir, basename, pagetitle, lines)
+#
+# Write an HTML page containing the clickable overview image and its
+# image map for a source file with LINES lines.
+#
+
+sub write_overview(*$$$$)
+{
+ my $index;
+ my $max_line = $_[4] - 1;
+ my $offset;
+
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+
+ <html lang="en">
+
+ <head>
+ <title>$_[3]</title>
+ <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+ <link rel="stylesheet" type="text/css" href="$_[1]gcov.css">
+ </head>
+
+ <body>
+ <map name="overview">
+END_OF_HTML
+ ;
+
+ # *************************************************************
+
+ # Make $offset the next higher multiple of $nav_resolution
+ $offset = ($nav_offset + $nav_resolution - 1) / $nav_resolution;
+ $offset = sprintf("%d", $offset ) * $nav_resolution;
+
+ # Create image map for overview image
+ for ($index = 1; $index <= $_[4]; $index += $nav_resolution)
+ {
+ # Enforce nav_offset
+ if ($index < $offset + 1)
+ {
+ write_overview_line($_[0], $_[2], $index, 1);
+ }
+ else
+ {
+ write_overview_line($_[0], $_[2], $index, $index - $offset);
+ }
+ }
+
+ # *************************************************************
+
+ write_html($_[0], <<END_OF_HTML)
+ </map>
+
+ <center>
+ <a href="$_[2].gcov.$html_ext#top" target="source">Top</a><br><br>
+ <img src="$_[2].gcov.png" width=$overview_width height=$max_line alt="Overview" border=0 usemap="#overview">
+ </center>
+ </body>
+ </html>
+END_OF_HTML
+ ;
+
+ # *************************************************************
+}
+
+
+#
+# format_rate(found, hit)
+#
+# Return formatted percent string for coverage rate.
+#
+
+sub format_rate($$)
+{
+ return $_[0] == 0 ? "-" : sprintf("%.1f", $_[1] * 100 / $_[0])." %";
+}
+
+
+sub max($$)
+{
+ my ($a, $b) = @_;
+
+ return $a if ($a > $b);
+ return $b;
+}
+
+
+#
+# write_header(filehandle, type, trunc_file_name, rel_file_name, lines_found,
+#              lines_hit, funcs_found, funcs_hit, br_found, br_hit, sort_type)
+#
+# Write a complete standard page header. TYPE may be (0, 1, 2, 3, 4)
+# corresponding to (directory view header, file view header, source view
+# header, test case description header, function view header)
+#
+
+sub write_header(*$$$$$$$$$$)
+{
+ local *HTML_HANDLE = $_[0];
+ my $type = $_[1];
+ my $trunc_name = $_[2];
+ my $rel_filename = $_[3];
+ my $lines_found = $_[4];
+ my $lines_hit = $_[5];
+ my $fn_found = $_[6];
+ my $fn_hit = $_[7];
+ my $br_found = $_[8];
+ my $br_hit = $_[9];
+ my $sort_type = $_[10];
+ my $base_dir;
+ my $view;
+ my $test;
+ my $base_name;
+ my $style;
+ my $rate;
+ my @row_left;
+ my @row_right;
+ my $num_rows;
+ my $i;
+
+ $base_name = basename($rel_filename);
+
+ # Prepare text for "current view" field
+ if ($type == $HDR_DIR)
+ {
+ # Main overview
+ $base_dir = "";
+ $view = $overview_title;
+ }
+ elsif ($type == $HDR_FILE)
+ {
+ # Directory overview
+ $base_dir = get_relative_base_path($rel_filename);
+ $view = "<a href=\"$base_dir"."index.$html_ext\">".
+ "$overview_title</a> - $trunc_name";
+ }
+ elsif ($type == $HDR_SOURCE || $type == $HDR_FUNC)
+ {
+ # File view
+ my $dir_name = dirname($rel_filename);
+
+ $base_dir = get_relative_base_path($dir_name);
+ if ($frames)
+ {
+ # Need to break frameset when clicking any of these
+ # links
+ $view = "<a href=\"$base_dir"."index.$html_ext\" ".
+ "target=\"_parent\">$overview_title</a> - ".
+ "<a href=\"index.$html_ext\" target=\"_parent\">".
+ "$dir_name</a> - $base_name";
+ }
+ else
+ {
+ $view = "<a href=\"$base_dir"."index.$html_ext\">".
+ "$overview_title</a> - ".
+ "<a href=\"index.$html_ext\">".
+ "$dir_name</a> - $base_name";
+ }
+
+ # Add function suffix
+ if ($func_coverage) {
+ $view .= "<span style=\"font-size: 80%;\">";
+ if ($type == $HDR_SOURCE) {
+ $view .= " (source / <a href=\"$base_name.func.$html_ext\">functions</a>)";
+ } elsif ($type == $HDR_FUNC) {
+ $view .= " (<a href=\"$base_name.gcov.$html_ext\">source</a> / functions)";
+ }
+ $view .= "</span>";
+ }
+ }
+ elsif ($type == $HDR_TESTDESC)
+ {
+ # Test description header
+ $base_dir = "";
+ $view = "<a href=\"$base_dir"."index.$html_ext\">".
+ "$overview_title</a> - test case descriptions";
+ }
+
+ # Prepare text for "test" field
+ $test = escape_html($test_title);
+
+ # Append link to test description page if available
+ if (%test_description && ($type != $HDR_TESTDESC))
+ {
+ if ($frames && ($type == $HDR_SOURCE || $type == $HDR_FUNC))
+ {
+ # Need to break frameset when clicking this link
+ $test .= " ( <span style=\"font-size:80%;\">".
+ "<a href=\"$base_dir".
+ "descriptions.$html_ext\" target=\"_parent\">".
+ "view descriptions</a></span> )";
+ }
+ else
+ {
+ $test .= " ( <span style=\"font-size:80%;\">".
+ "<a href=\"$base_dir".
+ "descriptions.$html_ext\">".
+ "view descriptions</a></span> )";
+ }
+ }
+
+ # Write header
+ write_header_prolog(*HTML_HANDLE, $base_dir);
+
+ # Left row
+ push(@row_left, [[ "10%", "headerItem", "Current view:" ],
+ [ "35%", "headerValue", $view ]]);
+ push(@row_left, [[undef, "headerItem", "Test:"],
+ [undef, "headerValue", $test]]);
+ push(@row_left, [[undef, "headerItem", "Date:"],
+ [undef, "headerValue", $date]]);
+
+ # Right row
+ if ($legend && ($type == $HDR_SOURCE || $type == $HDR_FUNC)) {
+ my $text = <<END_OF_HTML;
+ Lines:
+ <span class="coverLegendCov">hit</span>
+ <span class="coverLegendNoCov">not hit</span>
+END_OF_HTML
+ if ($br_coverage) {
+ $text .= <<END_OF_HTML;
+ | Branches:
+ <span class="coverLegendCov">+</span> taken
+ <span class="coverLegendNoCov">-</span> not taken
+ <span class="coverLegendNoCov">#</span> not executed
+END_OF_HTML
+ }
+ push(@row_left, [[undef, "headerItem", "Legend:"],
+ [undef, "headerValueLeg", $text]]);
+ } elsif ($legend && ($type != $HDR_TESTDESC)) {
+ my $text = <<END_OF_HTML;
+ Rating:
+ <span class="coverLegendCovLo" title="Coverage rates below $med_limit % are classified as low">low: &lt; $med_limit %</span>
+ <span class="coverLegendCovMed" title="Coverage rates between $med_limit % and $hi_limit % are classified as medium">medium: &gt;= $med_limit %</span>
+ <span class="coverLegendCovHi" title="Coverage rates of $hi_limit % and more are classified as high">high: &gt;= $hi_limit %</span>
+END_OF_HTML
+ push(@row_left, [[undef, "headerItem", "Legend:"],
+ [undef, "headerValueLeg", $text]]);
+ }
+ if ($type == $HDR_TESTDESC) {
+ push(@row_right, [[ "55%" ]]);
+ } else {
+ push(@row_right, [["15%", undef, undef ],
+ ["10%", "headerCovTableHead", "Hit" ],
+ ["10%", "headerCovTableHead", "Total" ],
+ ["15%", "headerCovTableHead", "Coverage"]]);
+ }
+ # Line coverage
+ $style = $rate_name[classify_rate($lines_found, $lines_hit,
+ $med_limit, $hi_limit)];
+ $rate = format_rate($lines_found, $lines_hit);
+ push(@row_right, [[undef, "headerItem", "Lines:"],
+ [undef, "headerCovTableEntry", $lines_hit],
+ [undef, "headerCovTableEntry", $lines_found],
+ [undef, "headerCovTableEntry$style", $rate]])
+ if ($type != $HDR_TESTDESC);
+ # Function coverage
+ if ($func_coverage) {
+ $style = $rate_name[classify_rate($fn_found, $fn_hit,
+ $fn_med_limit, $fn_hi_limit)];
+ $rate = format_rate($fn_found, $fn_hit);
+ push(@row_right, [[undef, "headerItem", "Functions:"],
+ [undef, "headerCovTableEntry", $fn_hit],
+ [undef, "headerCovTableEntry", $fn_found],
+ [undef, "headerCovTableEntry$style", $rate]])
+ if ($type != $HDR_TESTDESC);
+ }
+ # Branch coverage
+ if ($br_coverage) {
+ $style = $rate_name[classify_rate($br_found, $br_hit,
+ $br_med_limit, $br_hi_limit)];
+ $rate = format_rate($br_found, $br_hit);
+ push(@row_right, [[undef, "headerItem", "Branches:"],
+ [undef, "headerCovTableEntry", $br_hit],
+ [undef, "headerCovTableEntry", $br_found],
+ [undef, "headerCovTableEntry$style", $rate]])
+ if ($type != $HDR_TESTDESC);
+ }
+
+ # Print rows
+ $num_rows = max(scalar(@row_left), scalar(@row_right));
+ for ($i = 0; $i < $num_rows; $i++) {
+ my $left = $row_left[$i];
+ my $right = $row_right[$i];
+
+ if (!defined($left)) {
+ $left = [[undef, undef, undef], [undef, undef, undef]];
+ }
+ if (!defined($right)) {
+ $right = [];
+ }
+ write_header_line(*HTML_HANDLE, @{$left},
+ [ $i == 0 ? "5%" : undef, undef, undef],
+ @{$right});
+ }
+
+ # Fourth line
+ write_header_epilog(*HTML_HANDLE, $base_dir);
+}
+
+
+#
+# get_sorted_keys(hash_ref, sort_type)
+#
+
+sub get_sorted_keys($$)
+{
+ my ($hash, $type) = @_;
+
+ if ($type == $SORT_FILE) {
+ # Sort by name
+ return sort(keys(%{$hash}));
+ } elsif ($type == $SORT_LINE) {
+ # Sort by line coverage
+ return sort({$hash->{$a}[7] <=> $hash->{$b}[7]} keys(%{$hash}));
+ } elsif ($type == $SORT_FUNC) {
+ # Sort by function coverage
+ return sort({$hash->{$a}[8] <=> $hash->{$b}[8]} keys(%{$hash}));
+ } elsif ($type == $SORT_BRANCH) {
+ # Sort by branch coverage
+ return sort({$hash->{$a}[9] <=> $hash->{$b}[9]} keys(%{$hash}));
+ }
+}
+
+sub get_sort_code($$$)
+{
+ my ($link, $alt, $base) = @_;
+ my $png;
+ my $link_start;
+ my $link_end;
+
+ if (!defined($link)) {
+ $png = "glass.png";
+ $link_start = "";
+ $link_end = "";
+ } else {
+ $png = "updown.png";
+ $link_start = '<a href="'.$link.'">';
+ $link_end = "</a>";
+ }
+
+ return ' <span class="tableHeadSort">'.$link_start.
+ '<img src="'.$base.$png.'" width=10 height=14 '.
+ 'alt="'.$alt.'" title="'.$alt.'" border=0>'.$link_end.'</span>';
+}
+
+sub get_file_code($$$$)
+{
+ my ($type, $text, $sort_button, $base) = @_;
+ my $result = $text;
+ my $link;
+
+ if ($sort_button) {
+ if ($type == $HEAD_NO_DETAIL) {
+ $link = "index.$html_ext";
+ } else {
+ $link = "index-detail.$html_ext";
+ }
+ }
+ $result .= get_sort_code($link, "Sort by name", $base);
+
+ return $result;
+}
+
+sub get_line_code($$$$$)
+{
+ my ($type, $sort_type, $text, $sort_button, $base) = @_;
+ my $result = $text;
+ my $sort_link;
+
+ if ($type == $HEAD_NO_DETAIL) {
+ # Just text
+ if ($sort_button) {
+ $sort_link = "index-sort-l.$html_ext";
+ }
+ } elsif ($type == $HEAD_DETAIL_HIDDEN) {
+ # Text + link to detail view
+ $result .= ' ( <a class="detail" href="index-detail'.
+ $fileview_sortname[$sort_type].'.'.$html_ext.
+ '">show details</a> )';
+ if ($sort_button) {
+ $sort_link = "index-sort-l.$html_ext";
+ }
+ } else {
+ # Text + link to standard view
+ $result .= ' ( <a class="detail" href="index'.
+ $fileview_sortname[$sort_type].'.'.$html_ext.
+ '">hide details</a> )';
+ if ($sort_button) {
+ $sort_link = "index-detail-sort-l.$html_ext";
+ }
+ }
+ # Add sort button
+ $result .= get_sort_code($sort_link, "Sort by line coverage", $base);
+
+ return $result;
+}
+
+sub get_func_code($$$$)
+{
+ my ($type, $text, $sort_button, $base) = @_;
+ my $result = $text;
+ my $link;
+
+ if ($sort_button) {
+ if ($type == $HEAD_NO_DETAIL) {
+ $link = "index-sort-f.$html_ext";
+ } else {
+ $link = "index-detail-sort-f.$html_ext";
+ }
+ }
+ $result .= get_sort_code($link, "Sort by function coverage", $base);
+ return $result;
+}
+
+sub get_br_code($$$$)
+{
+ my ($type, $text, $sort_button, $base) = @_;
+ my $result = $text;
+ my $link;
+
+ if ($sort_button) {
+ if ($type == $HEAD_NO_DETAIL) {
+ $link = "index-sort-b.$html_ext";
+ } else {
+ $link = "index-detail-sort-b.$html_ext";
+ }
+ }
+ $result .= get_sort_code($link, "Sort by branch coverage", $base);
+ return $result;
+}
+
+#
+# write_file_table(filehandle, base_dir, overview, testhash, testfnchash,
+# testbrhash, fileview, sort_type)
+#
+# Write a complete file table. OVERVIEW is a reference to a hash containing
+# the following mapping:
+#
+#   filename -> [lines_found, lines_hit, funcs_found, funcs_hit,
+#                branches_found, branches_hit, page_link]
+#
+# TESTHASH is a reference to the following hash:
+#
+# filename -> \%testdata
+# %testdata: name of test affecting this file -> \%testcount
+# %testcount: line number -> execution count for a single test
+#
+# Heading of first column is "Filename" if FILEVIEW is true, "Directory"
+# otherwise.
+#
+
+sub write_file_table(*$$$$$$$)
+{
+ local *HTML_HANDLE = $_[0];
+ my $base_dir = $_[1];
+ my $overview = $_[2];
+ my $testhash = $_[3];
+ my $testfnchash = $_[4];
+ my $testbrhash = $_[5];
+ my $fileview = $_[6];
+ my $sort_type = $_[7];
+ my $filename;
+ my $bar_graph;
+ my $hit;
+ my $found;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+ my $page_link;
+ my $testname;
+ my $testdata;
+ my $testfncdata;
+ my $testbrdata;
+ my %affecting_tests;
+ my $line_code = "";
+ my $func_code;
+ my $br_code;
+ my $file_code;
+ my @head_columns;
+
+ # Determine HTML code for column headings
+ if (($base_dir ne "") && $show_details)
+ {
+ my $detailed = keys(%{$testhash});
+
+ $file_code = get_file_code($detailed ? $HEAD_DETAIL_HIDDEN :
+ $HEAD_NO_DETAIL,
+ $fileview ? "Filename" : "Directory",
+ $sort && $sort_type != $SORT_FILE,
+ $base_dir);
+ $line_code = get_line_code($detailed ? $HEAD_DETAIL_SHOWN :
+ $HEAD_DETAIL_HIDDEN,
+ $sort_type,
+ "Line Coverage",
+ $sort && $sort_type != $SORT_LINE,
+ $base_dir);
+ $func_code = get_func_code($detailed ? $HEAD_DETAIL_HIDDEN :
+ $HEAD_NO_DETAIL,
+ "Functions",
+ $sort && $sort_type != $SORT_FUNC,
+ $base_dir);
+ $br_code = get_br_code($detailed ? $HEAD_DETAIL_HIDDEN :
+ $HEAD_NO_DETAIL,
+ "Branches",
+ $sort && $sort_type != $SORT_BRANCH,
+ $base_dir);
+ } else {
+ $file_code = get_file_code($HEAD_NO_DETAIL,
+ $fileview ? "Filename" : "Directory",
+ $sort && $sort_type != $SORT_FILE,
+ $base_dir);
+ $line_code = get_line_code($HEAD_NO_DETAIL, $sort_type, "Line Coverage",
+ $sort && $sort_type != $SORT_LINE,
+ $base_dir);
+ $func_code = get_func_code($HEAD_NO_DETAIL, "Functions",
+ $sort && $sort_type != $SORT_FUNC,
+ $base_dir);
+ $br_code = get_br_code($HEAD_NO_DETAIL, "Branches",
+ $sort && $sort_type != $SORT_BRANCH,
+ $base_dir);
+ }
+ push(@head_columns, [ $line_code, 3 ]);
+ push(@head_columns, [ $func_code, 2]) if ($func_coverage);
+ push(@head_columns, [ $br_code, 2]) if ($br_coverage);
+
+ write_file_table_prolog(*HTML_HANDLE, $file_code, @head_columns);
+
+ foreach $filename (get_sorted_keys($overview, $sort_type))
+ {
+ my @columns;
+ ($found, $hit, $fn_found, $fn_hit, $br_found, $br_hit,
+ $page_link) = @{$overview->{$filename}};
+
+ # Line coverage
+ push(@columns, [$found, $hit, $med_limit, $hi_limit, 1]);
+ # Function coverage
+ if ($func_coverage) {
+ push(@columns, [$fn_found, $fn_hit, $fn_med_limit,
+ $fn_hi_limit, 0]);
+ }
+ # Branch coverage
+ if ($br_coverage) {
+ push(@columns, [$br_found, $br_hit, $br_med_limit,
+ $br_hi_limit, 0]);
+ }
+ write_file_table_entry(*HTML_HANDLE, $base_dir, $filename,
+ $page_link, @columns);
+
+ $testdata = $testhash->{$filename};
+ $testfncdata = $testfnchash->{$filename};
+ $testbrdata = $testbrhash->{$filename};
+
+ # Check whether we should write test specific coverage
+ # as well
+ if (!($show_details && $testdata)) { next; }
+
+ # Filter out those tests that actually affect this file
+ %affecting_tests = %{ get_affecting_tests($testdata,
+ $testfncdata, $testbrdata) };
+
+ # Does any of the tests affect this file at all?
+ if (!%affecting_tests) { next; }
+
+ foreach $testname (keys(%affecting_tests))
+ {
+ my @results;
+ ($found, $hit, $fn_found, $fn_hit, $br_found, $br_hit) =
+ split(",", $affecting_tests{$testname});
+
+			# Insert link to test description, if available
+ if ($test_description{$testname})
+ {
+ $testname = "<a href=\"$base_dir".
+ "descriptions.$html_ext#$testname\">".
+ "$testname</a>";
+ }
+
+ push(@results, [$found, $hit]);
+ push(@results, [$fn_found, $fn_hit]) if ($func_coverage);
+ push(@results, [$br_found, $br_hit]) if ($br_coverage);
+ write_file_table_detail_entry(*HTML_HANDLE, $testname,
+ @results);
+ }
+ }
+
+ write_file_table_epilog(*HTML_HANDLE);
+}
+
+
+#
+# get_found_and_hit(hash)
+#
+# Return the count for entries (found) and entries with an execution count
+# greater than zero (hit) in a hash (linenumber -> execution count) as
+# a list (found, hit)
+#
+
+sub get_found_and_hit($)
+{
+ my %hash = %{$_[0]};
+ my $found = 0;
+ my $hit = 0;
+
+	# Count all entries and those with a nonzero execution count
+ foreach (keys(%hash))
+ {
+ $found++;
+ if ($hash{$_}>0) { $hit++; }
+ }
+
+ return ($found, $hit);
+}
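The found/hit computation above can be restated compactly; a minimal Python sketch for illustration (not part of lcov):

```python
def get_found_and_hit(counts):
    """counts: dict mapping line number -> execution count.
    Returns (found, hit): all entries, and entries with count > 0."""
    found = len(counts)
    hit = sum(1 for c in counts.values() if c > 0)
    return found, hit

# Three instrumented lines, two of them executed.
print(get_found_and_hit({10: 4, 11: 0, 12: 1}))  # (3, 2)
```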
+
+
+#
+# get_func_found_and_hit(sumfnccount)
+#
+# Return (f_found, f_hit) for sumfnccount
+#
+
+sub get_func_found_and_hit($)
+{
+ my ($sumfnccount) = @_;
+ my $function;
+ my $fn_found;
+ my $fn_hit;
+
+ $fn_found = scalar(keys(%{$sumfnccount}));
+ $fn_hit = 0;
+ foreach $function (keys(%{$sumfnccount})) {
+ if ($sumfnccount->{$function} > 0) {
+ $fn_hit++;
+ }
+ }
+ return ($fn_found, $fn_hit);
+}
+
+
+#
+# br_taken_to_num(taken)
+#
+# Convert a branch taken value from .info format to number format.
+#
+
+sub br_taken_to_num($)
+{
+ my ($taken) = @_;
+
+ return 0 if ($taken eq '-');
+ return $taken + 1;
+}
+
+
+#
+# br_num_to_taken(taken)
+#
+# Convert a branch taken value in number format to .info format.
+#
+
+sub br_num_to_taken($)
+{
+ my ($taken) = @_;
+
+ return '-' if ($taken == 0);
+ return $taken - 1;
+}
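The two conversions above form an encoding pair: .info format uses '-' for "branch never evaluated" and a plain count otherwise, while the stored number shifts counts up by one so that 0 is free to represent '-'. A Python sketch of the round trip (illustration only):

```python
def br_taken_to_num(taken):
    # '-' (branch not evaluated) maps to 0; counts shift up by one.
    return 0 if taken == '-' else taken + 1

def br_num_to_taken(num):
    # Inverse mapping: 0 back to '-', everything else shifts down.
    return '-' if num == 0 else num - 1

# Every .info value survives a round trip through the number format.
for value in ['-', 0, 1, 42]:
    assert br_num_to_taken(br_taken_to_num(value)) == value
```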
+
+
+#
+# br_taken_add(taken1, taken2)
+#
+# Return the result of taken1 + taken2 for 'branch taken' values.
+#
+
+sub br_taken_add($$)
+{
+ my ($t1, $t2) = @_;
+
+ return $t1 if (!defined($t2));
+ return $t2 if (!defined($t1));
+ return $t1 if ($t2 eq '-');
+ return $t2 if ($t1 eq '-');
+ return $t1 + $t2;
+}
+
+
+#
+# br_taken_sub(taken1, taken2)
+#
+# Return the result of taken1 - taken2 for 'branch taken' values. Return 0
+# if the result would become negative.
+#
+
+sub br_taken_sub($$)
+{
+ my ($t1, $t2) = @_;
+
+ return $t1 if (!defined($t2));
+ return undef if (!defined($t1));
+ return $t1 if ($t1 eq '-');
+ return $t1 if ($t2 eq '-');
+ return 0 if $t2 > $t1;
+ return $t1 - $t2;
+}
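In both helpers above, '-' acts as a "no data" value that yields to the other operand on addition and leaves the minuend untouched on subtraction, and subtraction is clamped at zero. The same case analysis, sketched in Python (None stands in for Perl's undef; illustration only):

```python
def br_taken_add(t1, t2):
    if t2 is None: return t1
    if t1 is None: return t2
    if t2 == '-': return t1
    if t1 == '-': return t2
    return t1 + t2

def br_taken_sub(t1, t2):
    if t2 is None: return t1
    if t1 is None: return None
    if t1 == '-': return t1
    if t2 == '-': return t1
    return max(0, t1 - t2)  # never report a negative taken count

assert br_taken_add('-', 3) == 3
assert br_taken_sub(2, 5) == 0    # clamped at zero
assert br_taken_sub('-', 5) == '-'
```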
+
+
+#
+# br_ivec_len(vector)
+#
+# Return the number of entries in the branch coverage vector.
+#
+
+sub br_ivec_len($)
+{
+ my ($vec) = @_;
+
+ return 0 if (!defined($vec));
+ return (length($vec) * 8 / $BR_VEC_WIDTH) / $BR_VEC_ENTRIES;
+}
+
+
+#
+# br_ivec_get(vector, number)
+#
+# Return an entry from the branch coverage vector.
+#
+
+sub br_ivec_get($$)
+{
+ my ($vec, $num) = @_;
+ my $block;
+ my $branch;
+ my $taken;
+ my $offset = $num * $BR_VEC_ENTRIES;
+
+ # Retrieve data from vector
+ $block = vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH);
+ $branch = vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH);
+ $taken = vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH);
+
+ # Decode taken value from an integer
+ $taken = br_num_to_taken($taken);
+
+ return ($block, $branch, $taken);
+}
+
+
+#
+# br_ivec_push(vector, block, branch, taken)
+#
+# Add an entry to the branch coverage vector. If an entry with the same
+# branch ID already exists, add the corresponding taken values.
+#
+
+sub br_ivec_push($$$$)
+{
+ my ($vec, $block, $branch, $taken) = @_;
+ my $offset;
+ my $num = br_ivec_len($vec);
+ my $i;
+
+ $vec = "" if (!defined($vec));
+
+ # Check if branch already exists in vector
+ for ($i = 0; $i < $num; $i++) {
+ my ($v_block, $v_branch, $v_taken) = br_ivec_get($vec, $i);
+
+ next if ($v_block != $block || $v_branch != $branch);
+
+ # Add taken counts
+ $taken = br_taken_add($taken, $v_taken);
+ last;
+ }
+
+ $offset = $i * $BR_VEC_ENTRIES;
+ $taken = br_taken_to_num($taken);
+
+ # Add to vector
+ vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH) = $block;
+ vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH) = $branch;
+ vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH) = $taken;
+
+ return $vec;
+}
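br_ivec_push merges by branch identity: an entry is keyed by (block, branch), and pushing an existing key combines the taken values via br_taken_add. A simplified Python sketch that keeps decoded (block, branch, taken) tuples instead of the 32-bit packed vec() encoding (illustration only):

```python
def br_taken_add(t1, t2):
    # Simplified: '-' yields to the other operand, counts add.
    if t1 == '-': return t2
    if t2 == '-': return t1
    return t1 + t2

def ivec_push(vec, block, branch, taken):
    """vec: list of (block, branch, taken) entries. If an entry with the
    same (block, branch) exists, combine its taken value; else append."""
    for i, (b, br, t) in enumerate(vec):
        if (b, br) == (block, branch):
            vec[i] = (b, br, br_taken_add(t, taken))
            return vec
    vec.append((block, branch, taken))
    return vec

v = []
ivec_push(v, 0, 0, '-')   # branch seen, never taken
ivec_push(v, 0, 0, 2)     # same branch, taken twice -> merged
ivec_push(v, 0, 1, '-')   # second branch of the block
print(v)                  # [(0, 0, 2), (0, 1, '-')]
```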
+
+
+#
+# get_br_found_and_hit(sumbrcount)
+#
+# Return (br_found, br_hit) for sumbrcount
+#
+
+sub get_br_found_and_hit($)
+{
+ my ($sumbrcount) = @_;
+ my $line;
+ my $br_found = 0;
+ my $br_hit = 0;
+
+ foreach $line (keys(%{$sumbrcount})) {
+ my $brdata = $sumbrcount->{$line};
+ my $i;
+ my $num = br_ivec_len($brdata);
+
+ for ($i = 0; $i < $num; $i++) {
+ my $taken;
+
+ (undef, undef, $taken) = br_ivec_get($brdata, $i);
+
+ $br_found++;
+ $br_hit++ if ($taken ne "-" && $taken > 0);
+ }
+ }
+
+ return ($br_found, $br_hit);
+}
+
+
+#
+# get_affecting_tests(testdata, testfncdata, testbrdata)
+#
+# TESTDATA maps test name -> (line number -> execution count). Return a
+# reference to a hash mapping each test name with a nonzero line hit count
+# to the string "found,hit,fn_found,fn_hit,br_found,br_hit" for that test.
+#
+
+sub get_affecting_tests($$$)
+{
+ my ($testdata, $testfncdata, $testbrdata) = @_;
+ my $testname;
+ my $testcount;
+ my $testfnccount;
+ my $testbrcount;
+ my %result;
+ my $found;
+ my $hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+
+ foreach $testname (keys(%{$testdata}))
+ {
+ # Get (line number -> count) hash for this test case
+ $testcount = $testdata->{$testname};
+ $testfnccount = $testfncdata->{$testname};
+ $testbrcount = $testbrdata->{$testname};
+
+ # Calculate sum
+ ($found, $hit) = get_found_and_hit($testcount);
+ ($fn_found, $fn_hit) = get_func_found_and_hit($testfnccount);
+ ($br_found, $br_hit) = get_br_found_and_hit($testbrcount);
+
+ if ($hit>0)
+ {
+ $result{$testname} = "$found,$hit,$fn_found,$fn_hit,".
+ "$br_found,$br_hit";
+ }
+ }
+
+ return(\%result);
+}
+
+
+sub get_hash_reverse($)
+{
+ my ($hash) = @_;
+ my %result;
+
+ foreach (keys(%{$hash})) {
+ $result{$hash->{$_}} = $_;
+ }
+
+ return \%result;
+}
+
+#
+# write_source(filehandle, source_filename, count_data, checksum_data,
+# converted_data, func_data, sumbrcount)
+#
+# Write an HTML view of a source code file. Returns a list containing
+# data as needed by gen_png().
+#
+# Die on error.
+#
+
+sub write_source($$$$$$$)
+{
+ local *HTML_HANDLE = $_[0];
+ local *SOURCE_HANDLE;
+ my $source_filename = $_[1];
+ my %count_data;
+ my $line_number;
+ my @result;
+ my $checkdata = $_[3];
+ my $converted = $_[4];
+ my $funcdata = $_[5];
+ my $sumbrcount = $_[6];
+ my $datafunc = get_hash_reverse($funcdata);
+ my $add_anchor;
+
+ if ($_[2])
+ {
+ %count_data = %{$_[2]};
+ }
+
+ open(SOURCE_HANDLE, "<".$source_filename)
+ or die("ERROR: cannot open $source_filename for reading!\n");
+
+ write_source_prolog(*HTML_HANDLE);
+
+ for ($line_number = 1; <SOURCE_HANDLE> ; $line_number++)
+ {
+ chomp($_);
+
+ # Also remove CR from line-end
+ s/\015$//;
+
+ # Source code matches coverage data?
+ if (defined($checkdata->{$line_number}) &&
+ ($checkdata->{$line_number} ne md5_base64($_)))
+ {
+ die("ERROR: checksum mismatch at $source_filename:".
+ "$line_number\n");
+ }
+
+ $add_anchor = 0;
+ if ($frames) {
+ if (($line_number - 1) % $nav_resolution == 0) {
+ $add_anchor = 1;
+ }
+ }
+ if ($func_coverage) {
+ if ($line_number == 1) {
+ $add_anchor = 1;
+ } elsif (defined($datafunc->{$line_number +
+ $func_offset})) {
+ $add_anchor = 1;
+ }
+ }
+ push (@result,
+			write_source_line(*HTML_HANDLE, $line_number,
+ $_, $count_data{$line_number},
+ $converted->{$line_number},
+ $sumbrcount->{$line_number}, $add_anchor));
+ }
+
+ close(SOURCE_HANDLE);
+ write_source_epilog(*HTML_HANDLE);
+ return(@result);
+}
+
+
+sub funcview_get_func_code($$$)
+{
+ my ($name, $base, $type) = @_;
+ my $result;
+ my $link;
+
+ if ($sort && $type == 1) {
+ $link = "$name.func.$html_ext";
+ }
+ $result = "Function Name";
+ $result .= get_sort_code($link, "Sort by function name", $base);
+
+ return $result;
+}
+
+sub funcview_get_count_code($$$)
+{
+ my ($name, $base, $type) = @_;
+ my $result;
+ my $link;
+
+ if ($sort && $type == 0) {
+ $link = "$name.func-sort-c.$html_ext";
+ }
+ $result = "Hit count";
+ $result .= get_sort_code($link, "Sort by hit count", $base);
+
+ return $result;
+}
+
+#
+# funcview_get_sorted(funcdata, sumfncdata, sort_type)
+#
+# Depending on the value of sort_type, return a list of functions sorted
+# by name (type 0) or by the associated call count (type 1).
+#
+
+sub funcview_get_sorted($$$)
+{
+ my ($funcdata, $sumfncdata, $type) = @_;
+
+ if ($type == 0) {
+ return sort(keys(%{$funcdata}));
+ }
+ return sort({$sumfncdata->{$b} <=> $sumfncdata->{$a}}
+ keys(%{$sumfncdata}));
+}
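The same two orderings, sketched in Python for illustration: type 0 sorts function names lexically, type 1 sorts by descending hit count (mirroring the Perl `$b <=> $a` comparison):

```python
def funcview_get_sorted(funcdata, sumfncdata, sort_type):
    if sort_type == 0:
        return sorted(funcdata)  # by function name
    # by call count, highest first
    return sorted(sumfncdata, key=lambda f: sumfncdata[f], reverse=True)

counts = {"main": 1, "init": 5, "helper": 0}
assert funcview_get_sorted(counts, counts, 0) == ["helper", "init", "main"]
assert funcview_get_sorted(counts, counts, 1) == ["init", "main", "helper"]
```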
+
+#
+# write_function_table(filehandle, source_file, sumcount, funcdata,
+# sumfnccount, testfncdata, sumbrcount, testbrdata,
+# base_name, base_dir, sort_type)
+#
+# Write an HTML table listing all functions in a source file, including
+# function call counts and line coverage inside each function.
+#
+# Die on error.
+#
+
+sub write_function_table(*$$$$$$$$$$)
+{
+ local *HTML_HANDLE = $_[0];
+ my $source = $_[1];
+ my $sumcount = $_[2];
+ my $funcdata = $_[3];
+ my $sumfncdata = $_[4];
+ my $testfncdata = $_[5];
+ my $sumbrcount = $_[6];
+ my $testbrdata = $_[7];
+ my $name = $_[8];
+ my $base = $_[9];
+ my $type = $_[10];
+ my $func;
+ my $func_code;
+ my $count_code;
+
+ # Get HTML code for headings
+ $func_code = funcview_get_func_code($name, $base, $type);
+ $count_code = funcview_get_count_code($name, $base, $type);
+ write_html(*HTML_HANDLE, <<END_OF_HTML)
+ <center>
+ <table width="60%" cellpadding=1 cellspacing=1 border=0>
+ <tr><td><br></td></tr>
+ <tr>
+ <td width="80%" class="tableHead">$func_code</td>
+ <td width="20%" class="tableHead">$count_code</td>
+ </tr>
+END_OF_HTML
+ ;
+
+ # Get a sorted table
+ foreach $func (funcview_get_sorted($funcdata, $sumfncdata, $type)) {
+ if (!defined($funcdata->{$func}))
+ {
+ next;
+ }
+
+ my $startline = $funcdata->{$func} - $func_offset;
+ my $name = $func;
+ my $count = $sumfncdata->{$name};
+ my $countstyle;
+
+ # Demangle C++ function names if requested
+ if ($demangle_cpp) {
+ $name = `c++filt "$name"`;
+ chomp($name);
+ }
+ # Escape any remaining special characters
+ $name = escape_html($name);
+ if ($startline < 1) {
+ $startline = 1;
+ }
+ if ($count == 0) {
+ $countstyle = "coverFnLo";
+ } else {
+ $countstyle = "coverFnHi";
+ }
+
+ write_html(*HTML_HANDLE, <<END_OF_HTML)
+ <tr>
+ <td class="coverFn"><a href="$source#$startline">$name</a></td>
+ <td class="$countstyle">$count</td>
+ </tr>
+END_OF_HTML
+ ;
+ }
+ write_html(*HTML_HANDLE, <<END_OF_HTML)
+ </table>
+ <br>
+ </center>
+END_OF_HTML
+ ;
+}
+
+
+#
+# info(printf_parameter)
+#
+# Use printf to write PRINTF_PARAMETER to stdout only when the $quiet flag
+# is not set.
+#
+
+sub info(@)
+{
+ if (!$quiet)
+ {
+ # Print info string
+ printf(@_);
+ }
+}
+
+
+#
+# subtract_counts(data_ref, base_ref)
+#
+
+sub subtract_counts($$)
+{
+ my %data = %{$_[0]};
+ my %base = %{$_[1]};
+ my $line;
+ my $data_count;
+ my $base_count;
+ my $hit = 0;
+ my $found = 0;
+
+ foreach $line (keys(%data))
+ {
+ $found++;
+ $data_count = $data{$line};
+ $base_count = $base{$line};
+
+ if (defined($base_count))
+ {
+ $data_count -= $base_count;
+
+ # Make sure we don't get negative numbers
+ if ($data_count<0) { $data_count = 0; }
+ }
+
+ $data{$line} = $data_count;
+ if ($data_count > 0) { $hit++; }
+ }
+
+ return (\%data, $found, $hit);
+}
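A Python sketch of the baseline subtraction above (illustration only): per-line baseline counts are subtracted, results are clamped at zero, and the found/hit totals are recomputed from the adjusted data:

```python
def subtract_counts(data, base):
    """data, base: dicts of line -> execution count.
    Returns (adjusted data, found, hit); counts never go negative."""
    result = {}
    hit = 0
    for line, count in data.items():
        count = max(0, count - base.get(line, 0))
        result[line] = count
        if count > 0:
            hit += 1
    return result, len(result), hit

adjusted, found, hit = subtract_counts({1: 5, 2: 3, 3: 0}, {1: 5, 2: 1})
assert (adjusted, found, hit) == ({1: 0, 2: 2, 3: 0}, 3, 1)
```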
+
+
+#
+# subtract_fnccounts(data, base)
+#
+# Subtract function call counts found in base from those in data.
+# Return (data, f_found, f_hit).
+#
+
+sub subtract_fnccounts($$)
+{
+ my %data;
+ my %base;
+ my $func;
+ my $data_count;
+ my $base_count;
+ my $fn_hit = 0;
+ my $fn_found = 0;
+
+ %data = %{$_[0]} if (defined($_[0]));
+ %base = %{$_[1]} if (defined($_[1]));
+ foreach $func (keys(%data)) {
+ $fn_found++;
+ $data_count = $data{$func};
+ $base_count = $base{$func};
+
+ if (defined($base_count)) {
+ $data_count -= $base_count;
+
+ # Make sure we don't get negative numbers
+ if ($data_count < 0) {
+ $data_count = 0;
+ }
+ }
+
+ $data{$func} = $data_count;
+ if ($data_count > 0) {
+ $fn_hit++;
+ }
+ }
+
+ return (\%data, $fn_found, $fn_hit);
+}
+
+
+#
+# apply_baseline(data_ref, baseline_ref)
+#
+# Subtract the execution counts found in the baseline hash referenced by
+# BASELINE_REF from actual data in DATA_REF.
+#
+
+sub apply_baseline($$)
+{
+ my %data_hash = %{$_[0]};
+ my %base_hash = %{$_[1]};
+ my $filename;
+ my $testname;
+ my $data;
+ my $data_testdata;
+ my $data_funcdata;
+ my $data_checkdata;
+ my $data_testfncdata;
+ my $data_testbrdata;
+ my $data_count;
+ my $data_testfnccount;
+ my $data_testbrcount;
+ my $base;
+ my $base_checkdata;
+ my $base_sumfnccount;
+ my $base_sumbrcount;
+ my $base_count;
+ my $sumcount;
+ my $sumfnccount;
+ my $sumbrcount;
+ my $found;
+ my $hit;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+
+ foreach $filename (keys(%data_hash))
+ {
+ # Get data set for data and baseline
+ $data = $data_hash{$filename};
+ $base = $base_hash{$filename};
+
+ # Skip data entries for which no base entry exists
+ if (!defined($base))
+ {
+ next;
+ }
+
+ # Get set entries for data and baseline
+ ($data_testdata, undef, $data_funcdata, $data_checkdata,
+ $data_testfncdata, undef, $data_testbrdata) =
+ get_info_entry($data);
+ (undef, $base_count, undef, $base_checkdata, undef,
+ $base_sumfnccount, undef, $base_sumbrcount) =
+ get_info_entry($base);
+
+ # Check for compatible checksums
+ merge_checksums($data_checkdata, $base_checkdata, $filename);
+
+ # sumcount has to be calculated anew
+ $sumcount = {};
+ $sumfnccount = {};
+ $sumbrcount = {};
+
+ # For each test case, subtract test specific counts
+ foreach $testname (keys(%{$data_testdata}))
+ {
+ # Get counts of both data and baseline
+ $data_count = $data_testdata->{$testname};
+ $data_testfnccount = $data_testfncdata->{$testname};
+ $data_testbrcount = $data_testbrdata->{$testname};
+
+ ($data_count, undef, $hit) =
+ subtract_counts($data_count, $base_count);
+ ($data_testfnccount) =
+ subtract_fnccounts($data_testfnccount,
+ $base_sumfnccount);
+ ($data_testbrcount) =
+ combine_brcount($data_testbrcount,
+ $base_sumbrcount, $BR_SUB);
+
+
+ # Check whether this test case did hit any line at all
+ if ($hit > 0)
+ {
+ # Write back resulting hash
+ $data_testdata->{$testname} = $data_count;
+ $data_testfncdata->{$testname} =
+ $data_testfnccount;
+ $data_testbrdata->{$testname} =
+ $data_testbrcount;
+ }
+ else
+ {
+ # Delete test case which did not impact this
+ # file
+ delete($data_testdata->{$testname});
+ delete($data_testfncdata->{$testname});
+ delete($data_testbrdata->{$testname});
+ }
+
+ # Add counts to sum of counts
+ ($sumcount, $found, $hit) =
+ add_counts($sumcount, $data_count);
+ ($sumfnccount, $fn_found, $fn_hit) =
+ add_fnccount($sumfnccount, $data_testfnccount);
+ ($sumbrcount, $br_found, $br_hit) =
+ combine_brcount($sumbrcount, $data_testbrcount,
+ $BR_ADD);
+ }
+
+ # Write back resulting entry
+ set_info_entry($data, $data_testdata, $sumcount, $data_funcdata,
+ $data_checkdata, $data_testfncdata, $sumfnccount,
+ $data_testbrdata, $sumbrcount, $found, $hit,
+ $fn_found, $fn_hit, $br_found, $br_hit);
+
+ $data_hash{$filename} = $data;
+ }
+
+ return (\%data_hash);
+}
+
+
+#
+# remove_unused_descriptions()
+#
+# Removes all test descriptions from the global hash %test_description which
+# are not present in %info_data.
+#
+
+sub remove_unused_descriptions()
+{
+ my $filename; # The current filename
+ my %test_list; # Hash containing found test names
+ my $test_data; # Reference to hash test_name -> count_data
+ my $before; # Initial number of descriptions
+ my $after; # Remaining number of descriptions
+
+ $before = scalar(keys(%test_description));
+
+ foreach $filename (keys(%info_data))
+ {
+ ($test_data) = get_info_entry($info_data{$filename});
+ foreach (keys(%{$test_data}))
+ {
+ $test_list{$_} = "";
+ }
+ }
+
+ # Remove descriptions for tests which are not in our list
+ foreach (keys(%test_description))
+ {
+ if (!defined($test_list{$_}))
+ {
+ delete($test_description{$_});
+ }
+ }
+
+ $after = scalar(keys(%test_description));
+ if ($after < $before)
+ {
+ info("Removed ".($before - $after).
+ " unused descriptions, $after remaining.\n");
+ }
+}
+
+
+#
+# apply_prefix(filename, prefix)
+#
+# If FILENAME begins with PREFIX, remove PREFIX from FILENAME and return
+# the resulting string; otherwise return FILENAME.
+#
+
+sub apply_prefix($$)
+{
+ my $filename = $_[0];
+ my $prefix = $_[1];
+
+ if (defined($prefix) && ($prefix ne ""))
+ {
+ if ($filename =~ /^\Q$prefix\E\/(.*)$/)
+ {
+ return substr($filename, length($prefix) + 1);
+ }
+ }
+
+ return $filename;
+}
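An equivalent Python sketch (illustration only): the prefix only matches when it is followed by a path separator, so a filename that merely shares a leading substring is returned unchanged:

```python
def apply_prefix(filename, prefix):
    """Strip 'prefix/' from the start of filename, if present."""
    if prefix and filename.startswith(prefix + "/"):
        return filename[len(prefix) + 1:]
    return filename

assert apply_prefix("/usr/src/pkg/main.c", "/usr/src") == "pkg/main.c"
assert apply_prefix("/other/main.c", "/usr/src") == "/other/main.c"
```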
+
+
+#
+# system_no_output(mode, parameters)
+#
+# Call an external program with PARAMETERS while suppressing output,
+# depending on the value of MODE:
+#
+# MODE & 1: suppress STDOUT
+# MODE & 2: suppress STDERR
+#
+# Return 0 on success, non-zero otherwise.
+#
+
+sub system_no_output($@)
+{
+ my $mode = shift;
+ my $result;
+ local *OLD_STDERR;
+ local *OLD_STDOUT;
+
+ # Save old stdout and stderr handles
+ ($mode & 1) && open(OLD_STDOUT, ">>&STDOUT");
+ ($mode & 2) && open(OLD_STDERR, ">>&STDERR");
+
+ # Redirect to /dev/null
+ ($mode & 1) && open(STDOUT, ">/dev/null");
+ ($mode & 2) && open(STDERR, ">/dev/null");
+
+ system(@_);
+ $result = $?;
+
+ # Close redirected handles
+ ($mode & 1) && close(STDOUT);
+ ($mode & 2) && close(STDERR);
+
+ # Restore old handles
+ ($mode & 1) && open(STDOUT, ">>&OLD_STDOUT");
+ ($mode & 2) && open(STDERR, ">>&OLD_STDERR");
+
+ return $result;
+}
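The same mode bitmask can be expressed in Python with subprocess, which handles the redirection without the save/restore dance (illustration only, not part of lcov):

```python
import subprocess
import sys

def system_no_output(mode, *args):
    """Run a command, suppressing stdout if mode & 1 and stderr if
    mode & 2. Returns the exit status (0 on success)."""
    devnull = subprocess.DEVNULL
    return subprocess.call(
        args,
        stdout=devnull if mode & 1 else None,
        stderr=devnull if mode & 2 else None,
    )

# Suppress both streams; the child prints nothing visible and exits 0.
status = system_no_output(3, sys.executable, "-c", "print('hidden')")
assert status == 0
```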
+
+
+#
+# read_config(filename)
+#
+# Read configuration file FILENAME and return a reference to a hash containing
+# all valid key=value pairs found.
+#
+
+sub read_config($)
+{
+ my $filename = $_[0];
+ my %result;
+ my $key;
+ my $value;
+ local *HANDLE;
+
+ if (!open(HANDLE, "<$filename"))
+ {
+ warn("WARNING: cannot read configuration file $filename\n");
+ return undef;
+ }
+ while (<HANDLE>)
+ {
+ chomp;
+ # Skip comments
+ s/#.*//;
+ # Remove leading blanks
+ s/^\s+//;
+ # Remove trailing blanks
+ s/\s+$//;
+ next unless length;
+ ($key, $value) = split(/\s*=\s*/, $_, 2);
+ if (defined($key) && defined($value))
+ {
+ $result{$key} = $value;
+ }
+ else
+ {
+ warn("WARNING: malformed statement in line $. ".
+ "of configuration file $filename\n");
+ }
+ }
+ close(HANDLE);
+ return \%result;
+}
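The configuration format the reader above accepts is `key = value`, one pair per line, with `#` comments, blank lines, and surrounding whitespace ignored. A Python sketch of the same parse, taking an iterable of lines and returning a dict instead of a hash ref (illustration only):

```python
def read_config(lines):
    """Parse key = value pairs, skipping comments and blank lines."""
    result = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments, trim blanks
        if not line:
            continue
        key, sep, value = (part.strip() for part in line.partition("="))
        if key and sep:
            result[key] = value
    return result

cfg = read_config([
    "# lcovrc excerpt",
    "genhtml_branch_coverage = 1",
    "  geninfo_gcov_tool = gcov  # tool name",
])
assert cfg == {"genhtml_branch_coverage": "1", "geninfo_gcov_tool": "gcov"}
```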
+
+
+#
+# apply_config(REF)
+#
+# REF is a reference to a hash containing the following mapping:
+#
+# key_string => var_ref
+#
+# where KEY_STRING is a keyword and VAR_REF is a reference to an associated
+# variable. If the global configuration hash CONFIG contains a value for
+# keyword KEY_STRING, VAR_REF will be assigned the value for that keyword.
+#
+
+sub apply_config($)
+{
+ my $ref = $_[0];
+
+ foreach (keys(%{$ref}))
+ {
+ if (defined($config->{$_}))
+ {
+ ${$ref->{$_}} = $config->{$_};
+ }
+ }
+}
+
+
+#
+# get_html_prolog(FILENAME)
+#
+# If FILENAME is defined, return contents of file. Otherwise return default
+# HTML prolog. Die on error.
+#
+
+sub get_html_prolog($)
+{
+ my $filename = $_[0];
+ my $result = "";
+
+ if (defined($filename))
+ {
+ local *HANDLE;
+
+ open(HANDLE, "<".$filename)
+ or die("ERROR: cannot open html prolog $filename!\n");
+ while (<HANDLE>)
+ {
+ $result .= $_;
+ }
+ close(HANDLE);
+ }
+ else
+ {
+ $result = <<END_OF_HTML
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+
+<html lang="en">
+
+<head>
+ <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
+ <title>\@pagetitle\@</title>
+ <link rel="stylesheet" type="text/css" href="\@basedir\@gcov.css">
+</head>
+
+<body>
+
+END_OF_HTML
+ ;
+ }
+
+ return $result;
+}
+
+
+#
+# get_html_epilog(FILENAME)
+#
+# If FILENAME is defined, return contents of file. Otherwise return default
+# HTML epilog. Die on error.
+#
+sub get_html_epilog($)
+{
+ my $filename = $_[0];
+ my $result = "";
+
+ if (defined($filename))
+ {
+ local *HANDLE;
+
+ open(HANDLE, "<".$filename)
+ or die("ERROR: cannot open html epilog $filename!\n");
+ while (<HANDLE>)
+ {
+ $result .= $_;
+ }
+ close(HANDLE);
+ }
+ else
+ {
+ $result = <<END_OF_HTML
+
+</body>
+</html>
+END_OF_HTML
+ ;
+ }
+
+ return $result;
+
+}
+
+sub warn_handler($)
+{
+ my ($msg) = @_;
+
+ warn("$tool_name: $msg");
+}
+
+sub die_handler($)
+{
+ my ($msg) = @_;
+
+ die("$tool_name: $msg");
+}
diff --git a/chromium/third_party/lcov-1.9/bin/geninfo b/chromium/third_party/lcov-1.9/bin/geninfo
new file mode 100755
index 00000000000..dcb1a678186
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/geninfo
@@ -0,0 +1,3068 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002,2010
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# geninfo
+#
+# This script generates .info files from data files as created by code
+# instrumented with gcc's built-in profiling mechanism. Call it with
+# --help and refer to the geninfo man page to get information on usage
+# and available options.
+#
+#
+# Authors:
+# 2002-08-23 created by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+# IBM Lab Boeblingen
+# based on code by Manoj Iyer <manjo@mail.utexas.edu> and
+# Megan Bock <mbock@us.ibm.com>
+# IBM Austin
+# 2002-09-05 / Peter Oberparleiter: implemented option that allows file list
+# 2003-04-16 / Peter Oberparleiter: modified read_gcov so that it can also
+# parse the new gcov format which is to be introduced in gcc 3.3
+# 2003-04-30 / Peter Oberparleiter: made info write to STDERR, not STDOUT
+# 2003-07-03 / Peter Oberparleiter: added line checksum support, added
+# --no-checksum
+# 2003-09-18 / Nigel Hinds: capture branch coverage data from GCOV
+# 2003-12-11 / Laurent Deniel: added --follow option
+# workaround gcov (<= 3.2.x) bug with empty .da files
+# 2004-01-03 / Laurent Deniel: Ignore empty .bb files
+# 2004-02-16 / Andreas Krebbel: Added support for .gcno/.gcda files and
+# gcov versioning
+# 2004-08-09 / Peter Oberparleiter: added configuration file support
+# 2008-07-14 / Tom Zoerner: added --function-coverage command line option
+# 2008-08-13 / Peter Oberparleiter: modified function coverage
+# implementation (now enabled per default)
+#
+
+use strict;
+use File::Basename;
+use File::Spec::Functions qw /abs2rel catdir file_name_is_absolute splitdir
+ splitpath/;
+use Getopt::Long;
+use Digest::MD5 qw(md5_base64);
+
+
+# Constants
+our $lcov_version = 'LCOV version 1.9';
+our $lcov_url = "http://ltp.sourceforge.net/coverage/lcov.php";
+our $gcov_tool = "gcov";
+our $tool_name = basename($0);
+
+our $GCOV_VERSION_3_4_0 = 0x30400;
+our $GCOV_VERSION_3_3_0 = 0x30300;
+our $GCNO_FUNCTION_TAG = 0x01000000;
+our $GCNO_LINES_TAG = 0x01450000;
+our $GCNO_FILE_MAGIC = 0x67636e6f;
+our $BBG_FILE_MAGIC = 0x67626267;
+
+our $COMPAT_HAMMER = "hammer";
+
+our $ERROR_GCOV = 0;
+our $ERROR_SOURCE = 1;
+our $ERROR_GRAPH = 2;
+
+our $EXCL_START = "LCOV_EXCL_START";
+our $EXCL_STOP = "LCOV_EXCL_STOP";
+our $EXCL_LINE = "LCOV_EXCL_LINE";
+
+our $BR_LINE = 0;
+our $BR_BLOCK = 1;
+our $BR_BRANCH = 2;
+our $BR_TAKEN = 3;
+our $BR_VEC_ENTRIES = 4;
+our $BR_VEC_WIDTH = 32;
+
+our $UNNAMED_BLOCK = 9999;
+
+# Prototypes
+sub print_usage(*);
+sub gen_info($);
+sub process_dafile($$);
+sub match_filename($@);
+sub solve_ambiguous_match($$$);
+sub split_filename($);
+sub solve_relative_path($$);
+sub read_gcov_header($);
+sub read_gcov_file($);
+sub info(@);
+sub get_gcov_version();
+sub system_no_output($@);
+sub read_config($);
+sub apply_config($);
+sub get_exclusion_data($);
+sub apply_exclusion_data($$);
+sub process_graphfile($$);
+sub filter_fn_name($);
+sub warn_handler($);
+sub die_handler($);
+sub graph_error($$);
+sub graph_expect($);
+sub graph_read(*$;$);
+sub graph_skip(*$;$);
+sub sort_uniq(@);
+sub sort_uniq_lex(@);
+sub graph_cleanup($);
+sub graph_find_base($);
+sub graph_from_bb($$$);
+sub graph_add_order($$$);
+sub read_bb_word(*;$);
+sub read_bb_value(*;$);
+sub read_bb_string(*$);
+sub read_bb($$);
+sub read_bbg_word(*;$);
+sub read_bbg_value(*;$);
+sub read_bbg_string(*);
+sub read_bbg_lines_record(*$$$$$$);
+sub read_bbg($$);
+sub read_gcno_word(*;$);
+sub read_gcno_value(*$;$);
+sub read_gcno_string(*$);
+sub read_gcno_lines_record(*$$$$$$$);
+sub read_gcno_function_record(*$$$$);
+sub read_gcno($$);
+sub get_gcov_capabilities();
+sub get_overall_line($$$$);
+sub print_overall_rate($$$$$$$$$);
+sub br_gvec_len($);
+sub br_gvec_get($$);
+sub debug($);
+sub int_handler();
+
+
+# Global variables
+our $gcov_version;
+our $graph_file_extension;
+our $data_file_extension;
+our @data_directory;
+our $test_name = "";
+our $quiet;
+our $help;
+our $output_filename;
+our $base_directory;
+our $version;
+our $follow;
+our $checksum;
+our $no_checksum;
+our $compat_libtool;
+our $no_compat_libtool;
+our $adjust_testname;
+our $config; # Configuration file contents
+our $compatibility; # Compatibility version flag - used to indicate
+ # non-standard GCOV data format versions
+our @ignore_errors; # List of errors to ignore (parameter)
+our @ignore; # List of errors to ignore (array)
+our $initial;
+our $no_recursion = 0;
+our $maxdepth;
+our $no_markers = 0;
+our $opt_derive_func_data = 0;
+our $debug = 0;
+our $gcov_caps;
+our @gcov_options;
+
+our $cwd = `pwd`;
+chomp($cwd);
+
+
+#
+# Code entry point
+#
+
+# Register handler routine to be called when interrupted
+$SIG{"INT"} = \&int_handler;
+$SIG{__WARN__} = \&warn_handler;
+$SIG{__DIE__} = \&die_handler;
+
+# Prettify version string
+$lcov_version =~ s/\$\s*Revision\s*:?\s*(\S+)\s*\$/$1/;
+
+# Set LANG so that gcov output will be in a unified format
+$ENV{"LANG"} = "C";
+
+# Read configuration file if available
+if (defined($ENV{"HOME"}) && (-r $ENV{"HOME"}."/.lcovrc"))
+{
+ $config = read_config($ENV{"HOME"}."/.lcovrc");
+}
+elsif (-r "/etc/lcovrc")
+{
+ $config = read_config("/etc/lcovrc");
+}
+
+if ($config)
+{
+ # Copy configuration file values to variables
+ apply_config({
+ "geninfo_gcov_tool" => \$gcov_tool,
+ "geninfo_adjust_testname" => \$adjust_testname,
+ "geninfo_checksum" => \$checksum,
+ "geninfo_no_checksum" => \$no_checksum, # deprecated
+ "geninfo_compat_libtool" => \$compat_libtool});
+
+ # Merge options
+ if (defined($no_checksum))
+ {
+ $checksum = ($no_checksum ? 0 : 1);
+ $no_checksum = undef;
+ }
+}
+
+# Parse command line options
+if (!GetOptions("test-name|t=s" => \$test_name,
+ "output-filename|o=s" => \$output_filename,
+ "checksum" => \$checksum,
+ "no-checksum" => \$no_checksum,
+ "base-directory|b=s" => \$base_directory,
+ "version|v" =>\$version,
+ "quiet|q" => \$quiet,
+ "help|h|?" => \$help,
+ "follow|f" => \$follow,
+ "compat-libtool" => \$compat_libtool,
+ "no-compat-libtool" => \$no_compat_libtool,
+ "gcov-tool=s" => \$gcov_tool,
+ "ignore-errors=s" => \@ignore_errors,
+ "initial|i" => \$initial,
+ "no-recursion" => \$no_recursion,
+ "no-markers" => \$no_markers,
+ "derive-func-data" => \$opt_derive_func_data,
+ "debug" => \$debug,
+ ))
+{
+ print(STDERR "Use $tool_name --help to get usage information\n");
+ exit(1);
+}
+else
+{
+ # Merge options
+ if (defined($no_checksum))
+ {
+ $checksum = ($no_checksum ? 0 : 1);
+ $no_checksum = undef;
+ }
+
+ if (defined($no_compat_libtool))
+ {
+ $compat_libtool = ($no_compat_libtool ? 0 : 1);
+ $no_compat_libtool = undef;
+ }
+}
+
+@data_directory = @ARGV;
+
+# Check for help option
+if ($help)
+{
+ print_usage(*STDOUT);
+ exit(0);
+}
+
+# Check for version option
+if ($version)
+{
+ print("$tool_name: $lcov_version\n");
+ exit(0);
+}
+
+# Make sure test names only contain valid characters
+if ($test_name =~ s/\W/_/g)
+{
+ warn("WARNING: invalid characters removed from testname!\n");
+}
+
+# Adjust test name to include uname output if requested
+if ($adjust_testname)
+{
+ $test_name .= "__".`uname -a`;
+ chomp($test_name);
+ $test_name =~ s/\W/_/g;
+}
+
+# Make sure base_directory contains an absolute path specification
+if ($base_directory)
+{
+ $base_directory = solve_relative_path($cwd, $base_directory);
+}
+
+# Check for follow option
+if ($follow)
+{
+ $follow = "-follow";
+}
+else
+{
+ $follow = "";
+}
+
+# Determine checksum mode
+if (defined($checksum))
+{
+ # Normalize to boolean
+ $checksum = ($checksum ? 1 : 0);
+}
+else
+{
+ # Default is off
+ $checksum = 0;
+}
+
+# Determine libtool compatibility mode
+if (defined($compat_libtool))
+{
+ $compat_libtool = ($compat_libtool ? 1 : 0);
+}
+else
+{
+ # Default is on
+ $compat_libtool = 1;
+}
+
+# Determine max depth for recursion
+if ($no_recursion)
+{
+ $maxdepth = "-maxdepth 1";
+}
+else
+{
+ $maxdepth = "";
+}
+
+# Check for directory name
+if (!@data_directory)
+{
+ die("No directory specified\n".
+ "Use $tool_name --help to get usage information\n");
+}
+else
+{
+ foreach (@data_directory)
+ {
+ stat($_);
+ if (!-r _)
+ {
+ die("ERROR: cannot read $_!\n");
+ }
+ }
+}
+
+if (@ignore_errors)
+{
+ my @expanded;
+ my $error;
+
+ # Expand comma-separated entries
+ foreach (@ignore_errors) {
+ if (/,/)
+ {
+ push(@expanded, split(",", $_));
+ }
+ else
+ {
+ push(@expanded, $_);
+ }
+ }
+
+ foreach (@expanded)
+ {
+ /^gcov$/ && do { $ignore[$ERROR_GCOV] = 1; next; } ;
+ /^source$/ && do { $ignore[$ERROR_SOURCE] = 1; next; };
+ /^graph$/ && do { $ignore[$ERROR_GRAPH] = 1; next; };
+ die("ERROR: unknown argument for --ignore-errors: $_\n");
+ }
+}
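The comma-expansion step above (each `--ignore-errors` option may itself carry a comma-separated list) can be sketched in Python; the function name is illustrative, not part of geninfo:

```python
def expand_entries(entries):
    """Flatten option values so that "gcov,graph" and "source"
    both contribute individual entries."""
    expanded = []
    for entry in entries:
        expanded.extend(entry.split(","))  # no-op for entries without commas
    return expanded
```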
+
+if (system_no_output(3, $gcov_tool, "--help") == -1)
+{
+ die("ERROR: need tool $gcov_tool!\n");
+}
+
+$gcov_version = get_gcov_version();
+
+if ($gcov_version < $GCOV_VERSION_3_4_0)
+{
+ if (defined($compatibility) && $compatibility eq $COMPAT_HAMMER)
+ {
+ $data_file_extension = ".da";
+ $graph_file_extension = ".bbg";
+ }
+ else
+ {
+ $data_file_extension = ".da";
+ $graph_file_extension = ".bb";
+ }
+}
+else
+{
+ $data_file_extension = ".gcda";
+ $graph_file_extension = ".gcno";
+}
+
+# Determine gcov options
+$gcov_caps = get_gcov_capabilities();
+push(@gcov_options, "-b") if ($gcov_caps->{'branch-probabilities'});
+push(@gcov_options, "-c") if ($gcov_caps->{'branch-counts'});
+push(@gcov_options, "-a") if ($gcov_caps->{'all-blocks'});
+push(@gcov_options, "-p") if ($gcov_caps->{'preserve-paths'});
+
+# Check output filename
+if (defined($output_filename) && ($output_filename ne "-"))
+{
+ # Initially create output filename, data is appended
+ # for each data file processed
+ local *DUMMY_HANDLE;
+ open(DUMMY_HANDLE, ">$output_filename")
+ or die("ERROR: cannot create $output_filename!\n");
+ close(DUMMY_HANDLE);
+
+ # Make $output_filename an absolute path because we're going
+ # to change directories while processing files
+ if (!($output_filename =~ /^\/(.*)$/))
+ {
+ $output_filename = $cwd."/".$output_filename;
+ }
+}
+
+# Process each specified directory
+foreach my $entry (@data_directory) {
+ gen_info($entry);
+}
+
+if ($initial) {
+ warn("Note: --initial does not generate branch coverage ".
+ "data\n");
+}
+info("Finished .info-file creation\n");
+
+exit(0);
+
+
+
+#
+# print_usage(handle)
+#
+# Print usage information.
+#
+
+sub print_usage(*)
+{
+ local *HANDLE = $_[0];
+
+ print(HANDLE <<END_OF_USAGE);
+Usage: $tool_name [OPTIONS] DIRECTORY
+
+Traverse DIRECTORY and create a .info file for each data file found. Note
+that you may specify more than one directory, all of which are then processed
+sequentially.
+
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -q, --quiet Do not print progress messages
+ -i, --initial Capture initial zero coverage data
+ -t, --test-name NAME Use test case name NAME for resulting data
+ -o, --output-filename OUTFILE Write data only to OUTFILE
+ -f, --follow Follow links when searching for .da/.gcda files
+ -b, --base-directory DIR Use DIR as base directory for relative paths
+ --(no-)checksum Enable (disable) line checksumming
+ --(no-)compat-libtool Enable (disable) libtool compatibility mode
+ --gcov-tool TOOL Specify gcov tool location
+ --ignore-errors ERROR Continue after ERROR (gcov, source, graph)
+ --no-recursion Exclude subdirectories from processing
+ --no-markers Ignore exclusion markers in source code
+ --derive-func-data Generate function data from line data
+
+For more information see: $lcov_url
+END_OF_USAGE
+ ;
+}
+
+#
+# get_common_prefix(min_dir, filenames)
+#
+# Return the longest path prefix shared by all filenames. MIN_DIR specifies
+# the minimum number of directories that a filename may have after removing
+# the prefix.
+#
+
+sub get_common_prefix($@)
+{
+ my ($min_dir, @files) = @_;
+ my $file;
+ my @prefix;
+ my $i;
+
+ foreach $file (@files) {
+ my ($v, $d, $f) = splitpath($file);
+ my @comp = splitdir($d);
+
+ if (!@prefix) {
+ @prefix = @comp;
+ next;
+ }
+ for ($i = 0; $i < scalar(@comp) && $i < scalar(@prefix); $i++) {
+ if ($comp[$i] ne $prefix[$i] ||
+ ((scalar(@comp) - ($i + 1)) <= $min_dir)) {
+ splice(@prefix, $i);
+ last;
+ }
+ }
+ }
+
+ return catdir(@prefix);
+}
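The prefix-trimming loop above can be sketched in Python (function name and paths are illustrative): the shared prefix is truncated at the first mismatching component, or earlier if keeping it would leave a file with no more than `min_dir` directory components:

```python
def get_common_prefix(min_dir, files):
    """Longest directory prefix shared by all files, trimmed so that each
    file keeps more than min_dir directory components after removal."""
    prefix = None
    for f in files:
        comp = f.split("/")[:-1]  # directory components, filename dropped
        if prefix is None:
            prefix = comp
            continue
        for i in range(min(len(comp), len(prefix))):
            if comp[i] != prefix[i] or (len(comp) - (i + 1)) <= min_dir:
                del prefix[i:]  # keep only the first i components
                break
    return "/".join(prefix or [])
```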
+
+#
+# gen_info(directory)
+#
+# Traverse DIRECTORY and create a .info file for each data file found.
+# The .info file contains TEST_NAME in the following format:
+#
+# TN:<test name>
+#
+# For each source file name referenced in the data file, there is a section
+# containing source code and coverage data:
+#
+# SF:<absolute path to the source file>
+# FN:<line number of function start>,<function name> for each function
+# DA:<line number>,<execution count> for each instrumented line
+# LH:<number of lines with an execution count greater than 0>
+# LF:<number of instrumented lines>
+#
+# Sections are separated by:
+#
+# end_of_record
+#
+# In addition to the main source code file there are sections for each
+# #included file containing executable code. Note that the absolute path
+# of a source file is generated by interpreting the contents of the respective
+# graph file. Relative filenames are prefixed with the directory in which the
+# graph file is found. Note also that symbolic links to the graph file will be
+# resolved so that the actual file path is used instead of the path to a link.
+# This approach is necessary for the mechanism to work with the /proc/gcov
+# files.
+#
+# Die on error.
+#
+
+sub gen_info($)
+{
+ my $directory = $_[0];
+ my @file_list;
+ my $file;
+ my $prefix;
+ my $type;
+ my $ext;
+
+ if ($initial) {
+ $type = "graph";
+ $ext = $graph_file_extension;
+ } else {
+ $type = "data";
+ $ext = $data_file_extension;
+ }
+
+ if (-d $directory)
+ {
+ info("Scanning $directory for $ext files ...\n");
+
+ @file_list = `find "$directory" $maxdepth $follow -name \\*$ext -type f 2>/dev/null`;
+ chomp(@file_list);
+ @file_list or
+ die("ERROR: no $ext files found in $directory!\n");
+ $prefix = get_common_prefix(1, @file_list);
+ info("Found %d %s files in %s\n", $#file_list+1, $type,
+ $directory);
+ }
+ else
+ {
+ @file_list = ($directory);
+ $prefix = "";
+ }
+
+ # Process all files in list
+ foreach $file (@file_list) {
+ # Process file
+ if ($initial) {
+ process_graphfile($file, $prefix);
+ } else {
+ process_dafile($file, $prefix);
+ }
+ }
+}
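A complete record in the layout described above might look as follows (test name, path, and function name are hypothetical, chosen for illustration only):

```
TN:nightly_run
SF:/home/user/project/gauss.c
FN:12,gauss_sum
FNDA:3,gauss_sum
FNF:1
FNH:1
DA:12,3
DA:13,3
DA:15,0
LF:3
LH:2
end_of_record
```

Here three lines are instrumented (`LF:3`), two of them were executed (`LH:2`), and the single function was called three times.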
+
+
+sub derive_data($$$)
+{
+ my ($contentdata, $funcdata, $bbdata) = @_;
+ my @gcov_content = @{$contentdata};
+ my @gcov_functions = @{$funcdata};
+ my %fn_count;
+ my %ln_fn;
+ my $line;
+ my $maxline;
+ my %fn_name;
+ my $fn;
+ my $count;
+
+ if (!defined($bbdata)) {
+ return @gcov_functions;
+ }
+
+ # First add existing function data
+ while (@gcov_functions) {
+ $count = shift(@gcov_functions);
+ $fn = shift(@gcov_functions);
+
+ $fn_count{$fn} = $count;
+ }
+
+ # Convert line coverage data to function data
+ foreach $fn (keys(%{$bbdata})) {
+ my $line_data = $bbdata->{$fn};
+ my $line;
+
+ if ($fn eq "") {
+ next;
+ }
+ # Find the lowest line count for this function
+ $count = 0;
+ foreach $line (@$line_data) {
+ my $lcount = $gcov_content[ ( $line - 1 ) * 3 + 1 ];
+
+ if (($lcount > 0) &&
+ (($count == 0) || ($lcount < $count))) {
+ $count = $lcount;
+ }
+ }
+ $fn_count{$fn} = $count;
+ }
+
+
+ # Check if we got data for all functions
+ foreach $fn (keys(%fn_name)) {
+ if ($fn eq "") {
+ next;
+ }
+ if (defined($fn_count{$fn})) {
+ next;
+ }
+ warn("WARNING: no derived data found for function $fn\n");
+ }
+
+ # Convert hash to list in @gcov_functions format
+ foreach $fn (sort(keys(%fn_count))) {
+ push(@gcov_functions, $fn_count{$fn}, $fn);
+ }
+
+ return @gcov_functions;
+}
+
+#
+# get_filenames(directory, pattern)
+#
+# Return a list of filenames found in directory which match the specified
+# pattern.
+#
+# Die on error.
+#
+
+sub get_filenames($$)
+{
+ my ($dirname, $pattern) = @_;
+ my @result;
+ my $directory;
+ local *DIR;
+
+ opendir(DIR, $dirname) or
+ die("ERROR: cannot read directory $dirname\n");
+ while ($directory = readdir(DIR)) {
+ push(@result, $directory) if ($directory =~ /$pattern/);
+ }
+ closedir(DIR);
+
+ return @result;
+}
+
+#
+# process_dafile(da_filename, dir)
+#
+# Create a .info file for a single data file.
+#
+# Die on error.
+#
+
+sub process_dafile($$)
+{
+ my ($file, $dir) = @_;
+ my $da_filename; # Name of data file to process
+ my $da_dir; # Directory of data file
+ my $source_dir; # Directory of source file
+ my $da_basename; # data filename without ".da/.gcda" extension
+ my $bb_filename; # Name of respective graph file
+ my $bb_basename; # Basename of the original graph file
+ my $graph; # Contents of graph file
+ my $instr; # Contents of graph file part 2
+ my $gcov_error; # Error code of gcov tool
+ my $object_dir; # Directory containing all object files
+ my $source_filename; # Name of a source code file
+ my $gcov_file; # Name of a .gcov file
+ my @gcov_content; # Content of a .gcov file
+ my $gcov_branches; # Branch content of a .gcov file
+ my @gcov_functions; # Function calls of a .gcov file
+ my @gcov_list; # List of generated .gcov files
+ my $line_number; # Line number count
+ my $lines_hit; # Number of instrumented lines hit
+ my $lines_found; # Number of instrumented lines found
+ my $funcs_hit; # Number of instrumented functions hit
+ my $funcs_found; # Number of instrumented functions found
+ my $br_hit;
+ my $br_found;
+ my $source; # gcov source header information
+ my $object; # gcov object header information
+ my @matches; # List of absolute paths matching filename
+ my @unprocessed; # List of unprocessed source code files
+ my $base_dir; # Base directory for current file
+ my @tmp_links; # Temporary links to be cleaned up
+ my @result;
+ my $index;
+ my $da_renamed; # If data file is to be renamed
+ local *INFO_HANDLE;
+
+ info("Processing %s\n", abs2rel($file, $dir));
+ # Get path to data file in absolute and normalized form (begins with /,
+ # contains no more ../ or ./)
+ $da_filename = solve_relative_path($cwd, $file);
+
+ # Get directory and basename of data file
+ ($da_dir, $da_basename) = split_filename($da_filename);
+
+ # avoid files from .libs dirs
+ if ($compat_libtool && $da_dir =~ m/(.*)\/\.libs$/) {
+ $source_dir = $1;
+ } else {
+ $source_dir = $da_dir;
+ }
+
+ if (-z $da_filename)
+ {
+ $da_renamed = 1;
+ }
+ else
+ {
+ $da_renamed = 0;
+ }
+
+ # Construct base_dir for current file
+ if ($base_directory)
+ {
+ $base_dir = $base_directory;
+ }
+ else
+ {
+ $base_dir = $source_dir;
+ }
+
+ # Check for writable $base_dir (gcov will try to write files there)
+ stat($base_dir);
+ if (!-w _)
+ {
+ die("ERROR: cannot write to directory $base_dir!\n");
+ }
+
+ # Construct name of graph file
+ $bb_basename = $da_basename.$graph_file_extension;
+ $bb_filename = "$da_dir/$bb_basename";
+
+ # Find out the real location of graph file in case we're just looking at
+ # a link
+ while (readlink($bb_filename))
+ {
+ my $last_dir = dirname($bb_filename);
+
+ $bb_filename = readlink($bb_filename);
+ $bb_filename = solve_relative_path($last_dir, $bb_filename);
+ }
+
+ # Ignore empty graph file (e.g. source file with no statement)
+ if (-z $bb_filename)
+ {
+ warn("WARNING: empty $bb_filename (skipped)\n");
+ return;
+ }
+
+ # Read contents of graph file into hash. We need it later to find out
+ # the absolute path to each .gcov file created as well as for
+ # information about functions and their source code positions.
+ if ($gcov_version < $GCOV_VERSION_3_4_0)
+ {
+ if (defined($compatibility) && $compatibility eq $COMPAT_HAMMER)
+ {
+ ($instr, $graph) = read_bbg($bb_filename, $base_dir);
+ }
+ else
+ {
+ ($instr, $graph) = read_bb($bb_filename, $base_dir);
+ }
+ }
+ else
+ {
+ ($instr, $graph) = read_gcno($bb_filename, $base_dir);
+ }
+
+ # Set $object_dir to real location of object files. This may differ
+ # from $da_dir if the graph file is just a link to the "real" object
+ # file location.
+ $object_dir = dirname($bb_filename);
+
+ # Is the data file in a different directory? (this happens e.g. with
+ # the gcov-kernel patch)
+ if ($object_dir ne $da_dir)
+ {
+ # Need to create link to data file in $object_dir
+ system("ln", "-s", $da_filename,
+ "$object_dir/$da_basename$data_file_extension")
+ and die ("ERROR: cannot create link $object_dir/".
+ "$da_basename$data_file_extension!\n");
+ push(@tmp_links,
+ "$object_dir/$da_basename$data_file_extension");
+ # Need to create link to graph file if basename of link
+ # and file are different (CONFIG_MODVERSION compat)
+ if ((basename($bb_filename) ne $bb_basename) &&
+ (! -e "$object_dir/$bb_basename")) {
+ symlink($bb_filename, "$object_dir/$bb_basename") or
+ warn("WARNING: cannot create link ".
+ "$object_dir/$bb_basename\n");
+ push(@tmp_links, "$object_dir/$bb_basename");
+ }
+ }
+
+ # Change to directory containing data files and apply GCOV
+ chdir($base_dir);
+
+ if ($da_renamed)
+ {
+ # Need to rename empty data file to workaround
+ # gcov <= 3.2.x bug (Abort)
+ system_no_output(3, "mv", "$da_filename", "$da_filename.ori")
+ and die ("ERROR: cannot rename $da_filename\n");
+ }
+
+ # Execute gcov command and suppress standard output
+ $gcov_error = system_no_output(1, $gcov_tool, $da_filename,
+ "-o", $object_dir, @gcov_options);
+
+ if ($da_renamed)
+ {
+ system_no_output(3, "mv", "$da_filename.ori", "$da_filename")
+ and die ("ERROR: cannot rename $da_filename.ori");
+ }
+
+ # Clean up temporary links
+ foreach (@tmp_links) {
+ unlink($_);
+ }
+
+ if ($gcov_error)
+ {
+ if ($ignore[$ERROR_GCOV])
+ {
+ warn("WARNING: GCOV failed for $da_filename!\n");
+ return;
+ }
+ die("ERROR: GCOV failed for $da_filename!\n");
+ }
+
+ # Collect data from resulting .gcov files and create .info file
+ @gcov_list = get_filenames('.', '\.gcov$');
+
+ # Check for files
+ if (!@gcov_list)
+ {
+ warn("WARNING: gcov did not create any files for ".
+ "$da_filename!\n");
+ }
+
+ # Check whether we're writing to a single file
+ if ($output_filename)
+ {
+ if ($output_filename eq "-")
+ {
+ *INFO_HANDLE = *STDOUT;
+ }
+ else
+ {
+ # Append to output file
+ open(INFO_HANDLE, ">>$output_filename")
+ or die("ERROR: cannot write to ".
+ "$output_filename!\n");
+ }
+ }
+ else
+ {
+ # Open .info file for output
+ open(INFO_HANDLE, ">$da_filename.info")
+ or die("ERROR: cannot create $da_filename.info!\n");
+ }
+
+ # Write test name
+ printf(INFO_HANDLE "TN:%s\n", $test_name);
+
+ # Traverse the list of generated .gcov files and combine them into a
+ # single .info file
+ @unprocessed = keys(%{$instr});
+ foreach $gcov_file (sort(@gcov_list))
+ {
+ my $i;
+ my $num;
+
+ ($source, $object) = read_gcov_header($gcov_file);
+
+ if (defined($source))
+ {
+ $source = solve_relative_path($base_dir, $source);
+ }
+
+ # gcov will happily create output even if there's no source code
+ # available - this interferes with checksum creation so we need
+ # to pull the emergency brake here.
+ if (defined($source) && ! -r $source && $checksum)
+ {
+ if ($ignore[$ERROR_SOURCE])
+ {
+ warn("WARNING: could not read source file ".
+ "$source\n");
+ next;
+ }
+ die("ERROR: could not read source file $source\n");
+ }
+
+ @matches = match_filename(defined($source) ? $source :
+ $gcov_file, keys(%{$instr}));
+
+ # Skip files that are not mentioned in the graph file
+ if (!@matches)
+ {
+ warn("WARNING: cannot find an entry for ".$gcov_file.
+ " in $graph_file_extension file, skipping ".
+ "file!\n");
+ unlink($gcov_file);
+ next;
+ }
+
+ # Read in contents of gcov file
+ @result = read_gcov_file($gcov_file);
+ if (!defined($result[0])) {
+ warn("WARNING: skipping unreadable file ".
+ $gcov_file."\n");
+ unlink($gcov_file);
+ next;
+ }
+ @gcov_content = @{$result[0]};
+ $gcov_branches = $result[1];
+ @gcov_functions = @{$result[2]};
+
+ # Skip empty files
+ if (!@gcov_content)
+ {
+ warn("WARNING: skipping empty file ".$gcov_file."\n");
+ unlink($gcov_file);
+ next;
+ }
+
+ if (scalar(@matches) == 1)
+ {
+ # Just one match
+ $source_filename = $matches[0];
+ }
+ else
+ {
+ # Try to solve the ambiguity
+ $source_filename = solve_ambiguous_match($gcov_file,
+ \@matches, \@gcov_content);
+ }
+
+ # Remove processed file from list
+ for ($index = scalar(@unprocessed) - 1; $index >= 0; $index--)
+ {
+ if ($unprocessed[$index] eq $source_filename)
+ {
+ splice(@unprocessed, $index, 1);
+ last;
+ }
+ }
+
+ # Write absolute path of source file
+ printf(INFO_HANDLE "SF:%s\n", $source_filename);
+
+ # If requested, derive function coverage data from
+ # line coverage data of the first line of a function
+ if ($opt_derive_func_data) {
+ @gcov_functions =
+ derive_data(\@gcov_content, \@gcov_functions,
+ $graph->{$source_filename});
+ }
+
+ # Write function-related information
+ if (defined($graph->{$source_filename}))
+ {
+ my $fn_data = $graph->{$source_filename};
+ my $fn;
+
+ foreach $fn (sort
+ {$fn_data->{$a}->[0] <=> $fn_data->{$b}->[0]}
+ keys(%{$fn_data})) {
+ my $ln_data = $fn_data->{$fn};
+ my $line = $ln_data->[0];
+
+ # Skip empty function
+ if ($fn eq "") {
+ next;
+ }
+ # Remove excluded functions
+ if (!$no_markers) {
+ my $gfn;
+ my $found = 0;
+
+ foreach $gfn (@gcov_functions) {
+ if ($gfn eq $fn) {
+ $found = 1;
+ last;
+ }
+ }
+ if (!$found) {
+ next;
+ }
+ }
+
+ # Normalize function name
+ $fn = filter_fn_name($fn);
+
+ print(INFO_HANDLE "FN:$line,$fn\n");
+ }
+ }
+
+ #--
+ #-- FNDA: <call-count>, <function-name>
+ #-- FNF: overall count of functions
+ #-- FNH: overall count of functions with non-zero call count
+ #--
+ $funcs_found = 0;
+ $funcs_hit = 0;
+ while (@gcov_functions)
+ {
+ my $count = shift(@gcov_functions);
+ my $fn = shift(@gcov_functions);
+
+ $fn = filter_fn_name($fn);
+ print(INFO_HANDLE "FNDA:$count,$fn\n");
+ $funcs_found++;
+ $funcs_hit++ if ($count > 0);
+ }
+ if ($funcs_found > 0) {
+ printf(INFO_HANDLE "FNF:%s\n", $funcs_found);
+ printf(INFO_HANDLE "FNH:%s\n", $funcs_hit);
+ }
+
+ # Write coverage information for each instrumented branch:
+ #
+ # BRDA:<line number>,<block number>,<branch number>,<taken>
+ #
+ # where 'taken' is the number of times the branch was taken
+ # or '-' if the block to which the branch belongs was never
+ # executed
+ $br_found = 0;
+ $br_hit = 0;
+ $num = br_gvec_len($gcov_branches);
+ for ($i = 0; $i < $num; $i++) {
+ my ($line, $block, $branch, $taken) =
+ br_gvec_get($gcov_branches, $i);
+
+ print(INFO_HANDLE "BRDA:$line,$block,$branch,$taken\n");
+ $br_found++;
+ $br_hit++ if ($taken ne '-' && $taken > 0);
+ }
+ if ($br_found > 0) {
+ printf(INFO_HANDLE "BRF:%s\n", $br_found);
+ printf(INFO_HANDLE "BRH:%s\n", $br_hit);
+ }
+
+ # Reset line counters
+ $line_number = 0;
+ $lines_found = 0;
+ $lines_hit = 0;
+
+ # Write coverage information for each instrumented line
+ # Note: @gcov_content contains a list of (flag, count, source)
+ # tuple for each source code line
+ while (@gcov_content)
+ {
+ $line_number++;
+
+ # Check for instrumented line
+ if ($gcov_content[0])
+ {
+ $lines_found++;
+ printf(INFO_HANDLE "DA:".$line_number.",".
+ $gcov_content[1].($checksum ?
+ ",". md5_base64($gcov_content[2]) : "").
+ "\n");
+
+ # Increase $lines_hit in case of an execution
+ # count>0
+ if ($gcov_content[1] > 0) { $lines_hit++; }
+ }
+
+ # Remove already processed data from array
+ splice(@gcov_content,0,3);
+ }
+
+ # Write line statistics and section separator
+ printf(INFO_HANDLE "LF:%s\n", $lines_found);
+ printf(INFO_HANDLE "LH:%s\n", $lines_hit);
+ print(INFO_HANDLE "end_of_record\n");
+
+ # Remove .gcov file after processing
+ unlink($gcov_file);
+ }
+
+ # Check for files which show up in the graph file but were never
+ # processed
+ if (@unprocessed && @gcov_list)
+ {
+ foreach (@unprocessed)
+ {
+ warn("WARNING: no data found for $_\n");
+ }
+ }
+
+ if (!($output_filename && ($output_filename eq "-")))
+ {
+ close(INFO_HANDLE);
+ }
+
+ # Change back to initial directory
+ chdir($cwd);
+}
+
+
+#
+# solve_relative_path(path, dir)
+#
+# Resolve relative path components of DIR. If DIR is not absolute, it is
+# interpreted relative to PATH.
+#
+
+sub solve_relative_path($$)
+{
+ my $path = $_[0];
+ my $dir = $_[1];
+ my $result;
+
+ $result = $dir;
+ # Prepend path if not absolute
+ if ($dir =~ /^[^\/]/)
+ {
+ $result = "$path/$result";
+ }
+
+ # Remove //
+ $result =~ s/\/\//\//g;
+
+ # Remove .
+ $result =~ s/\/\.\//\//g;
+
+ # Solve ..
+ while ($result =~ s/\/[^\/]+\/\.\.\//\//)
+ {
+ }
+
+ # Remove preceding ..
+ $result =~ s/^\/\.\.\//\//g;
+
+ return $result;
+}
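The substitution-based cleanup above can be sketched in Python (a rough textual normalization, not a full `realpath`; names and paths are illustrative):

```python
import re

def solve_relative_path(path, dir_):
    """Prepend path if dir_ is relative, then collapse '//', '/./'
    and '<component>/../' sequences, mirroring the regex cleanup."""
    result = dir_ if dir_.startswith("/") else path + "/" + dir_
    result = result.replace("//", "/")
    result = result.replace("/./", "/")
    # Repeatedly resolve one "component/.." pair until stable
    prev = None
    while prev != result:
        prev = result
        result = re.sub(r"/[^/]+/\.\./", "/", result, count=1)
    return result
```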
+
+
+#
+# match_filename(gcov_filename, list)
+#
+# Return a list of those entries of LIST which match the relative filename
+# GCOV_FILENAME.
+#
+
+sub match_filename($@)
+{
+ my ($filename, @list) = @_;
+ my ($vol, $dir, $file) = splitpath($filename);
+ my @comp = splitdir($dir);
+ my $comps = scalar(@comp);
+ my $entry;
+ my @result;
+
+entry:
+ foreach $entry (@list) {
+ my ($evol, $edir, $efile) = splitpath($entry);
+ my @ecomp;
+ my $ecomps;
+ my $i;
+
+ # Filename component must match
+ if ($efile ne $file) {
+ next;
+ }
+ # Check directory components last to first for match
+ @ecomp = splitdir($edir);
+ $ecomps = scalar(@ecomp);
+ if ($ecomps < $comps) {
+ next;
+ }
+ for ($i = 0; $i < $comps; $i++) {
+ if ($comp[$comps - $i - 1] ne
+ $ecomp[$ecomps - $i - 1]) {
+ next entry;
+ }
+ }
+ push(@result, $entry);
+ }
+
+ return @result;
+}
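The trailing-component comparison can be sketched in Python (a simplified string-splitting version of the `splitpath`/`splitdir` logic; names are illustrative): a candidate matches when its basename is identical and its last directory components agree with all of the pattern's components:

```python
def match_filename(filename, candidates):
    """Return the candidates whose basename matches filename and whose
    trailing directory components agree with those of filename."""
    parts = filename.split("/")
    fname, comps = parts[-1], parts[:-1]
    result = []
    for entry in candidates:
        eparts = entry.split("/")
        efile, ecomps = eparts[-1], eparts[:-1]
        # Basename must match and the candidate must be at least as deep
        if efile != fname or len(ecomps) < len(comps):
            continue
        if ecomps[len(ecomps) - len(comps):] == comps:
            result.append(entry)
    return result
```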
+
+#
+# solve_ambiguous_match(rel_filename, matches_ref, gcov_content_ref)
+#
+# Try to solve ambiguous matches of mapping (gcov file) -> (source code) file
+# by comparing source code provided in the GCOV file with that of the files
+# in MATCHES. REL_FILENAME identifies the relative filename of the gcov
+# file.
+#
+# Return the one real match or die if there is none.
+#
+
+sub solve_ambiguous_match($$$)
+{
+ my $rel_name = $_[0];
+ my $matches = $_[1];
+ my $content = $_[2];
+ my $filename;
+ my $index;
+ my $no_match;
+ local *SOURCE;
+
+ # Check the list of matches
+ foreach $filename (@$matches)
+ {
+
+ # Compare file contents
+ open(SOURCE, $filename)
+ or die("ERROR: cannot read $filename!\n");
+
+ $no_match = 0;
+ for ($index = 2; <SOURCE>; $index += 3)
+ {
+ chomp;
+
+ # Also remove CR from line-end
+ s/\015$//;
+
+ if ($_ ne @$content[$index])
+ {
+ $no_match = 1;
+ last;
+ }
+ }
+
+ close(SOURCE);
+
+ if (!$no_match)
+ {
+ info("Solved source file ambiguity for $rel_name\n");
+ return $filename;
+ }
+ }
+
+ die("ERROR: could not match gcov data for $rel_name!\n");
+}
+
+
+#
+# split_filename(filename)
+#
+# Return (path, filename, extension) for a given FILENAME.
+#
+
+sub split_filename($)
+{
+ my @path_components = split('/', $_[0]);
+ my @file_components = split('\.', pop(@path_components));
+ my $extension = pop(@file_components);
+
+ return (join("/",@path_components), join(".",@file_components),
+ $extension);
+}
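The same split-on-`/`-then-`.` decomposition can be sketched in Python (illustrative only; note it inherits the quirk that a dotless filename yields itself as the "extension"):

```python
def split_filename(filename):
    """Split into (path, basename without extension, extension),
    mirroring the successive splits on '/' and '.' above."""
    path_components = filename.split("/")
    file_components = path_components.pop().split(".")
    extension = file_components.pop()  # last '.'-separated part
    return ("/".join(path_components), ".".join(file_components), extension)
```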
+
+
+#
+# read_gcov_header(gcov_filename)
+#
+# Parse file GCOV_FILENAME and return a list containing the following
+# information:
+#
+# (source, object)
+#
+# where:
+#
+# source: complete relative path of the source code file (gcc >= 3.3 only)
+# object: name of associated graph file
+#
+# Die on error.
+#
+
+sub read_gcov_header($)
+{
+ my $source;
+ my $object;
+ local *INPUT;
+
+ if (!open(INPUT, $_[0]))
+ {
+ if ($ignore[$ERROR_GCOV])
+ {
+ warn("WARNING: cannot read $_[0]!\n");
+ return (undef,undef);
+ }
+ die("ERROR: cannot read $_[0]!\n");
+ }
+
+ while (<INPUT>)
+ {
+ chomp($_);
+
+ # Also remove CR from line-end
+ s/\015$//;
+
+ if (/^\s+-:\s+0:Source:(.*)$/)
+ {
+ # Source: header entry
+ $source = $1;
+ }
+ elsif (/^\s+-:\s+0:Object:(.*)$/)
+ {
+ # Object: header entry
+ $object = $1;
+ }
+ else
+ {
+ last;
+ }
+ }
+
+ close(INPUT);
+
+ return ($source, $object);
+}
+
+
+#
+# br_gvec_len(vector)
+#
+# Return the number of entries in the branch coverage vector.
+#
+
+sub br_gvec_len($)
+{
+ my ($vec) = @_;
+
+ return 0 if (!defined($vec));
+ return (length($vec) * 8 / $BR_VEC_WIDTH) / $BR_VEC_ENTRIES;
+}
+
+
+#
+# br_gvec_get(vector, number)
+#
+# Return an entry from the branch coverage vector.
+#
+
+sub br_gvec_get($$)
+{
+ my ($vec, $num) = @_;
+ my $line;
+ my $block;
+ my $branch;
+ my $taken;
+ my $offset = $num * $BR_VEC_ENTRIES;
+
+ # Retrieve data from vector
+ $line = vec($vec, $offset + $BR_LINE, $BR_VEC_WIDTH);
+ $block = vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH);
+ $branch = vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH);
+ $taken = vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH);
+
+ # Decode taken value from an integer
+ if ($taken == 0) {
+ $taken = "-";
+ } else {
+ $taken--;
+ }
+
+ return ($line, $block, $branch, $taken);
+}
+
+
+#
+# br_gvec_push(vector, line, block, branch, taken)
+#
+# Add an entry to the branch coverage vector.
+#
+
+sub br_gvec_push($$$$$)
+{
+ my ($vec, $line, $block, $branch, $taken) = @_;
+ my $offset;
+
+ $vec = "" if (!defined($vec));
+ $offset = br_gvec_len($vec) * $BR_VEC_ENTRIES;
+
+ # Encode taken value into an integer
+ if ($taken eq "-") {
+ $taken = 0;
+ } else {
+ $taken++;
+ }
+
+ # Add to vector
+ vec($vec, $offset + $BR_LINE, $BR_VEC_WIDTH) = $line;
+ vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH) = $block;
+ vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH) = $branch;
+ vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH) = $taken;
+
+ return $vec;
+}
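The `taken` encoding shared by `br_gvec_get` and `br_gvec_push` is worth spelling out: `'-'` (block never executed) is stored as 0, and a real count n as n + 1, so that a taken-count of zero remains distinguishable from "never executed". A minimal sketch (function names are illustrative):

```python
def encode_taken(taken):
    """'-' (block never executed) -> 0; count n -> n + 1."""
    return 0 if taken == "-" else taken + 1

def decode_taken(value):
    """Inverse mapping applied when reading entries back."""
    return "-" if value == 0 else value - 1
```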
+
+
+#
+# read_gcov_file(gcov_filename)
+#
+# Parse file GCOV_FILENAME (.gcov file format) and return the list:
+# (reference to gcov_content, reference to gcov_branch, reference to gcov_func)
+#
+# gcov_content is a flat list with three entries
+# (flag, count, source) per source code line:
+#
+# $result[($line_number-1)*3+0] = instrumentation flag for line $line_number
+# $result[($line_number-1)*3+1] = execution count for line $line_number
+# $result[($line_number-1)*3+2] = source code text for line $line_number
+#
+# gcov_branch is a vector of 4 4-byte long elements for each branch:
+# line number, block number, branch number, count + 1 or 0
+#
+# gcov_func is a list of 2 elements
+# (number of calls, function name) for each function
+#
+# Die on error.
+#
+
+sub read_gcov_file($)
+{
+ my $filename = $_[0];
+ my @result = ();
+ my $branches = "";
+ my @functions = ();
+ my $number;
+ my $exclude_flag = 0;
+ my $exclude_line = 0;
+ my $last_block = $UNNAMED_BLOCK;
+ my $last_line = 0;
+ local *INPUT;
+
+ if (!open(INPUT, $filename)) {
+ if ($ignore[$ERROR_GCOV])
+ {
+ warn("WARNING: cannot read $filename!\n");
+ return (undef, undef, undef);
+ }
+ die("ERROR: cannot read $filename!\n");
+ }
+
+ if ($gcov_version < $GCOV_VERSION_3_3_0)
+ {
+ # Expect gcov format as used in gcc < 3.3
+ while (<INPUT>)
+ {
+ chomp($_);
+
+ # Also remove CR from line-end
+ s/\015$//;
+
+ if (/^branch\s+(\d+)\s+taken\s+=\s+(\d+)/) {
+ next if ($exclude_line);
+ $branches = br_gvec_push($branches, $last_line,
+ $last_block, $1, $2);
+ } elsif (/^branch\s+(\d+)\s+never\s+executed/) {
+ next if ($exclude_line);
+ $branches = br_gvec_push($branches, $last_line,
+ $last_block, $1, '-');
+ }
+ elsif (/^call/ || /^function/)
+ {
+ # Function call return data
+ }
+ else
+ {
+ $last_line++;
+ # Check for exclusion markers
+ if (!$no_markers) {
+ if (/$EXCL_STOP/) {
+ $exclude_flag = 0;
+ } elsif (/$EXCL_START/) {
+ $exclude_flag = 1;
+ }
+ if (/$EXCL_LINE/ || $exclude_flag) {
+ $exclude_line = 1;
+ } else {
+ $exclude_line = 0;
+ }
+ }
+ # Source code execution data
+ if (/^\t\t(.*)$/)
+ {
+ # Uninstrumented line
+ push(@result, 0);
+ push(@result, 0);
+ push(@result, $1);
+ next;
+ }
+ $number = (split(" ",substr($_, 0, 16)))[0];
+
+ # Check for zero count which is indicated
+ # by ######
+ if ($number eq "######") { $number = 0; }
+
+ if ($exclude_line) {
+ # Register uninstrumented line instead
+ push(@result, 0);
+ push(@result, 0);
+ } else {
+ push(@result, 1);
+ push(@result, $number);
+ }
+ push(@result, substr($_, 16));
+ }
+ }
+ }
+ else
+ {
+ # Expect gcov format as used in gcc >= 3.3
+ while (<INPUT>)
+ {
+ chomp($_);
+
+ # Also remove CR from line-end
+ s/\015$//;
+
+ if (/^\s*(\d+|\$+):\s*(\d+)-block\s+(\d+)\s*$/) {
+ # Block information - used to group related
+ # branches
+ $last_line = $2;
+ $last_block = $3;
+ } elsif (/^branch\s+(\d+)\s+taken\s+(\d+)/) {
+ next if ($exclude_line);
+ $branches = br_gvec_push($branches, $last_line,
+ $last_block, $1, $2);
+ } elsif (/^branch\s+(\d+)\s+never\s+executed/) {
+ next if ($exclude_line);
+ $branches = br_gvec_push($branches, $last_line,
+ $last_block, $1, '-');
+ }
+ elsif (/^function\s+(\S+)\s+called\s+(\d+)/)
+ {
+ if ($exclude_line) {
+ next;
+ }
+ push(@functions, $2, $1);
+ }
+ elsif (/^call/)
+ {
+ # Function call return data
+ }
+ elsif (/^\s*([^:]+):\s*([^:]+):(.*)$/)
+ {
+ my ($count, $line, $code) = ($1, $2, $3);
+
+ $last_line = $line;
+ $last_block = $UNNAMED_BLOCK;
+ # Check for exclusion markers
+ if (!$no_markers) {
+ if (/$EXCL_STOP/) {
+ $exclude_flag = 0;
+ } elsif (/$EXCL_START/) {
+ $exclude_flag = 1;
+ }
+ if (/$EXCL_LINE/ || $exclude_flag) {
+ $exclude_line = 1;
+ } else {
+ $exclude_line = 0;
+ }
+ }
+ # <exec count>:<line number>:<source code>
+ if ($line eq "0")
+ {
+ # Extra data
+ }
+ elsif ($count eq "-")
+ {
+ # Uninstrumented line
+ push(@result, 0);
+ push(@result, 0);
+ push(@result, $code);
+ }
+ else
+ {
+ if ($exclude_line) {
+ push(@result, 0);
+ push(@result, 0);
+ } else {
+ # Check for zero count
+ if ($count eq "#####") {
+ $count = 0;
+ }
+ push(@result, 1);
+ push(@result, $count);
+ }
+ push(@result, $code);
+ }
+ }
+ }
+ }
+
+ close(INPUT);
+ if ($exclude_flag) {
+ warn("WARNING: unterminated exclusion section in $filename\n");
+ }
+ return(\@result, $branches, \@functions);
+}
+
+
+#
+# Get the GCOV tool version. Return an integer number which represents the
+# GCOV version. Version numbers can be compared using standard integer
+# operations.
+#
+
+sub get_gcov_version()
+{
+ local *GCOV_PIPE;
+ my $version_string;
+ my $result;
+
+ open(GCOV_PIPE, "$gcov_tool -v |")
+ or die("ERROR: cannot retrieve gcov version!\n");
+ $version_string = <GCOV_PIPE>;
+ close(GCOV_PIPE);
+
+ $result = 0;
+ if ($version_string =~ /(\d+)\.(\d+)(\.(\d+))?/)
+ {
+ if (defined($4))
+ {
+ info("Found gcov version: $1.$2.$4\n");
+ $result = $1 << 16 | $2 << 8 | $4;
+ }
+ else
+ {
+ info("Found gcov version: $1.$2\n");
+ $result = $1 << 16 | $2 << 8;
+ }
+ }
+ if ($version_string =~ /suse/i && $result == 0x30303 ||
+ $version_string =~ /mandrake/i && $result == 0x30302)
+ {
+ info("Using compatibility mode for GCC 3.3 (hammer)\n");
+ $compatibility = $COMPAT_HAMMER;
+ }
+ return $result;
+}
+
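As a standalone illustration (not part of the patch), the packed-integer version encoding that `get_gcov_version` produces can be sketched in Python. The function name is hypothetical; the bit layout mirrors the Perl expression `$1 << 16 | $2 << 8 | $4`:

```python
def pack_gcov_version(major, minor, patch=0):
    """Pack a gcov version triple into one comparable integer.

    Mirrors the Perl encoding: major << 16 | minor << 8 | patch,
    so versions compare correctly with plain integer operators.
    """
    return major << 16 | minor << 8 | patch


# The SuSE/Mandrake compatibility checks above compare against these values:
assert pack_gcov_version(3, 3, 3) == 0x30303   # SuSE GCC 3.3 (hammer)
assert pack_gcov_version(3, 3, 2) == 0x30302   # Mandrake GCC 3.3.2
# Ordering works as expected across the packed fields:
assert pack_gcov_version(3, 3, 3) < pack_gcov_version(3, 4, 0)
```

Eight bits per component suffices here because gcov minor and patch levels never approach 256.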
+
+#
+# info(printf_parameter)
+#
+# Use printf to write PRINTF_PARAMETER to stdout only when the $quiet flag
+# is not set.
+#
+
+sub info(@)
+{
+ if (!$quiet)
+ {
+ # Print info string
+ if (defined($output_filename) && ($output_filename eq "-"))
+ {
+ # Don't interfere with the .info output to STDOUT
+ printf(STDERR @_);
+ }
+ else
+ {
+ printf(@_);
+ }
+ }
+}
+
+
+#
+# int_handler()
+#
+# Called when the script is interrupted by an INT signal (e.g. Ctrl-C)
+#
+
+sub int_handler()
+{
+ if ($cwd) { chdir($cwd); }
+ info("Aborted.\n");
+ exit(1);
+}
+
+
+#
+# system_no_output(mode, parameters)
+#
+# Call an external program using PARAMETERS while suppressing output,
+# depending on the value of MODE:
+#
+# MODE & 1: suppress STDOUT
+# MODE & 2: suppress STDERR
+#
+# Return 0 on success, non-zero otherwise.
+#
+
+sub system_no_output($@)
+{
+ my $mode = shift;
+ my $result;
+ local *OLD_STDERR;
+ local *OLD_STDOUT;
+
+ # Save old stdout and stderr handles
+ ($mode & 1) && open(OLD_STDOUT, ">>&STDOUT");
+ ($mode & 2) && open(OLD_STDERR, ">>&STDERR");
+
+ # Redirect to /dev/null
+ ($mode & 1) && open(STDOUT, ">/dev/null");
+ ($mode & 2) && open(STDERR, ">/dev/null");
+
+ system(@_);
+ $result = $?;
+
+ # Close redirected handles
+ ($mode & 1) && close(STDOUT);
+ ($mode & 2) && close(STDERR);
+
+ # Restore old handles
+ ($mode & 1) && open(STDOUT, ">>&OLD_STDOUT");
+ ($mode & 2) && open(STDERR, ">>&OLD_STDERR");
+
+ return $result;
+}
+
+
+#
+# read_config(filename)
+#
+# Read configuration file FILENAME and return a reference to a hash containing
+# all valid key=value pairs found.
+#
+
+sub read_config($)
+{
+ my $filename = $_[0];
+ my %result;
+ my $key;
+ my $value;
+ local *HANDLE;
+
+ if (!open(HANDLE, "<$filename"))
+ {
+ warn("WARNING: cannot read configuration file $filename\n");
+ return undef;
+ }
+ while (<HANDLE>)
+ {
+ chomp;
+ # Skip comments
+ s/#.*//;
+ # Remove leading blanks
+ s/^\s+//;
+ # Remove trailing blanks
+ s/\s+$//;
+ next unless length;
+ ($key, $value) = split(/\s*=\s*/, $_, 2);
+ if (defined($key) && defined($value))
+ {
+ $result{$key} = $value;
+ }
+ else
+ {
+ warn("WARNING: malformed statement in line $. ".
+ "of configuration file $filename\n");
+ }
+ }
+ close(HANDLE);
+ return \%result;
+}
+
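The line handling in `read_config` (strip comments, trim blanks, split on the first `=`) can be sketched outside the patch in Python; the function name is hypothetical and the file I/O and warning paths are omitted:

```python
import re


def parse_config_line(line):
    """Parse one 'key = value' configuration line as read_config does.

    Returns (key, value), or None for blank, comment-only, or
    malformed lines (no '=' present).
    """
    line = re.sub(r"#.*", "", line)   # skip comments
    line = line.strip()               # remove leading/trailing blanks
    if not line:
        return None
    # Split on the first '=', absorbing surrounding whitespace
    parts = re.split(r"\s*=\s*", line, maxsplit=1)
    if len(parts) != 2:
        return None                   # malformed statement
    return parts[0], parts[1]


assert parse_config_line("genhtml_css_file = gcov.css  # style\n") == \
    ("genhtml_css_file", "gcov.css")
assert parse_config_line("   # comment only\n") is None
```

Splitting with a limit of 2 matches the Perl `split(/\s*=\s*/, $_, 2)`, so values may themselves contain `=`.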
+
+#
+# apply_config(REF)
+#
+# REF is a reference to a hash containing the following mapping:
+#
+# key_string => var_ref
+#
+# where KEY_STRING is a keyword and VAR_REF is a reference to an associated
+# variable. If the global configuration hash CONFIG contains a value for
+# keyword KEY_STRING, VAR_REF will be assigned the value for that keyword.
+#
+
+sub apply_config($)
+{
+ my $ref = $_[0];
+
+ foreach (keys(%{$ref}))
+ {
+ if (defined($config->{$_}))
+ {
+ ${$ref->{$_}} = $config->{$_};
+ }
+ }
+}
+
+
+#
+# get_exclusion_data(filename)
+#
+# Scan the specified source code file for exclusion markers and return
+# a reference to a hash mapping
+# line number -> 1
+# for all lines which should be excluded.
+#
+
+sub get_exclusion_data($)
+{
+ my ($filename) = @_;
+ my %list;
+ my $flag = 0;
+ local *HANDLE;
+
+ if (!open(HANDLE, "<$filename")) {
+ warn("WARNING: could not open $filename\n");
+ return undef;
+ }
+ while (<HANDLE>) {
+ if (/$EXCL_STOP/) {
+ $flag = 0;
+ } elsif (/$EXCL_START/) {
+ $flag = 1;
+ }
+ if (/$EXCL_LINE/ || $flag) {
+ $list{$.} = 1;
+ }
+ }
+ close(HANDLE);
+
+ if ($flag) {
+ warn("WARNING: unterminated exclusion section in $filename\n");
+ }
+
+ return \%list;
+}
+
+
+#
+# apply_exclusion_data(instr, graph)
+#
+# Remove lines from instr and graph data structures which are marked
+# for exclusion in the source code file.
+#
+# Return adjusted (instr, graph).
+#
+# graph : file name -> function data
+# function data : function name -> line data
+# line data : [ line1, line2, ... ]
+#
+# instr : filename -> line data
+# line data : [ line1, line2, ... ]
+#
+
+sub apply_exclusion_data($$)
+{
+ my ($instr, $graph) = @_;
+ my $filename;
+ my %excl_data;
+ my $excl_read_failed = 0;
+
+ # Collect exclusion marker data
+ foreach $filename (sort_uniq_lex(keys(%{$graph}), keys(%{$instr}))) {
+ my $excl = get_exclusion_data($filename);
+
+ # Skip and note if file could not be read
+ if (!defined($excl)) {
+ $excl_read_failed = 1;
+ next;
+ }
+
+ # Add to collection if there are markers
+ $excl_data{$filename} = $excl if (keys(%{$excl}) > 0);
+ }
+
+ # Warn if not all source files could be read
+ if ($excl_read_failed) {
+ warn("WARNING: some exclusion markers may be ignored\n");
+ }
+
+ # Skip if no markers were found
+ return ($instr, $graph) if (keys(%excl_data) == 0);
+
+ # Apply exclusion marker data to graph
+ foreach $filename (keys(%excl_data)) {
+ my $function_data = $graph->{$filename};
+ my $excl = $excl_data{$filename};
+ my $function;
+
+ next if (!defined($function_data));
+
+ foreach $function (keys(%{$function_data})) {
+ my $line_data = $function_data->{$function};
+ my $line;
+ my @new_data;
+
+ # To be consistent with the exclusion parser in the
+ # non-initial case, remove a function if its first
+ # line was excluded
+ if ($excl->{$line_data->[0]}) {
+ delete($function_data->{$function});
+ next;
+ }
+ # Copy only lines which are not excluded
+ foreach $line (@{$line_data}) {
+ push(@new_data, $line) if (!$excl->{$line});
+ }
+
+ # Store modified list
+ if (scalar(@new_data) > 0) {
+ $function_data->{$function} = \@new_data;
+ } else {
+ # All of this function was excluded
+ delete($function_data->{$function});
+ }
+ }
+
+ # Check if all functions of this file were excluded
+ if (keys(%{$function_data}) == 0) {
+ delete($graph->{$filename});
+ }
+ }
+
+ # Apply exclusion marker data to instr
+ foreach $filename (keys(%excl_data)) {
+ my $line_data = $instr->{$filename};
+ my $excl = $excl_data{$filename};
+ my $line;
+ my @new_data;
+
+ next if (!defined($line_data));
+
+ # Copy only lines which are not excluded
+ foreach $line (@{$line_data}) {
+ push(@new_data, $line) if (!$excl->{$line});
+ }
+
+ # Store modified list
+ if (scalar(@new_data) > 0) {
+ $instr->{$filename} = \@new_data;
+ } else {
+ # All of this file was excluded
+ delete($instr->{$filename});
+ }
+ }
+
+ return ($instr, $graph);
+}
+
+
+sub process_graphfile($$)
+{
+ my ($file, $dir) = @_;
+ my $graph_filename = $file;
+ my $graph_dir;
+ my $graph_basename;
+ my $source_dir;
+ my $base_dir;
+ my $graph;
+ my $instr;
+ my $filename;
+ local *INFO_HANDLE;
+
+ info("Processing %s\n", abs2rel($file, $dir));
+
+ # Get path to data file in absolute and normalized form (begins with /,
+ # contains no more ../ or ./)
+ $graph_filename = solve_relative_path($cwd, $graph_filename);
+
+ # Get directory and basename of data file
+ ($graph_dir, $graph_basename) = split_filename($graph_filename);
+
+ # Avoid files from .libs dirs
+ if ($compat_libtool && $graph_dir =~ m/(.*)\/\.libs$/) {
+ $source_dir = $1;
+ } else {
+ $source_dir = $graph_dir;
+ }
+
+ # Construct base_dir for current file
+ if ($base_directory)
+ {
+ $base_dir = $base_directory;
+ }
+ else
+ {
+ $base_dir = $source_dir;
+ }
+
+ if ($gcov_version < $GCOV_VERSION_3_4_0)
+ {
+ if (defined($compatibility) && $compatibility eq $COMPAT_HAMMER)
+ {
+ ($instr, $graph) = read_bbg($graph_filename, $base_dir);
+ }
+ else
+ {
+ ($instr, $graph) = read_bb($graph_filename, $base_dir);
+ }
+ }
+ else
+ {
+ ($instr, $graph) = read_gcno($graph_filename, $base_dir);
+ }
+
+ if (!$no_markers) {
+ # Apply exclusion marker data to graph file data
+ ($instr, $graph) = apply_exclusion_data($instr, $graph);
+ }
+
+ # Check whether we're writing to a single file
+ if ($output_filename)
+ {
+ if ($output_filename eq "-")
+ {
+ *INFO_HANDLE = *STDOUT;
+ }
+ else
+ {
+ # Append to output file
+ open(INFO_HANDLE, ">>$output_filename")
+ or die("ERROR: cannot write to ".
+ "$output_filename!\n");
+ }
+ }
+ else
+ {
+ # Open .info file for output
+ open(INFO_HANDLE, ">$graph_filename.info")
+ or die("ERROR: cannot create $graph_filename.info!\n");
+ }
+
+ # Write test name
+ printf(INFO_HANDLE "TN:%s\n", $test_name);
+ foreach $filename (sort(keys(%{$instr})))
+ {
+ my $funcdata = $graph->{$filename};
+ my $line;
+ my $linedata;
+
+ print(INFO_HANDLE "SF:$filename\n");
+
+ if (defined($funcdata)) {
+ my @functions = sort {$funcdata->{$a}->[0] <=>
+ $funcdata->{$b}->[0]}
+ keys(%{$funcdata});
+ my $func;
+
+ # Gather list of instrumented lines and functions
+ foreach $func (@functions) {
+ $linedata = $funcdata->{$func};
+
+ # Print function name and starting line
+ print(INFO_HANDLE "FN:".$linedata->[0].
+ ",".filter_fn_name($func)."\n");
+ }
+ # Print zero function coverage data
+ foreach $func (@functions) {
+ print(INFO_HANDLE "FNDA:0,".
+ filter_fn_name($func)."\n");
+ }
+ # Print function summary
+ print(INFO_HANDLE "FNF:".scalar(@functions)."\n");
+ print(INFO_HANDLE "FNH:0\n");
+ }
+ # Print zero line coverage data
+ foreach $line (@{$instr->{$filename}}) {
+ print(INFO_HANDLE "DA:$line,0\n");
+ }
+ # Print line summary
+ print(INFO_HANDLE "LF:".scalar(@{$instr->{$filename}})."\n");
+ print(INFO_HANDLE "LH:0\n");
+
+ print(INFO_HANDLE "end_of_record\n");
+ }
+ if (!($output_filename && ($output_filename eq "-")))
+ {
+ close(INFO_HANDLE);
+ }
+}
+
+sub filter_fn_name($)
+{
+ my ($fn) = @_;
+
+ # Remove characters used internally as function name delimiters
+ $fn =~ s/[,=]/_/g;
+
+ return $fn;
+}
+
+sub warn_handler($)
+{
+ my ($msg) = @_;
+
+ warn("$tool_name: $msg");
+}
+
+sub die_handler($)
+{
+ my ($msg) = @_;
+
+ die("$tool_name: $msg");
+}
+
+
+#
+# graph_error(filename, message)
+#
+# Print message about error in graph file. If ignore_graph_error is set, return.
+# Otherwise abort.
+#
+
+sub graph_error($$)
+{
+ my ($filename, $msg) = @_;
+
+ if ($ignore[$ERROR_GRAPH]) {
+ warn("WARNING: $filename: $msg - skipping\n");
+ return;
+ }
+ die("ERROR: $filename: $msg\n");
+}
+
+#
+# graph_expect(description)
+#
+# If debug is set to a non-zero value, print the specified description of what
+# is expected to be read next from the graph file.
+#
+
+sub graph_expect($)
+{
+ my ($msg) = @_;
+
+ if (!$debug || !defined($msg)) {
+ return;
+ }
+
+ print(STDERR "DEBUG: expecting $msg\n");
+}
+
+#
+# graph_read(handle, bytes[, description])
+#
+# Read and return the specified number of bytes from handle. Return undef
+# if the number of bytes could not be read.
+#
+
+sub graph_read(*$;$)
+{
+ my ($handle, $length, $desc) = @_;
+ my $data;
+ my $result;
+
+ graph_expect($desc);
+ $result = read($handle, $data, $length);
+ if ($debug) {
+ my $ascii = "";
+ my $hex = "";
+ my $i;
+
+ print(STDERR "DEBUG: read($length)=$result: ");
+ for ($i = 0; $i < length($data); $i++) {
+ my $c = substr($data, $i, 1);
+ my $n = ord($c);
+
+ $hex .= sprintf("%02x ", $n);
+ if ($n >= 32 && $n <= 127) {
+ $ascii .= $c;
+ } else {
+ $ascii .= ".";
+ }
+ }
+ print(STDERR "$hex |$ascii|");
+ print(STDERR "\n");
+ }
+ if ($result != $length) {
+ return undef;
+ }
+ return $data;
+}
+
+#
+# graph_skip(handle, bytes[, description])
+#
+# Read and discard the specified number of bytes from handle. Return non-zero
+# if bytes could be read, zero otherwise.
+#
+
+sub graph_skip(*$;$)
+{
+ my ($handle, $length, $desc) = @_;
+
+ if (defined(graph_read($handle, $length, $desc))) {
+ return 1;
+ }
+ return 0;
+}
+
+#
+# sort_uniq(list)
+#
+# Return list in numerically ascending order and without duplicate entries.
+#
+
+sub sort_uniq(@)
+{
+ my (@list) = @_;
+ my %hash;
+
+ foreach (@list) {
+ $hash{$_} = 1;
+ }
+ return sort { $a <=> $b } keys(%hash);
+}
+
+#
+# sort_uniq_lex(list)
+#
+# Return list in lexically ascending order and without duplicate entries.
+#
+
+sub sort_uniq_lex(@)
+{
+ my (@list) = @_;
+ my %hash;
+
+ foreach (@list) {
+ $hash{$_} = 1;
+ }
+ return sort keys(%hash);
+}
+
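The two dedup-and-sort helpers above (hash the values, then sort the keys numerically or lexically) reduce to one line each in an illustrative Python sketch outside the patch:

```python
def sort_uniq(values):
    """Numerically ascending order without duplicates (cf. sort_uniq)."""
    return sorted(set(values))


def sort_uniq_lex(values):
    """Lexically ascending order without duplicates (cf. sort_uniq_lex)."""
    return sorted(set(values), key=str)


assert sort_uniq([3, 1, 3, 2]) == [1, 2, 3]
assert sort_uniq_lex(["b.c", "a.c", "b.c"]) == ["a.c", "b.c"]
```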
+#
+# graph_cleanup(graph)
+#
+# Remove entries for functions with no lines. Remove duplicate line numbers.
+# Sort list of line numbers numerically ascending.
+#
+
+sub graph_cleanup($)
+{
+ my ($graph) = @_;
+ my $filename;
+
+ foreach $filename (keys(%{$graph})) {
+ my $per_file = $graph->{$filename};
+ my $function;
+
+ foreach $function (keys(%{$per_file})) {
+ my $lines = $per_file->{$function};
+
+ if (scalar(@$lines) == 0) {
+ # Remove empty function
+ delete($per_file->{$function});
+ next;
+ }
+ # Normalize list
+ $per_file->{$function} = [ sort_uniq(@$lines) ];
+ }
+ if (scalar(keys(%{$per_file})) == 0) {
+ # Remove empty file
+ delete($graph->{$filename});
+ }
+ }
+}
+
+#
+# graph_find_base(bb)
+#
+# Try to identify the filename which is the base source file for the
+# specified bb data.
+#
+
+sub graph_find_base($)
+{
+ my ($bb) = @_;
+ my %file_count;
+ my $basefile;
+ my $file;
+ my $func;
+ my $filedata;
+ my $count;
+ my $num;
+
+ # Identify base name for this bb data.
+ foreach $func (keys(%{$bb})) {
+ $filedata = $bb->{$func};
+
+ foreach $file (keys(%{$filedata})) {
+ $count = $file_count{$file};
+
+ # Count file occurrence
+ $file_count{$file} = defined($count) ? $count + 1 : 1;
+ }
+ }
+ $count = 0;
+ $num = 0;
+ foreach $file (keys(%file_count)) {
+ if ($file_count{$file} > $count) {
+ # The file that contains code for the most functions
+ # is likely the base file
+ $count = $file_count{$file};
+ $num = 1;
+ $basefile = $file;
+ } elsif ($file_count{$file} == $count) {
+ # If more than one file could be the basefile, we
+ # don't have a basefile
+ $basefile = undef;
+ }
+ }
+
+ return $basefile;
+}
+
+#
+# graph_from_bb(bb, fileorder, bb_filename)
+#
+# Convert data from bb to the graph format and list of instrumented lines.
+# Returns (instr, graph).
+#
+# bb : function name -> file data
+# : undef -> file order
+# file data : filename -> line data
+# line data : [ line1, line2, ... ]
+#
+# file order : function name -> [ filename1, filename2, ... ]
+#
+# graph : file name -> function data
+# function data : function name -> line data
+# line data : [ line1, line2, ... ]
+#
+# instr : filename -> line data
+# line data : [ line1, line2, ... ]
+#
+
+sub graph_from_bb($$$)
+{
+ my ($bb, $fileorder, $bb_filename) = @_;
+ my $graph = {};
+ my $instr = {};
+ my $basefile;
+ my $file;
+ my $func;
+ my $filedata;
+ my $linedata;
+ my $order;
+
+ $basefile = graph_find_base($bb);
+ # Create graph structure
+ foreach $func (keys(%{$bb})) {
+ $filedata = $bb->{$func};
+ $order = $fileorder->{$func};
+
+ # Account for lines in functions
+ if (defined($basefile) && defined($filedata->{$basefile})) {
+ # If the basefile contributes to this function,
+ # account this function to the basefile.
+ $graph->{$basefile}->{$func} = $filedata->{$basefile};
+ } else {
+ # If the basefile does not contribute to this function,
+ # account this function to the first file contributing
+ # lines.
+ $graph->{$order->[0]}->{$func} =
+ $filedata->{$order->[0]};
+ }
+
+ foreach $file (keys(%{$filedata})) {
+ # Account for instrumented lines
+ $linedata = $filedata->{$file};
+ push(@{$instr->{$file}}, @$linedata);
+ }
+ }
+ # Clean up array of instrumented lines
+ foreach $file (keys(%{$instr})) {
+ $instr->{$file} = [ sort_uniq(@{$instr->{$file}}) ];
+ }
+
+ return ($instr, $graph);
+}
+
+#
+# graph_add_order(fileorder, function, filename)
+#
+# Add an entry for filename to the fileorder data set for function.
+#
+
+sub graph_add_order($$$)
+{
+ my ($fileorder, $function, $filename) = @_;
+ my $item;
+ my $list;
+
+ $list = $fileorder->{$function};
+ foreach $item (@$list) {
+ if ($item eq $filename) {
+ return;
+ }
+ }
+ push(@$list, $filename);
+ $fileorder->{$function} = $list;
+}
+#
+# read_bb_word(handle[, description])
+#
+# Read and return a word in .bb format from handle.
+#
+
+sub read_bb_word(*;$)
+{
+ my ($handle, $desc) = @_;
+
+ return graph_read($handle, 4, $desc);
+}
+
+#
+# read_bb_value(handle[, description])
+#
+# Read a word in .bb format from handle and return the word and its integer
+# value.
+#
+
+sub read_bb_value(*;$)
+{
+ my ($handle, $desc) = @_;
+ my $word;
+
+ $word = read_bb_word($handle, $desc);
+ return undef if (!defined($word));
+
+ return ($word, unpack("V", $word));
+}
+
+#
+# read_bb_string(handle, delimiter)
+#
+# Read and return a string in .bb format from handle up to the specified
+# delimiter value.
+#
+
+sub read_bb_string(*$)
+{
+ my ($handle, $delimiter) = @_;
+ my $word;
+ my $value;
+ my $string = "";
+
+ graph_expect("string");
+ do {
+ ($word, $value) = read_bb_value($handle, "string or delimiter");
+ return undef if (!defined($value));
+ if ($value != $delimiter) {
+ $string .= $word;
+ }
+ } while ($value != $delimiter);
+ $string =~ s/\0//g;
+
+ return $string;
+}
+
+#
+# read_bb(filename, base_dir)
+#
+# Read the contents of the specified .bb file and return (instr, graph), where:
+#
+# instr : filename -> line data
+# line data : [ line1, line2, ... ]
+#
+# graph : filename -> file_data
+# file_data : function name -> line_data
+# line_data : [ line1, line2, ... ]
+#
+# Relative filenames are converted to absolute form using base_dir as
+# base directory. See the gcov info pages of gcc 2.95 for a description of
+# the .bb file format.
+#
+
+sub read_bb($$)
+{
+ my ($bb_filename, $base) = @_;
+ my $minus_one = 0x80000001;
+ my $minus_two = 0x80000002;
+ my $value;
+ my $filename;
+ my $function;
+ my $bb = {};
+ my $fileorder = {};
+ my $instr;
+ my $graph;
+ local *HANDLE;
+
+ open(HANDLE, "<$bb_filename") or goto open_error;
+ binmode(HANDLE);
+ while (!eof(HANDLE)) {
+ $value = read_bb_value(*HANDLE, "data word");
+ goto incomplete if (!defined($value));
+ if ($value == $minus_one) {
+ # Source file name
+ graph_expect("filename");
+ $filename = read_bb_string(*HANDLE, $minus_one);
+ goto incomplete if (!defined($filename));
+ if ($filename ne "") {
+ $filename = solve_relative_path($base,
+ $filename);
+ }
+ } elsif ($value == $minus_two) {
+ # Function name
+ graph_expect("function name");
+ $function = read_bb_string(*HANDLE, $minus_two);
+ goto incomplete if (!defined($function));
+ } elsif ($value > 0) {
+ # Line number
+ if (!defined($filename) || !defined($function)) {
+ warn("WARNING: unassigned line number ".
+ "$value\n");
+ next;
+ }
+ push(@{$bb->{$function}->{$filename}}, $value);
+ graph_add_order($fileorder, $function, $filename);
+ }
+ }
+ close(HANDLE);
+ ($instr, $graph) = graph_from_bb($bb, $fileorder, $bb_filename);
+ graph_cleanup($graph);
+
+ return ($instr, $graph);
+
+open_error:
+ graph_error($bb_filename, "could not open file");
+ return undef;
+incomplete:
+ graph_error($bb_filename, "reached unexpected end of file");
+ return undef;
+}
+
+#
+# read_bbg_word(handle[, description])
+#
+# Read and return a word in .bbg format.
+#
+
+sub read_bbg_word(*;$)
+{
+ my ($handle, $desc) = @_;
+
+ return graph_read($handle, 4, $desc);
+}
+
+#
+# read_bbg_value(handle[, description])
+#
+# Read a word in .bbg format from handle and return its integer value.
+#
+
+sub read_bbg_value(*;$)
+{
+ my ($handle, $desc) = @_;
+ my $word;
+
+ $word = read_bbg_word($handle, $desc);
+ return undef if (!defined($word));
+
+ return unpack("N", $word);
+}
+
+#
+# read_bbg_string(handle)
+#
+# Read and return a string in .bbg format.
+#
+
+sub read_bbg_string(*)
+{
+ my ($handle) = @_;
+ my $length;
+ my $string;
+
+ graph_expect("string");
+ # Read string length
+ $length = read_bbg_value($handle, "string length");
+ return undef if (!defined($length));
+ if ($length == 0) {
+ return "";
+ }
+ # Read string
+ $string = graph_read($handle, $length, "string");
+ return undef if (!defined($string));
+ # Skip padding
+ graph_skip($handle, 4 - $length % 4, "string padding") or return undef;
+
+ return $string;
+}
+
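The .bbg string layout that `read_bbg_string` consumes (a big-endian length word, the string bytes, then `4 - length % 4` padding bytes, so even a word-aligned string is followed by a full padding word) can be sketched in Python as a standalone illustration; the function name is hypothetical:

```python
import struct


def parse_bbg_string(buf, offset=0):
    """Parse one .bbg string from buf; return (string, next_offset).

    Layout: 4-byte big-endian length, string data, then 4 - length % 4
    padding bytes (1..4 bytes -- a multiple-of-4 length still gets a
    full padding word). A zero length means the empty string with no
    data or padding, matching the Perl reader.
    """
    (length,) = struct.unpack_from(">I", buf, offset)
    offset += 4
    if length == 0:
        return "", offset
    data = buf[offset:offset + length].replace(b"\0", b"").decode()
    offset += length + (4 - length % 4)
    return data, offset


buf = struct.pack(">I", 5) + b"gauss" + b"\0\0\0"
s, end = parse_bbg_string(buf)
assert s == "gauss" and end == len(buf)
```

The padding rule explains the `4 - $length % 4` expression above: it always skips at least one byte, which covers the string's terminating NUL.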
+#
+# read_bbg_lines_record(handle, bbg_filename, bb, fileorder, filename,
+# function, base)
+#
+# Read a bbg format lines record from handle and add the relevant data to
+# bb and fileorder. Return filename on success, undef on error.
+#
+
+sub read_bbg_lines_record(*$$$$$$)
+{
+ my ($handle, $bbg_filename, $bb, $fileorder, $filename, $function,
+ $base) = @_;
+ my $string;
+ my $lineno;
+
+ graph_expect("lines record");
+ # Skip basic block index
+ graph_skip($handle, 4, "basic block index") or return undef;
+ while (1) {
+ # Read line number
+ $lineno = read_bbg_value($handle, "line number");
+ return undef if (!defined($lineno));
+ if ($lineno == 0) {
+ # Got a marker for a new filename
+ graph_expect("filename");
+ $string = read_bbg_string($handle);
+ return undef if (!defined($string));
+ # Check for end of record
+ if ($string eq "") {
+ return $filename;
+ }
+ $filename = solve_relative_path($base, $string);
+ next;
+ }
+ # Got an actual line number
+ if (!defined($filename)) {
+ warn("WARNING: unassigned line number in ".
+ "$bbg_filename\n");
+ next;
+ }
+ push(@{$bb->{$function}->{$filename}}, $lineno);
+ graph_add_order($fileorder, $function, $filename);
+ }
+}
+
+#
+# read_bbg(filename, base_dir)
+#
+# Read the contents of the specified .bbg file and return the following mapping:
+# graph: filename -> file_data
+# file_data: function name -> line_data
+# line_data: [ line1, line2, ... ]
+#
+# Relative filenames are converted to absolute form using base_dir as
+# base directory. See the gcov-io.h file in the SLES 9 gcc 3.3.3 source code
+# for a description of the .bbg format.
+#
+
+sub read_bbg($$)
+{
+ my ($bbg_filename, $base) = @_;
+ my $file_magic = 0x67626267;
+ my $tag_function = 0x01000000;
+ my $tag_lines = 0x01450000;
+ my $word;
+ my $tag;
+ my $length;
+ my $function;
+ my $filename;
+ my $bb = {};
+ my $fileorder = {};
+ my $instr;
+ my $graph;
+ local *HANDLE;
+
+ open(HANDLE, "<$bbg_filename") or goto open_error;
+ binmode(HANDLE);
+ # Read magic
+ $word = read_bbg_value(*HANDLE, "file magic");
+ goto incomplete if (!defined($word));
+ # Check magic
+ if ($word != $file_magic) {
+ goto magic_error;
+ }
+ # Skip version
+ graph_skip(*HANDLE, 4, "version") or goto incomplete;
+ while (!eof(HANDLE)) {
+ # Read record tag
+ $tag = read_bbg_value(*HANDLE, "record tag");
+ goto incomplete if (!defined($tag));
+ # Read record length
+ $length = read_bbg_value(*HANDLE, "record length");
+ goto incomplete if (!defined($length));
+ if ($tag == $tag_function) {
+ graph_expect("function record");
+ # Read function name
+ graph_expect("function name");
+ $function = read_bbg_string(*HANDLE);
+ goto incomplete if (!defined($function));
+ $filename = undef;
+ # Skip function checksum
+ graph_skip(*HANDLE, 4, "function checksum")
+ or goto incomplete;
+ } elsif ($tag == $tag_lines) {
+ # Read lines record
+ $filename = read_bbg_lines_record(*HANDLE, $bbg_filename,
+ $bb, $fileorder, $filename,
+ $function, $base);
+ goto incomplete if (!defined($filename));
+ } else {
+ # Skip record contents
+ graph_skip(*HANDLE, $length, "unhandled record")
+ or goto incomplete;
+ }
+ }
+ close(HANDLE);
+ ($instr, $graph) = graph_from_bb($bb, $fileorder, $bbg_filename);
+ graph_cleanup($graph);
+
+ return ($instr, $graph);
+
+open_error:
+ graph_error($bbg_filename, "could not open file");
+ return undef;
+incomplete:
+ graph_error($bbg_filename, "reached unexpected end of file");
+ return undef;
+magic_error:
+ graph_error($bbg_filename, "found unrecognized bbg file magic");
+ return undef;
+}
+
+#
+# read_gcno_word(handle[, description])
+#
+# Read and return a word in .gcno format.
+#
+
+sub read_gcno_word(*;$)
+{
+ my ($handle, $desc) = @_;
+
+ return graph_read($handle, 4, $desc);
+}
+
+#
+# read_gcno_value(handle, big_endian[, description])
+#
+# Read a word in .gcno format from handle and return its integer value
+# according to the specified endianness.
+#
+
+sub read_gcno_value(*$;$)
+{
+ my ($handle, $big_endian, $desc) = @_;
+ my $word;
+
+ $word = read_gcno_word($handle, $desc);
+ return undef if (!defined($word));
+ if ($big_endian) {
+ return unpack("N", $word);
+ } else {
+ return unpack("V", $word);
+ }
+}
+
+#
+# read_gcno_string(handle, big_endian)
+#
+# Read and return a string in .gcno format.
+#
+
+sub read_gcno_string(*$)
+{
+ my ($handle, $big_endian) = @_;
+ my $length;
+ my $string;
+
+ graph_expect("string");
+ # Read string length
+ $length = read_gcno_value($handle, $big_endian, "string length");
+ return undef if (!defined($length));
+ if ($length == 0) {
+ return "";
+ }
+ $length *= 4;
+ # Read string
+ $string = graph_read($handle, $length, "string and padding");
+ return undef if (!defined($string));
+ $string =~ s/\0//g;
+
+ return $string;
+}
+
+#
+# read_gcno_lines_record(handle, gcno_filename, bb, fileorder, filename,
+# function, base, big_endian)
+#
+# Read a gcno format lines record from handle and add the relevant data to
+# bb and fileorder. Return filename on success, undef on error.
+#
+
+sub read_gcno_lines_record(*$$$$$$$)
+{
+ my ($handle, $gcno_filename, $bb, $fileorder, $filename, $function,
+ $base, $big_endian) = @_;
+ my $string;
+ my $lineno;
+
+ graph_expect("lines record");
+ # Skip basic block index
+ graph_skip($handle, 4, "basic block index") or return undef;
+ while (1) {
+ # Read line number
+ $lineno = read_gcno_value($handle, $big_endian, "line number");
+ return undef if (!defined($lineno));
+ if ($lineno == 0) {
+ # Got a marker for a new filename
+ graph_expect("filename");
+ $string = read_gcno_string($handle, $big_endian);
+ return undef if (!defined($string));
+ # Check for end of record
+ if ($string eq "") {
+ return $filename;
+ }
+ $filename = solve_relative_path($base, $string);
+ next;
+ }
+ # Got an actual line number
+ if (!defined($filename)) {
+ warn("WARNING: unassigned line number in ".
+ "$gcno_filename\n");
+ next;
+ }
+ # Add to list
+ push(@{$bb->{$function}->{$filename}}, $lineno);
+ graph_add_order($fileorder, $function, $filename);
+ }
+}
+
+#
+# read_gcno_function_record(handle, bb, fileorder, base, big_endian)
+#
+# Read a gcno format function record from handle and add the relevant data
+# to bb and fileorder. Return (filename, function) on success, undef on error.
+#
+
+sub read_gcno_function_record(*$$$$)
+{
+ my ($handle, $bb, $fileorder, $base, $big_endian) = @_;
+ my $filename;
+ my $function;
+ my $lineno;
+ my $lines;
+
+ graph_expect("function record");
+ # Skip ident and checksum
+ graph_skip($handle, 8, "function ident and checksum") or return undef;
+ # Read function name
+ graph_expect("function name");
+ $function = read_gcno_string($handle, $big_endian);
+ return undef if (!defined($function));
+ # Read filename
+ graph_expect("filename");
+ $filename = read_gcno_string($handle, $big_endian);
+ return undef if (!defined($filename));
+ $filename = solve_relative_path($base, $filename);
+ # Read first line number
+ $lineno = read_gcno_value($handle, $big_endian, "initial line number");
+ return undef if (!defined($lineno));
+ # Add to list
+ push(@{$bb->{$function}->{$filename}}, $lineno);
+ graph_add_order($fileorder, $function, $filename);
+
+ return ($filename, $function);
+}
+
+#
+# read_gcno(filename, base_dir)
+#
+# Read the contents of the specified .gcno file and return the following
+# mapping:
+# graph: filename -> file_data
+# file_data: function name -> line_data
+# line_data: [ line1, line2, ... ]
+#
+# Relative filenames are converted to absolute form using base_dir as
+# base directory. See the gcov-io.h file in the gcc 3.3 source code
+# for a description of the .gcno format.
+#
+
+sub read_gcno($$)
+{
+ my ($gcno_filename, $base) = @_;
+ my $file_magic = 0x67636e6f;
+ my $tag_function = 0x01000000;
+ my $tag_lines = 0x01450000;
+ my $big_endian;
+ my $word;
+ my $tag;
+ my $length;
+ my $filename;
+ my $function;
+ my $bb = {};
+ my $fileorder = {};
+ my $instr;
+ my $graph;
+ local *HANDLE;
+
+ open(HANDLE, "<$gcno_filename") or goto open_error;
+ binmode(HANDLE);
+ # Read magic
+ $word = read_gcno_word(*HANDLE, "file magic");
+ goto incomplete if (!defined($word));
+ # Determine file endianness
+ if (unpack("N", $word) == $file_magic) {
+ $big_endian = 1;
+ } elsif (unpack("V", $word) == $file_magic) {
+ $big_endian = 0;
+ } else {
+ goto magic_error;
+ }
+ # Skip version and stamp
+ graph_skip(*HANDLE, 8, "version and stamp") or goto incomplete;
+ while (!eof(HANDLE)) {
+ my $next_pos;
+ my $curr_pos;
+
+ # Read record tag
+ $tag = read_gcno_value(*HANDLE, $big_endian, "record tag");
+ goto incomplete if (!defined($tag));
+ # Read record length
+ $length = read_gcno_value(*HANDLE, $big_endian,
+ "record length");
+ goto incomplete if (!defined($length));
+ # Convert length to bytes
+ $length *= 4;
+ # Calculate start of next record
+ $next_pos = tell(HANDLE);
+ goto tell_error if ($next_pos == -1);
+ $next_pos += $length;
+ # Process record
+ if ($tag == $tag_function) {
+ ($filename, $function) = read_gcno_function_record(
+ *HANDLE, $bb, $fileorder, $base, $big_endian);
+ goto incomplete if (!defined($function));
+ } elsif ($tag == $tag_lines) {
+ # Read lines record
+ $filename = read_gcno_lines_record(*HANDLE,
+ $gcno_filename, $bb, $fileorder,
+ $filename, $function, $base,
+ $big_endian);
+ goto incomplete if (!defined($filename));
+ } else {
+ # Skip record contents
+ graph_skip(*HANDLE, $length, "unhandled record")
+ or goto incomplete;
+ }
+ # Ensure that we are at the start of the next record
+ $curr_pos = tell(HANDLE);
+ goto tell_error if ($curr_pos == -1);
+ next if ($curr_pos == $next_pos);
+ goto record_error if ($curr_pos > $next_pos);
+ graph_skip(*HANDLE, $next_pos - $curr_pos,
+ "unhandled record content")
+ or goto incomplete;
+ }
+ close(HANDLE);
+ ($instr, $graph) = graph_from_bb($bb, $fileorder, $gcno_filename);
+ graph_cleanup($graph);
+
+ return ($instr, $graph);
+
+open_error:
+ graph_error($gcno_filename, "could not open file");
+ return undef;
+incomplete:
+ graph_error($gcno_filename, "reached unexpected end of file");
+ return undef;
+magic_error:
+ graph_error($gcno_filename, "found unrecognized gcno file magic");
+ return undef;
+tell_error:
+ graph_error($gcno_filename, "could not determine file position");
+ return undef;
+record_error:
+ graph_error($gcno_filename, "found unrecognized record format");
+ return undef;
+}
+
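The endianness probe at the top of `read_gcno` (read the magic word both ways and let whichever matches decide how all later words are unpacked) can be sketched outside the patch in Python; the function name is hypothetical:

```python
import struct

GCNO_MAGIC = 0x67636e6f  # the bytes "gcno" read as a big-endian word


def detect_gcno_endianness(first_word):
    """Return a struct byte-order prefix ('>' big, '<' little), or None.

    Mirrors read_gcno: try unpack("N", ...) first, then unpack("V", ...);
    neither matching corresponds to the magic_error path.
    """
    if struct.unpack(">I", first_word)[0] == GCNO_MAGIC:
        return ">"
    if struct.unpack("<I", first_word)[0] == GCNO_MAGIC:
        return "<"
    return None


assert detect_gcno_endianness(b"gcno") == ">"
assert detect_gcno_endianness(b"oncg") == "<"
assert detect_gcno_endianness(b"zzzz") is None
```

Storing the detected order once and reusing it for every subsequent word, as `read_gcno_value` does with its `$big_endian` argument, lets the same reader handle .gcno files produced on either kind of host.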
+sub debug($)
+{
+ my ($msg) = @_;
+
+ return if (!$debug);
+ print(STDERR "DEBUG: $msg");
+}
+
+#
+# get_gcov_capabilities
+#
+# Determine the list of available gcov options.
+#
+
+sub get_gcov_capabilities()
+{
+ my $help = `$gcov_tool --help`;
+ my %capabilities;
+
+ foreach (split(/\n/, $help)) {
+ next if (!/--(\S+)/);
+ next if ($1 eq 'help');
+ next if ($1 eq 'version');
+ next if ($1 eq 'object-directory');
+
+ $capabilities{$1} = 1;
+ debug("gcov has capability '$1'\n");
+ }
+
+ return \%capabilities;
+}
diff --git a/chromium/third_party/lcov-1.9/bin/genpng b/chromium/third_party/lcov-1.9/bin/genpng
new file mode 100755
index 00000000000..7fe9dfe10ca
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/genpng
@@ -0,0 +1,384 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# genpng
+#
+# This script creates an overview PNG image of a source code file by
+# representing each source code character by a single pixel.
+#
+# Note that the Perl module GD.pm is required for this script to work.
+# It may be obtained from http://www.cpan.org
+#
+# History:
+# 2002-08-26: created by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+#
+
+use strict;
+use File::Basename;
+use Getopt::Long;
+
+
+# Constants
+our $lcov_version = 'LCOV version 1.9';
+our $lcov_url = "http://ltp.sourceforge.net/coverage/lcov.php";
+our $tool_name = basename($0);
+
+
+# Prototypes
+sub gen_png($$$@);
+sub check_and_load_module($);
+sub genpng_print_usage(*);
+sub genpng_process_file($$$$);
+sub genpng_warn_handler($);
+sub genpng_die_handler($);
+
+
+#
+# Code entry point
+#
+
+# Prettify version string
+$lcov_version =~ s/\$\s*Revision\s*:?\s*(\S+)\s*\$/$1/;
+
+# Check whether required module GD.pm is installed
+if (check_and_load_module("GD"))
+{
+ # Note: cannot use die() to print this message because inserting this
+ # code into another script via do() would not fail as required!
+ print(STDERR <<END_OF_TEXT)
+ERROR: required module GD.pm not found on this system (see www.cpan.org).
+END_OF_TEXT
+ ;
+ exit(2);
+}
+
+# Check whether we're called from the command line or from another script
+if (!caller)
+{
+ my $filename;
+ my $tab_size = 4;
+ my $width = 80;
+ my $out_filename;
+ my $help;
+ my $version;
+
+ $SIG{__WARN__} = \&genpng_warn_handler;
+ $SIG{__DIE__} = \&genpng_die_handler;
+
+ # Parse command line options
+ if (!GetOptions("tab-size=i" => \$tab_size,
+ "width=i" => \$width,
+ "output-filename=s" => \$out_filename,
+ "help" => \$help,
+ "version" => \$version))
+ {
+ print(STDERR "Use $tool_name --help to get usage ".
+ "information\n");
+ exit(1);
+ }
+
+ $filename = $ARGV[0];
+
+ # Check for help flag
+ if ($help)
+ {
+ genpng_print_usage(*STDOUT);
+ exit(0);
+ }
+
+ # Check for version flag
+ if ($version)
+ {
+ print("$tool_name: $lcov_version\n");
+ exit(0);
+ }
+
+ # Check options
+ if (!$filename)
+ {
+ die("No filename specified\n");
+ }
+
+ # Check for output filename
+ if (!$out_filename)
+ {
+ $out_filename = "$filename.png";
+ }
+
+ genpng_process_file($filename, $out_filename, $width, $tab_size);
+ exit(0);
+}
+
+
+#
+# genpng_print_usage(handle)
+#
+# Write out command line usage information to given filehandle.
+#
+
+sub genpng_print_usage(*)
+{
+ local *HANDLE = $_[0];
+
+ print(HANDLE <<END_OF_USAGE)
+Usage: $tool_name [OPTIONS] SOURCEFILE
+
+Create an overview image for a given source code file of either plain text
+or .gcov file format.
+
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -t, --tab-size TABSIZE Use TABSIZE spaces in place of tab
+ -w, --width WIDTH Set width of output image to WIDTH pixels
+ -o, --output-filename FILENAME Write image to FILENAME
+
+For more information see: $lcov_url
+END_OF_USAGE
+ ;
+}
+
+
+#
+# check_and_load_module(module_name)
+#
+# Check whether a module by the given name is installed on this system
+# and make it known to the interpreter if available. Return a false value
+# (the empty string) if it is installed, an error message otherwise.
+#
+
+sub check_and_load_module($)
+{
+ eval("use $_[0];");
+ return $@;
+}
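`check_and_load_module` leans on `eval` so a missing GD.pm produces a message instead of a compile-time abort. The same probe looks like this in Python, sketched with `importlib` (illustrative only):

```python
import importlib

def check_and_load_module(name):
    """Return None if the module can be imported, an error message otherwise."""
    try:
        importlib.import_module(name)
        return None
    except ImportError as err:
        return str(err)
```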
+
+
+#
+# genpng_process_file(filename, out_filename, width, tab_size)
+#
+
+sub genpng_process_file($$$$)
+{
+ my $filename = $_[0];
+ my $out_filename = $_[1];
+ my $width = $_[2];
+ my $tab_size = $_[3];
+ local *HANDLE;
+ my @source;
+
+ open(HANDLE, "<$filename")
+ or die("ERROR: cannot open $filename!\n");
+
+ # Check for .gcov filename extension
+ if ($filename =~ /^(.*)\.gcov$/)
+ {
+ # Assume gcov text format
+ while (<HANDLE>)
+ {
+ if (/^\t\t(.*)$/)
+ {
+ # Uninstrumented line
+ push(@source, ":$1");
+ }
+ elsif (/^ ###### (.*)$/)
+ {
+ # Line with zero execution count
+ push(@source, "0:$1");
+ }
+ elsif (/^( *)(\d*) (.*)$/)
+ {
+ # Line with positive execution count
+ push(@source, "$2:$3");
+ }
+ }
+ }
+ else
+ {
+ # Plain text file
+ while (<HANDLE>) { push(@source, ":$_"); }
+ }
+ close(HANDLE);
+
+ gen_png($out_filename, $width, $tab_size, @source);
+}
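For reference, the three branches above recognize the old gcov text layout: fully tab-indented lines are uninstrumented, `######` marks an execution count of zero, and a leading number is a positive count. A hedged Python sketch of the same classification (the exact whitespace in real gcov output may differ from the patterns used here):

```python
import re

def parse_gcov_line(line):
    """Map one line of old-style gcov text output to 'count:source'."""
    m = re.match(r"^\t\t(.*)$", line)
    if m:                       # uninstrumented line
        return ":" + m.group(1)
    m = re.match(r"^\s*######\s(.*)$", line)
    if m:                       # instrumented, never executed
        return "0:" + m.group(1)
    m = re.match(r"^( *)(\d*) (.*)$", line)
    if m:                       # executed line with a positive count
        return m.group(2) + ":" + m.group(3)
    return None
```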
+
+
+#
+# gen_png(filename, width, tab_size, source)
+#
+# Write an overview PNG file to FILENAME. Source code is defined by SOURCE
+# which is a list of lines <count>:<source code> per source code line.
+# The output image will be made up of one pixel per character of source,
+# coloring will be done according to execution counts. WIDTH defines the
+# image width. TAB_SIZE specifies the number of spaces to use as replacement
+# string for tabulator signs in source code text.
+#
+# Die on error.
+#
+
+sub gen_png($$$@)
+{
+ my $filename = shift(@_); # Filename for PNG file
+ my $overview_width = shift(@_); # Image width in pixels
+ my $tab_size = shift(@_); # Replacement string for tab signs
+ my @source = @_; # Source code as remaining arguments
+ my $height = scalar(@source); # Height as defined by source size
+ my $overview; # Source code overview image data
+ my $col_plain_back; # Color for overview background
+ my $col_plain_text; # Color for uninstrumented text
+ my $col_cov_back; # Color for background of covered lines
+ my $col_cov_text; # Color for text of covered lines
+ my $col_nocov_back; # Color for background of lines which
+ # were not covered (count == 0)
+ my $col_nocov_text; # Color for text of lines which were not
+ # covered (count == 0)
+ my $col_hi_back; # Color for background of highlighted lines
+ my $col_hi_text; # Color for text of highlighted lines
+ my $line; # Current line during iteration
+ my $row = 0; # Current row number during iteration
+ my $column; # Current column number during iteration
+ my $color_text; # Current text color during iteration
+ my $color_back; # Current background color during iteration
+ my $last_count; # Count of last processed line
+ my $count; # Count of current line
+ my $source; # Source code of current line
+ my $replacement; # Replacement string for tabulator chars
+ local *PNG_HANDLE; # Handle for output PNG file
+
+ # Create image
+ $overview = new GD::Image($overview_width, $height)
+ or die("ERROR: cannot allocate overview image!\n");
+
+ # Define colors
+ $col_plain_back = $overview->colorAllocate(0xff, 0xff, 0xff);
+ $col_plain_text = $overview->colorAllocate(0xaa, 0xaa, 0xaa);
+ $col_cov_back = $overview->colorAllocate(0xaa, 0xa7, 0xef);
+ $col_cov_text = $overview->colorAllocate(0x5d, 0x5d, 0xea);
+ $col_nocov_back = $overview->colorAllocate(0xff, 0x00, 0x00);
+ $col_nocov_text = $overview->colorAllocate(0xaa, 0x00, 0x00);
+ $col_hi_back = $overview->colorAllocate(0x00, 0xff, 0x00);
+ $col_hi_text = $overview->colorAllocate(0x00, 0xaa, 0x00);
+
+ # Visualize each line
+ foreach $line (@source)
+ {
+ # Replace tabs with spaces to keep consistent with source
+ # code view
+ while ($line =~ /^([^\t]*)(\t)/)
+ {
+ $replacement = " "x($tab_size - ((length($1) - 1) %
+ $tab_size));
+ $line =~ s/^([^\t]*)(\t)/$1$replacement/;
+ }
+
+ # Skip lines which do not follow the <count>:<line>
+ # specification, otherwise $1 = highlight marker,
+ # $2 = count, $3 = source code
+ if (!($line =~ /(\*?)(\d*):(.*)$/)) { next; }
+ $count = $2;
+ $source = $3;
+
+ # Decide which color pair to use
+
+ # If this line was not instrumented but the one before was,
+ # take the color of that line to widen color areas in
+ # resulting image
+ if (($count eq "") && defined($last_count) &&
+ ($last_count ne ""))
+ {
+ $count = $last_count;
+ }
+
+ if ($count eq "")
+ {
+ # Line was not instrumented
+ $color_text = $col_plain_text;
+ $color_back = $col_plain_back;
+ }
+ elsif ($count == 0)
+ {
+ # Line was instrumented but not executed
+ $color_text = $col_nocov_text;
+ $color_back = $col_nocov_back;
+ }
+ elsif ($1 eq "*")
+ {
+ # Line was highlighted
+ $color_text = $col_hi_text;
+ $color_back = $col_hi_back;
+ }
+ else
+ {
+ # Line was instrumented and executed
+ $color_text = $col_cov_text;
+ $color_back = $col_cov_back;
+ }
+
+ # Write one pixel for each source character
+ $column = 0;
+ foreach (split("", $source))
+ {
+ # Check for width
+ if ($column >= $overview_width) { last; }
+
+ if ($_ eq " ")
+ {
+ # Space
+ $overview->setPixel($column++, $row,
+ $color_back);
+ }
+ else
+ {
+ # Text
+ $overview->setPixel($column++, $row,
+ $color_text);
+ }
+ }
+
+ # Fill rest of line
+ while ($column < $overview_width)
+ {
+ $overview->setPixel($column++, $row, $color_back);
+ }
+
+ $last_count = $2;
+
+ $row++;
+ }
+
+ # Write PNG file
+ open (PNG_HANDLE, ">$filename")
+ or die("ERROR: cannot write png file $filename!\n");
+ binmode(*PNG_HANDLE);
+ print(PNG_HANDLE $overview->png());
+ close(PNG_HANDLE);
+}
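The color decision chain in `gen_png` can be summarized on its own: uninstrumented lines may inherit the previous line's count to widen color areas, then the count and highlight marker select one of four color pairs. An illustrative Python sketch of those rules (the category names are invented for the sketch):

```python
def pick_colors(count, highlighted, last_count=None):
    """Choose the color category for one source line, mirroring gen_png's rules."""
    # Widen color areas: an uninstrumented line inherits the previous count
    if count == "" and last_count not in (None, ""):
        count = last_count
    if count == "":
        return "plain"          # not instrumented
    if int(count) == 0:
        return "nocov"          # instrumented, never executed
    if highlighted:
        return "hi"             # highlighted line
    return "cov"                # instrumented and executed
```

Note that, as in the Perl above, a zero count takes precedence over the highlight marker.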
+
+sub genpng_warn_handler($)
+{
+ my ($msg) = @_;
+
+ warn("$tool_name: $msg");
+}
+
+sub genpng_die_handler($)
+{
+ my ($msg) = @_;
+
+ die("$tool_name: $msg");
+}
diff --git a/chromium/third_party/lcov-1.9/bin/install.sh b/chromium/third_party/lcov-1.9/bin/install.sh
new file mode 100755
index 00000000000..27140f91dd1
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/install.sh
@@ -0,0 +1,71 @@
+#!/bin/bash
+#
+# install.sh [--uninstall] sourcefile targetfile [install options]
+#
+
+
+# Check for uninstall option
+if test "x$1" == "x--uninstall" ; then
+ UNINSTALL=true
+ SOURCE=$2
+ TARGET=$3
+ shift 3
+else
+ UNINSTALL=false
+ SOURCE=$1
+ TARGET=$2
+ shift 2
+fi
+
+# Check usage
+if test -z "$SOURCE" || test -z "$TARGET" ; then
+ echo "Usage: install.sh [--uninstall] source target [install options]" >&2
+ exit 1
+fi
+
+
+#
+# do_install(SOURCE_FILE, TARGET_FILE)
+#
+
+do_install()
+{
+ local SOURCE=$1
+ local TARGET=$2
+ local PARAMS=$3
+
+ install -p -D $PARAMS "$SOURCE" "$TARGET"
+}
+
+
+#
+# do_uninstall(SOURCE_FILE, TARGET_FILE)
+#
+
+do_uninstall()
+{
+ local SOURCE=$1
+ local TARGET=$2
+
+ # Does target exist?
+ if test -r $TARGET ; then
+ # Is target of the same version as this package?
+ if diff $SOURCE $TARGET >/dev/null; then
+ rm -f $TARGET
+ else
+ echo WARNING: Skipping uninstall for $TARGET - versions differ! >&2
+ fi
+ else
+ echo WARNING: Skipping uninstall for $TARGET - not installed! >&2
+ fi
+}
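The uninstall path above only removes a target that exists and matches the packaged source byte for byte. A Python equivalent of that guard, sketched for illustration with `filecmp` (return values are invented for the sketch):

```python
import filecmp
import os

def do_uninstall(source, target):
    """Remove TARGET only if it is readable and identical to SOURCE."""
    if not os.access(target, os.R_OK):
        return "not installed"      # nothing to remove
    if not filecmp.cmp(source, target, shallow=False):
        return "versions differ"    # don't delete someone else's file
    os.remove(target)
    return "removed"
```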
+
+
+# Call sub routine
+if $UNINSTALL ; then
+ do_uninstall $SOURCE $TARGET
+else
+ do_install $SOURCE $TARGET "$*"
+fi
+
+exit 0
diff --git a/chromium/third_party/lcov-1.9/bin/lcov b/chromium/third_party/lcov-1.9/bin/lcov
new file mode 100755
index 00000000000..4e392ffa3c1
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/lcov
@@ -0,0 +1,4175 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002,2010
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# lcov
+#
+# This is a wrapper script which provides a single interface for accessing
+# LCOV coverage data.
+#
+#
+# History:
+# 2002-08-29 created by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+# IBM Lab Boeblingen
+# 2002-09-05 / Peter Oberparleiter: implemented --kernel-directory +
+# multiple directories
+# 2002-10-16 / Peter Oberparleiter: implemented --add-tracefile option
+# 2002-10-17 / Peter Oberparleiter: implemented --extract option
+# 2002-11-04 / Peter Oberparleiter: implemented --list option
+# 2003-03-07 / Paul Larson: Changed to make it work with the latest gcov
+# kernel patch. This will break it with older gcov-kernel
+# patches unless you change the value of $gcovmod in this script
+# 2003-04-07 / Peter Oberparleiter: fixed bug which resulted in an error
+# when trying to combine .info files containing data without
+# a test name
+# 2003-04-10 / Peter Oberparleiter: extended Paul's change so that LCOV
+# works both with the new and the old gcov-kernel patch
+# 2003-04-10 / Peter Oberparleiter: added $gcov_dir constant in anticipation
+# of a possible move of the gcov kernel directory to another
+# file system in a future version of the gcov-kernel patch
+# 2003-04-15 / Paul Larson: make info write to STDERR, not STDOUT
+# 2003-04-15 / Paul Larson: added --remove option
+# 2003-04-30 / Peter Oberparleiter: renamed --reset to --zerocounters
+# to remove naming ambiguity with --remove
+# 2003-04-30 / Peter Oberparleiter: adjusted help text to include --remove
+# 2003-06-27 / Peter Oberparleiter: implemented --diff
+# 2003-07-03 / Peter Oberparleiter: added line checksum support, added
+# --no-checksum
+# 2003-12-11 / Laurent Deniel: added --follow option
+# 2004-03-29 / Peter Oberparleiter: modified --diff option to better cope with
+# ambiguous patch file entries, modified --capture option to use
+# modprobe before insmod (needed for 2.6)
+# 2004-03-30 / Peter Oberparleiter: added --path option
+# 2004-08-09 / Peter Oberparleiter: added configuration file support
+# 2008-08-13 / Peter Oberparleiter: added function coverage support
+#
+
+use strict;
+use File::Basename;
+use File::Path;
+use File::Find;
+use File::Temp qw /tempdir/;
+use File::Spec::Functions qw /abs2rel canonpath catdir catfile catpath
+ file_name_is_absolute rootdir splitdir splitpath/;
+use Getopt::Long;
+use Cwd qw /abs_path getcwd/;
+
+
+# Global constants
+our $lcov_version = 'LCOV version 1.9';
+our $lcov_url = "http://ltp.sourceforge.net/coverage/lcov.php";
+our $tool_name = basename($0);
+
+# Directory containing gcov kernel files
+our $gcov_dir;
+
+# Where to create temporary directories
+our $tmp_dir;
+
+# Internal constants
+our $GKV_PROC = 0; # gcov-kernel data in /proc via external patch
+our $GKV_SYS = 1; # gcov-kernel data in /sys via vanilla 2.6.31+
+our @GKV_NAME = ( "external", "upstream" );
+our $pkg_gkv_file = ".gcov_kernel_version";
+our $pkg_build_file = ".build_directory";
+
+our $BR_BLOCK = 0;
+our $BR_BRANCH = 1;
+our $BR_TAKEN = 2;
+our $BR_VEC_ENTRIES = 3;
+our $BR_VEC_WIDTH = 32;
+
+# Branch data combination types
+our $BR_SUB = 0;
+our $BR_ADD = 1;
+
+# Prototypes
+sub print_usage(*);
+sub check_options();
+sub userspace_reset();
+sub userspace_capture();
+sub kernel_reset();
+sub kernel_capture();
+sub kernel_capture_initial();
+sub package_capture();
+sub add_traces();
+sub read_info_file($);
+sub get_info_entry($);
+sub set_info_entry($$$$$$$$$;$$$$$$);
+sub add_counts($$);
+sub merge_checksums($$$);
+sub combine_info_entries($$$);
+sub combine_info_files($$);
+sub write_info_file(*$);
+sub extract();
+sub remove();
+sub list();
+sub get_common_filename($$);
+sub read_diff($);
+sub diff();
+sub system_no_output($@);
+sub read_config($);
+sub apply_config($);
+sub info(@);
+sub create_temp_dir();
+sub transform_pattern($);
+sub warn_handler($);
+sub die_handler($);
+sub abort_handler($);
+sub temp_cleanup();
+sub setup_gkv();
+sub get_overall_line($$$$);
+sub print_overall_rate($$$$$$$$$);
+sub lcov_geninfo(@);
+sub create_package($$$;$);
+sub get_func_found_and_hit($);
+sub br_ivec_get($$);
+
+# Global variables & initialization
+our @directory; # Specifies where to get coverage data from
+our @kernel_directory; # If set, captures only from specified kernel subdirs
+our @add_tracefile; # If set, reads in and combines all files in list
+our $list; # If set, list contents of tracefile
+our $extract; # If set, extracts parts of tracefile
+our $remove; # If set, removes parts of tracefile
+our $diff; # If set, modifies tracefile according to diff
+our $reset; # If set, reset all coverage data to zero
+our $capture; # If set, capture data
+our $output_filename; # Name for file to write coverage data to
+our $test_name = ""; # Test case name
+our $quiet = ""; # If set, suppress information messages
+our $help; # Help option flag
+our $version; # Version option flag
+our $convert_filenames; # If set, convert filenames when applying diff
+our $strip; # If set, strip leading directories when applying diff
+our $temp_dir_name; # Name of temporary directory
+our $cwd = `pwd`; # Current working directory
+our $to_file; # If set, indicates that output is written to a file
+our $follow; # If set, indicates that find shall follow links
+our $diff_path = ""; # Path removed from tracefile when applying diff
+our $base_directory; # Base directory (cwd of gcc during compilation)
+our $checksum; # If set, calculate a checksum for each line
+our $no_checksum; # If set, don't calculate a checksum for each line
+our $compat_libtool; # If set, indicates that libtool mode is to be enabled
+our $no_compat_libtool; # If set, indicates that libtool mode is to be disabled
+our $gcov_tool;
+our $ignore_errors;
+our $initial;
+our $no_recursion = 0;
+our $to_package;
+our $from_package;
+our $maxdepth;
+our $no_markers;
+our $config; # Configuration file contents
+chomp($cwd);
+our $tool_dir = dirname($0); # Directory where this tool is installed
+our @temp_dirs;
+our $gcov_gkv; # gcov kernel support version found on machine
+our $opt_derive_func_data;
+our $opt_debug;
+our $opt_list_full_path;
+our $opt_no_list_full_path;
+our $opt_list_width = 80;
+our $opt_list_truncate_max = 20;
+our $ln_overall_found;
+our $ln_overall_hit;
+our $fn_overall_found;
+our $fn_overall_hit;
+our $br_overall_found;
+our $br_overall_hit;
+
+
+#
+# Code entry point
+#
+
+$SIG{__WARN__} = \&warn_handler;
+$SIG{__DIE__} = \&die_handler;
+$SIG{'INT'} = \&abort_handler;
+$SIG{'QUIT'} = \&abort_handler;
+
+# Prettify version string
+$lcov_version =~ s/\$\s*Revision\s*:?\s*(\S+)\s*\$/$1/;
+
+# Add current working directory if $tool_dir is not already an absolute path
+if (! ($tool_dir =~ /^\/(.*)$/))
+{
+ $tool_dir = "$cwd/$tool_dir";
+}
+
+# Read configuration file if available
+if (defined($ENV{"HOME"}) && (-r $ENV{"HOME"}."/.lcovrc"))
+{
+ $config = read_config($ENV{"HOME"}."/.lcovrc");
+}
+elsif (-r "/etc/lcovrc")
+{
+ $config = read_config("/etc/lcovrc");
+}
+
+if ($config)
+{
+ # Copy configuration file values to variables
+ apply_config({
+ "lcov_gcov_dir" => \$gcov_dir,
+ "lcov_tmp_dir" => \$tmp_dir,
+ "lcov_list_full_path" => \$opt_list_full_path,
+ "lcov_list_width" => \$opt_list_width,
+ "lcov_list_truncate_max"=> \$opt_list_truncate_max,
+ });
+}
+
+# Parse command line options
+if (!GetOptions("directory|d|di=s" => \@directory,
+ "add-tracefile|a=s" => \@add_tracefile,
+ "list|l=s" => \$list,
+ "kernel-directory|k=s" => \@kernel_directory,
+ "extract|e=s" => \$extract,
+ "remove|r=s" => \$remove,
+ "diff=s" => \$diff,
+ "convert-filenames" => \$convert_filenames,
+ "strip=i" => \$strip,
+ "capture|c" => \$capture,
+ "output-file|o=s" => \$output_filename,
+ "test-name|t=s" => \$test_name,
+ "zerocounters|z" => \$reset,
+ "quiet|q" => \$quiet,
+ "help|h|?" => \$help,
+ "version|v" => \$version,
+ "follow|f" => \$follow,
+ "path=s" => \$diff_path,
+ "base-directory|b=s" => \$base_directory,
+ "checksum" => \$checksum,
+ "no-checksum" => \$no_checksum,
+ "compat-libtool" => \$compat_libtool,
+ "no-compat-libtool" => \$no_compat_libtool,
+ "gcov-tool=s" => \$gcov_tool,
+ "ignore-errors=s" => \$ignore_errors,
+ "initial|i" => \$initial,
+ "no-recursion" => \$no_recursion,
+ "to-package=s" => \$to_package,
+ "from-package=s" => \$from_package,
+ "no-markers" => \$no_markers,
+ "derive-func-data" => \$opt_derive_func_data,
+ "debug" => \$opt_debug,
+ "list-full-path" => \$opt_list_full_path,
+ "no-list-full-path" => \$opt_no_list_full_path,
+ ))
+{
+ print(STDERR "Use $tool_name --help to get usage information\n");
+ exit(1);
+}
+else
+{
+ # Merge options
+ if (defined($no_checksum))
+ {
+ $checksum = ($no_checksum ? 0 : 1);
+ $no_checksum = undef;
+ }
+
+ if (defined($no_compat_libtool))
+ {
+ $compat_libtool = ($no_compat_libtool ? 0 : 1);
+ $no_compat_libtool = undef;
+ }
+
+ if (defined($opt_no_list_full_path))
+ {
+ $opt_list_full_path = ($opt_no_list_full_path ? 0 : 1);
+ $opt_no_list_full_path = undef;
+ }
+}
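Each `--foo`/`--no-foo` pair above is folded into a single tri-state setting, with the negative form winning when given. A minimal sketch of that merge in Python (hypothetical helper, not lcov API):

```python
def merge_negated(value, no_value):
    """Fold a --foo/--no-foo option pair into one setting; --no-foo wins if given."""
    if no_value is not None:
        return 0 if no_value else 1
    return value  # may stay None, meaning "use the default"
```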
+
+# Check for help option
+if ($help)
+{
+ print_usage(*STDOUT);
+ exit(0);
+}
+
+# Check for version option
+if ($version)
+{
+ print("$tool_name: $lcov_version\n");
+ exit(0);
+}
+
+# Check list width option
+if ($opt_list_width <= 40) {
+ die("ERROR: lcov_list_width parameter out of range (needs to be ".
+ "larger than 40)\n");
+}
+
+# Normalize --path text
+$diff_path =~ s/\/$//;
+
+if ($follow)
+{
+ $follow = "-follow";
+}
+else
+{
+ $follow = "";
+}
+
+if ($no_recursion)
+{
+ $maxdepth = "-maxdepth 1";
+}
+else
+{
+ $maxdepth = "";
+}
+
+# Check for valid options
+check_options();
+
+# Only --extract, --remove and --diff allow unnamed parameters
+if (@ARGV && !($extract || $remove || $diff))
+{
+ die("Extra parameter found: '".join(" ", @ARGV)."'\n".
+ "Use $tool_name --help to get usage information\n");
+}
+
+# Check for output filename
+$to_file = ($output_filename && ($output_filename ne "-"));
+
+if ($capture)
+{
+ if (!$to_file)
+ {
+ # Option that tells geninfo to write to stdout
+ $output_filename = "-";
+ }
+}
+
+# Determine kernel directory for gcov data
+if (!$from_package && !@directory && ($capture || $reset)) {
+ ($gcov_gkv, $gcov_dir) = setup_gkv();
+}
+
+# Check for requested functionality
+if ($reset)
+{
+ # Differentiate between user space and kernel reset
+ if (@directory)
+ {
+ userspace_reset();
+ }
+ else
+ {
+ kernel_reset();
+ }
+}
+elsif ($capture)
+{
+ # Capture source can be user space, kernel or package
+ if ($from_package) {
+ package_capture();
+ } elsif (@directory) {
+ userspace_capture();
+ } else {
+ if ($initial) {
+ if (defined($to_package)) {
+ die("ERROR: --initial cannot be used together ".
+ "with --to-package\n");
+ }
+ kernel_capture_initial();
+ } else {
+ kernel_capture();
+ }
+ }
+}
+elsif (@add_tracefile)
+{
+ ($ln_overall_found, $ln_overall_hit,
+ $fn_overall_found, $fn_overall_hit,
+ $br_overall_found, $br_overall_hit) = add_traces();
+}
+elsif ($remove)
+{
+ ($ln_overall_found, $ln_overall_hit,
+ $fn_overall_found, $fn_overall_hit,
+ $br_overall_found, $br_overall_hit) = remove();
+}
+elsif ($extract)
+{
+ ($ln_overall_found, $ln_overall_hit,
+ $fn_overall_found, $fn_overall_hit,
+ $br_overall_found, $br_overall_hit) = extract();
+}
+elsif ($list)
+{
+ list();
+}
+elsif ($diff)
+{
+ if (scalar(@ARGV) != 1)
+ {
+ die("ERROR: option --diff requires one additional argument!\n".
+ "Use $tool_name --help to get usage information\n");
+ }
+ ($ln_overall_found, $ln_overall_hit,
+ $fn_overall_found, $fn_overall_hit,
+ $br_overall_found, $br_overall_hit) = diff();
+}
+
+temp_cleanup();
+
+if (defined($ln_overall_found)) {
+ print_overall_rate(1, $ln_overall_found, $ln_overall_hit,
+ 1, $fn_overall_found, $fn_overall_hit,
+ 1, $br_overall_found, $br_overall_hit);
+} else {
+ info("Done.\n") if (!$list && !$capture);
+}
+exit(0);
+
+#
+# print_usage(handle)
+#
+# Print usage information.
+#
+
+sub print_usage(*)
+{
+ local *HANDLE = $_[0];
+
+ print(HANDLE <<END_OF_USAGE);
+Usage: $tool_name [OPTIONS]
+
+Use lcov to collect coverage data from either the currently running Linux
+kernel or from a user space application. Specify the --directory option to
+get coverage data for a user space program.
+
+Misc:
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -q, --quiet Do not print progress messages
+
+Operation:
+ -z, --zerocounters Reset all execution counts to zero
+ -c, --capture Capture coverage data
+ -a, --add-tracefile FILE Add contents of tracefiles
+ -e, --extract FILE PATTERN Extract files matching PATTERN from FILE
+ -r, --remove FILE PATTERN Remove files matching PATTERN from FILE
+ -l, --list FILE List contents of tracefile FILE
+ --diff FILE DIFF Transform tracefile FILE according to DIFF
+
+Options:
+ -i, --initial Capture initial zero coverage data
+ -t, --test-name NAME Specify test name to be stored with data
+ -o, --output-file FILENAME Write data to FILENAME instead of stdout
+ -d, --directory DIR Use .da files in DIR instead of kernel
+ -f, --follow Follow links when searching .da files
+ -k, --kernel-directory KDIR Capture kernel coverage data only from KDIR
+ -b, --base-directory DIR Use DIR as base directory for relative paths
+ --convert-filenames Convert filenames when applying diff
+ --strip DEPTH Strip initial DEPTH directory levels in diff
+ --path PATH Strip PATH from tracefile when applying diff
+ --(no-)checksum Enable (disable) line checksumming
+ --(no-)compat-libtool Enable (disable) libtool compatibility mode
+ --gcov-tool TOOL Specify gcov tool location
+ --ignore-errors ERRORS Continue after ERRORS (gcov, source, graph)
+ --no-recursion Exclude subdirectories from processing
+ --to-package FILENAME Store unprocessed coverage data in FILENAME
+ --from-package FILENAME Capture from unprocessed data in FILENAME
+ --no-markers Ignore exclusion markers in source code
+ --derive-func-data Generate function data from line data
+ --list-full-path Print full path during a list operation
+
+For more information see: $lcov_url
+END_OF_USAGE
+ ;
+}
+
+
+#
+# check_options()
+#
+# Check for valid combination of command line options. Die on error.
+#
+
+sub check_options()
+{
+ my $i = 0;
+
+ # Count occurrence of mutually exclusive options
+ $reset && $i++;
+ $capture && $i++;
+ @add_tracefile && $i++;
+ $extract && $i++;
+ $remove && $i++;
+ $list && $i++;
+ $diff && $i++;
+
+ if ($i == 0)
+ {
+ die("Need one of the options -z, -c, -a, -e, -r, -l or ".
+ "--diff\n".
+ "Use $tool_name --help to get usage information\n");
+ }
+ elsif ($i > 1)
+ {
+ die("ERROR: only one of -z, -c, -a, -e, -r, -l or ".
+ "--diff allowed!\n".
+ "Use $tool_name --help to get usage information\n");
+ }
+}
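`check_options` simply counts how many of the mutually exclusive mode flags were set and insists on exactly one. The same check in Python, as an illustrative sketch:

```python
def check_mode_options(opts):
    """Ensure exactly one operation mode was selected; return it or abort."""
    modes = [name for name in ("zerocounters", "capture", "add_tracefile",
                               "extract", "remove", "list", "diff")
             if opts.get(name)]
    if len(modes) == 0:
        raise SystemExit("Need one of -z, -c, -a, -e, -r, -l or --diff")
    if len(modes) > 1:
        raise SystemExit("Only one of -z, -c, -a, -e, -r, -l or --diff allowed")
    return modes[0]
```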
+
+
+#
+# userspace_reset()
+#
+# Reset coverage data found in DIRECTORY by deleting all contained .da files.
+#
+# Die on error.
+#
+
+sub userspace_reset()
+{
+ my $current_dir;
+ my @file_list;
+
+ foreach $current_dir (@directory)
+ {
+ info("Deleting all .da files in $current_dir".
+ ($no_recursion?"\n":" and subdirectories\n"));
+ @file_list = `find "$current_dir" $maxdepth $follow -name \\*\\.da -o -name \\*\\.gcda -type f 2>/dev/null`;
+ chomp(@file_list);
+ foreach (@file_list)
+ {
+ unlink($_) or die("ERROR: cannot remove file $_!\n");
+ }
+ }
+}
+
+
+#
+# userspace_capture()
+#
+# Capture coverage data found in DIRECTORY and write it to a package (if
+# TO_PACKAGE specified) or to OUTPUT_FILENAME or STDOUT.
+#
+# Die on error.
+#
+
+sub userspace_capture()
+{
+ my $dir;
+ my $build;
+
+ if (!defined($to_package)) {
+ lcov_geninfo(@directory);
+ return;
+ }
+ if (scalar(@directory) != 1) {
+ die("ERROR: -d may be specified only once with --to-package\n");
+ }
+ $dir = $directory[0];
+ if (defined($base_directory)) {
+ $build = $base_directory;
+ } else {
+ $build = $dir;
+ }
+ create_package($to_package, $dir, $build);
+}
+
+
+#
+# kernel_reset()
+#
+# Reset kernel coverage.
+#
+# Die on error.
+#
+
+sub kernel_reset()
+{
+ local *HANDLE;
+ my $reset_file;
+
+ info("Resetting kernel execution counters\n");
+ if (-e "$gcov_dir/vmlinux") {
+ $reset_file = "$gcov_dir/vmlinux";
+ } elsif (-e "$gcov_dir/reset") {
+ $reset_file = "$gcov_dir/reset";
+ } else {
+ die("ERROR: no reset control found in $gcov_dir\n");
+ }
+ open(HANDLE, ">$reset_file") or
+ die("ERROR: cannot write to $reset_file!\n");
+ print(HANDLE "0");
+ close(HANDLE);
+}
+
+
+#
+# lcov_copy_single(from, to)
+#
+# Copy single regular file FROM to TO without checking its size. This is
+# required to work with special files generated by the kernel
+# seq_file-interface.
+#
+#
+sub lcov_copy_single($$)
+{
+ my ($from, $to) = @_;
+ my $content;
+ local $/;
+ local *HANDLE;
+
+ open(HANDLE, "<$from") or die("ERROR: cannot read $from: $!\n");
+ $content = <HANDLE>;
+ close(HANDLE);
+ open(HANDLE, ">$to") or die("ERROR: cannot write $to: $!\n");
+ if (defined($content)) {
+ print(HANDLE $content);
+ }
+ close(HANDLE);
+}
+
+#
+# lcov_find(dir, function, data[, pattern, ...])
+#
+# Search DIR for files and directories whose name matches PATTERN and run
+# FUNCTION for each match. If no pattern is specified, match all names.
+#
+# FUNCTION has the following prototype:
+# function(dir, relative_name, data)
+#
+# Where:
+# dir: the base directory for this search
+# relative_name: the name relative to the base directory of this entry
+# data: the DATA variable passed to lcov_find
+#
+sub lcov_find($$$;@)
+{
+ my ($dir, $fn, $data, @pattern) = @_;
+ my $result;
+ my $_fn = sub {
+ my $filename = $File::Find::name;
+
+ if (defined($result)) {
+ return;
+ }
+ $filename = abs2rel($filename, $dir);
+ foreach (@pattern) {
+ if ($filename =~ /$_/) {
+ goto ok;
+ }
+ }
+ return;
+ ok:
+ $result = &$fn($dir, $filename, $data);
+ };
+ if (scalar(@pattern) == 0) {
+ @pattern = ".*";
+ }
+ find( { wanted => $_fn, no_chdir => 1 }, $dir);
+
+ return $result;
+}
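`lcov_find` is a thin wrapper around `File::Find` that filters relative names against a regex list and stops calling the callback once it returns a defined value. An approximate Python re-expression using `os.walk` (traversal order and the handling of the base directory itself differ slightly from `File::Find`; this is a sketch, not the lcov implementation):

```python
import os
import re

def lcov_find(base, fn, data, patterns=(".*",)):
    """Walk BASE; call FN(base, relative_name, data) for each matching entry.

    Returns the first non-None value produced by FN, or None.
    """
    regexes = [re.compile(p) for p in patterns]
    for root, dirs, files in os.walk(base):
        for name in dirs + files:
            rel = os.path.relpath(os.path.join(root, name), base)
            if any(r.search(rel) for r in regexes):
                result = fn(base, rel, data)
                if result is not None:
                    return result
    return None
```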
+
+#
+# lcov_copy_fn(from, rel, to)
+#
+# Copy directories, files and links from/rel to to/rel.
+#
+
+sub lcov_copy_fn($$$)
+{
+ my ($from, $rel, $to) = @_;
+ my $absfrom = canonpath(catfile($from, $rel));
+ my $absto = canonpath(catfile($to, $rel));
+
+ if (-d) {
+ if (! -d $absto) {
+ mkpath($absto) or
+ die("ERROR: cannot create directory $absto\n");
+ chmod(0700, $absto);
+ }
+ } elsif (-l) {
+ # Copy symbolic link
+ my $link = readlink($absfrom);
+
+ if (!defined($link)) {
+ die("ERROR: cannot read link $absfrom: $!\n");
+ }
+ symlink($link, $absto) or
+ die("ERROR: cannot create link $absto: $!\n");
+ } else {
+ lcov_copy_single($absfrom, $absto);
+ chmod(0600, $absto);
+ }
+ return undef;
+}
+
+#
+# lcov_copy(from, to, subdirs)
+#
+# Copy all specified SUBDIRS and files from directory FROM to directory TO. For
+# regular files, copy file contents without checking their size. This is required
+# to work with seq_file-generated files.
+#
+
+sub lcov_copy($$;@)
+{
+ my ($from, $to, @subdirs) = @_;
+ my @pattern;
+
+ foreach (@subdirs) {
+ push(@pattern, "^$_");
+ }
+ lcov_find($from, \&lcov_copy_fn, $to, @pattern);
+}
+
+#
+# lcov_geninfo(directory)
+#
+# Call geninfo for the specified directory and with the parameters specified
+# at the command line.
+#
+
+sub lcov_geninfo(@)
+{
+ my (@dir) = @_;
+ my @param;
+
+ # Capture data
+ info("Capturing coverage data from ".join(" ", @dir)."\n");
+ @param = ("$tool_dir/geninfo", @dir);
+ if ($output_filename)
+ {
+ @param = (@param, "--output-filename", $output_filename);
+ }
+ if ($test_name)
+ {
+ @param = (@param, "--test-name", $test_name);
+ }
+ if ($follow)
+ {
+ @param = (@param, "--follow");
+ }
+ if ($quiet)
+ {
+ @param = (@param, "--quiet");
+ }
+ if (defined($checksum))
+ {
+ if ($checksum)
+ {
+ @param = (@param, "--checksum");
+ }
+ else
+ {
+ @param = (@param, "--no-checksum");
+ }
+ }
+ if ($base_directory)
+ {
+ @param = (@param, "--base-directory", $base_directory);
+ }
+ if ($no_compat_libtool)
+ {
+ @param = (@param, "--no-compat-libtool");
+ }
+ elsif ($compat_libtool)
+ {
+ @param = (@param, "--compat-libtool");
+ }
+ if ($gcov_tool)
+ {
+ @param = (@param, "--gcov-tool", $gcov_tool);
+ }
+ if ($ignore_errors)
+ {
+ @param = (@param, "--ignore-errors", $ignore_errors);
+ }
+ if ($initial)
+ {
+ @param = (@param, "--initial");
+ }
+ if ($no_markers)
+ {
+ @param = (@param, "--no-markers");
+ }
+ if ($opt_derive_func_data)
+ {
+ @param = (@param, "--derive-func-data");
+ }
+ if ($opt_debug)
+ {
+ @param = (@param, "--debug");
+ }
+ system(@param) and exit($? >> 8);
+}
+
+#
+# read_file(filename)
+#
+# Return the contents of the file defined by filename.
+#
+
+sub read_file($)
+{
+ my ($filename) = @_;
+ my $content;
+ local $/;
+ local *HANDLE;
+
+ open(HANDLE, "<$filename") || return undef;
+ $content = <HANDLE>;
+ close(HANDLE);
+
+ return $content;
+}
+
+#
+# get_package(package_file)
+#
+# Unpack unprocessed coverage data files from package_file to a temporary
+# directory and return directory name, build directory and gcov kernel version
+# as found in package.
+#
+
+sub get_package($)
+{
+ my ($file) = @_;
+ my $dir = create_temp_dir();
+ my $gkv;
+ my $build;
+ my $cwd = getcwd();
+ my $count;
+ local *HANDLE;
+
+ info("Reading package $file:\n");
+ info(" data directory .......: $dir\n");
+ $file = abs_path($file);
+ chdir($dir);
+ open(HANDLE, "tar xvfz $file 2>/dev/null|")
+ or die("ERROR: could not process package $file\n");
+ while (<HANDLE>) {
+ if (/\.da$/ || /\.gcda$/) {
+ $count++;
+ }
+ }
+ close(HANDLE);
+ $build = read_file("$dir/$pkg_build_file");
+ if (defined($build)) {
+ info(" build directory ......: $build\n");
+ }
+ $gkv = read_file("$dir/$pkg_gkv_file");
+ if (defined($gkv)) {
+ $gkv = int($gkv);
+ if ($gkv != $GKV_PROC && $gkv != $GKV_SYS) {
+ die("ERROR: unsupported gcov kernel version found ".
+ "($gkv)\n");
+ }
+ info(" content type .........: kernel data\n");
+ info(" gcov kernel version ..: %s\n", $GKV_NAME[$gkv]);
+ } else {
+ info(" content type .........: application data\n");
+ }
+ info(" data files ...........: $count\n");
+ chdir($cwd);
+
+ return ($dir, $build, $gkv);
+}
+
+#
+# write_file(filename, $content)
+#
+# Create a file named filename and write the specified content to it.
+#
+
+sub write_file($$)
+{
+ my ($filename, $content) = @_;
+ local *HANDLE;
+
+ open(HANDLE, ">$filename") || return 0;
+ print(HANDLE $content);
+ close(HANDLE) || return 0;
+
+ return 1;
+}
+
+#
+# count_package_data(filename)
+#
+# Count the number of coverage data files in the specified package file.
+#
+
+sub count_package_data($)
+{
+ my ($filename) = @_;
+ local *HANDLE;
+ my $count = 0;
+
+ open(HANDLE, "tar tfz $filename|") or return undef;
+ while (<HANDLE>) {
+ if (/\.da$/ || /\.gcda$/) {
+ $count++;
+ }
+ }
+ close(HANDLE);
+ return $count;
+}
+
+#
+# create_package(package_file, source_directory, build_directory[,
+# kernel_gcov_version])
+#
+# Store unprocessed coverage data files from source_directory to package_file.
+#
+
+sub create_package($$$;$)
+{
+ my ($file, $dir, $build, $gkv) = @_;
+ my $cwd = getcwd();
+
+ # Print information about the package
+ info("Creating package $file:\n");
+ info(" data directory .......: $dir\n");
+
+ # Handle build directory
+ if (defined($build)) {
+ info(" build directory ......: $build\n");
+ write_file("$dir/$pkg_build_file", $build)
+ or die("ERROR: could not write to ".
+ "$dir/$pkg_build_file\n");
+ }
+
+ # Handle gcov kernel version data
+ if (defined($gkv)) {
+ info(" content type .........: kernel data\n");
+ info(" gcov kernel version ..: %s\n", $GKV_NAME[$gkv]);
+ write_file("$dir/$pkg_gkv_file", $gkv)
+ or die("ERROR: could not write to ".
+ "$dir/$pkg_gkv_file\n");
+ } else {
+ info(" content type .........: application data\n");
+ }
+
+ # Create package
+ $file = abs_path($file);
+ chdir($dir);
+ system("tar cfz $file .")
+ and die("ERROR: could not create package $file\n");
+
+ # Remove temporary files
+ unlink("$dir/$pkg_build_file");
+ unlink("$dir/$pkg_gkv_file");
+
+ # Show number of data files
+ if (!$quiet) {
+ my $count = count_package_data($file);
+
+ if (defined($count)) {
+ info(" data files ...........: $count\n");
+ }
+ }
+ chdir($cwd);
+}
+
+sub find_link_fn($$$)
+{
+ my ($from, $rel, $filename) = @_;
+ my $absfile = catfile($from, $rel, $filename);
+
+ if (-l $absfile) {
+ return $absfile;
+ }
+ return undef;
+}
+
+#
+# get_base(dir)
+#
+# Return (BASE, OBJ), where
+# - BASE: is the path to the kernel base directory relative to dir
+# - OBJ: is the absolute path to the kernel build directory
+#
+
+sub get_base($)
+{
+ my ($dir) = @_;
+ my $marker = "kernel/gcov/base.gcno";
+ my $markerfile;
+ my $sys;
+ my $obj;
+ my $link;
+
+ $markerfile = lcov_find($dir, \&find_link_fn, $marker);
+ if (!defined($markerfile)) {
+ return (undef, undef);
+ }
+
+	# sys base is the third parent directory of markerfile.
+ $sys = abs2rel(dirname(dirname(dirname($markerfile))), $dir);
+
+	# obj base is the third parent directory of the markerfile link target.
+ $link = readlink($markerfile);
+ if (!defined($link)) {
+ die("ERROR: could not read $markerfile\n");
+ }
+ $obj = dirname(dirname(dirname($link)));
+
+ return ($sys, $obj);
+}
+
+#
+# apply_base_dir(data_dir, base_dir, build_dir, @directories)
+#
+# Make entries in @directories relative to data_dir.
+#
+
+sub apply_base_dir($$$@)
+{
+ my ($data, $base, $build, @dirs) = @_;
+ my $dir;
+ my @result;
+
+ foreach $dir (@dirs) {
+ # Is directory path relative to data directory?
+ if (-d catdir($data, $dir)) {
+ push(@result, $dir);
+ next;
+ }
+ # Relative to the auto-detected base-directory?
+ if (defined($base)) {
+ if (-d catdir($data, $base, $dir)) {
+ push(@result, catdir($base, $dir));
+ next;
+ }
+ }
+ # Relative to the specified base-directory?
+ if (defined($base_directory)) {
+ if (file_name_is_absolute($base_directory)) {
+ $base = abs2rel($base_directory, rootdir());
+ } else {
+ $base = $base_directory;
+ }
+ if (-d catdir($data, $base, $dir)) {
+ push(@result, catdir($base, $dir));
+ next;
+ }
+ }
+ # Relative to the build directory?
+ if (defined($build)) {
+ if (file_name_is_absolute($build)) {
+ $base = abs2rel($build, rootdir());
+ } else {
+ $base = $build;
+ }
+ if (-d catdir($data, $base, $dir)) {
+ push(@result, catdir($base, $dir));
+ next;
+ }
+ }
+ die("ERROR: subdirectory $dir not found\n".
+ "Please use -b to specify the correct directory\n");
+ }
+ return @result;
+}
+
+#
+# copy_gcov_dir(dir, [@subdirectories])
+#
+# Create a temporary directory and copy all or, if specified, only some
+# subdirectories from dir to that directory. Return the name of the temporary
+# directory.
+#
+
+sub copy_gcov_dir($;@)
+{
+ my ($data, @dirs) = @_;
+ my $tempdir = create_temp_dir();
+
+ info("Copying data to temporary directory $tempdir\n");
+ lcov_copy($data, $tempdir, @dirs);
+
+ return $tempdir;
+}
+
+#
+# kernel_capture_initial
+#
+# Capture initial kernel coverage data, i.e. create a coverage data file from
+# static graph files which contains zero coverage data for all instrumented
+# lines.
+#
+
+sub kernel_capture_initial()
+{
+ my $build;
+ my $source;
+ my @params;
+
+ if (defined($base_directory)) {
+ $build = $base_directory;
+ $source = "specified";
+ } else {
+ (undef, $build) = get_base($gcov_dir);
+ if (!defined($build)) {
+ die("ERROR: could not auto-detect build directory.\n".
+ "Please use -b to specify the build directory\n");
+ }
+ $source = "auto-detected";
+ }
+ info("Using $build as kernel build directory ($source)\n");
+ # Build directory needs to be passed to geninfo
+ $base_directory = $build;
+ if (@kernel_directory) {
+ foreach my $dir (@kernel_directory) {
+ push(@params, "$build/$dir");
+ }
+ } else {
+ push(@params, $build);
+ }
+ lcov_geninfo(@params);
+}
+
+#
+# kernel_capture_from_dir(directory, gcov_kernel_version, build)
+#
+# Perform the actual kernel coverage capturing from the specified directory
+# assuming that the data was copied from the specified gcov kernel version.
+#
+
+sub kernel_capture_from_dir($$$)
+{
+ my ($dir, $gkv, $build) = @_;
+
+ # Create package or coverage file
+ if (defined($to_package)) {
+ create_package($to_package, $dir, $build, $gkv);
+ } else {
+ # Build directory needs to be passed to geninfo
+ $base_directory = $build;
+ lcov_geninfo($dir);
+ }
+}
+
+#
+# adjust_kernel_dir(dir, build)
+#
+# Adjust directories specified with -k so that they point to the directory
+# relative to DIR. Return the build directory if specified or the auto-
+# detected build-directory.
+#
+
+sub adjust_kernel_dir($$)
+{
+ my ($dir, $build) = @_;
+ my ($sys_base, $build_auto) = get_base($dir);
+
+ if (!defined($build)) {
+ $build = $build_auto;
+ }
+ if (!defined($build)) {
+ die("ERROR: could not auto-detect build directory.\n".
+ "Please use -b to specify the build directory\n");
+ }
+ # Make @kernel_directory relative to sysfs base
+ if (@kernel_directory) {
+ @kernel_directory = apply_base_dir($dir, $sys_base, $build,
+ @kernel_directory);
+ }
+ return $build;
+}
+
+sub kernel_capture()
+{
+ my $data_dir;
+ my $build = $base_directory;
+
+ if ($gcov_gkv == $GKV_SYS) {
+ $build = adjust_kernel_dir($gcov_dir, $build);
+ }
+ $data_dir = copy_gcov_dir($gcov_dir, @kernel_directory);
+ kernel_capture_from_dir($data_dir, $gcov_gkv, $build);
+}
+
+#
+# package_capture()
+#
+# Capture coverage data from a package of unprocessed coverage data files
+# as generated by lcov --to-package.
+#
+
+sub package_capture()
+{
+ my $dir;
+ my $build;
+ my $gkv;
+
+ ($dir, $build, $gkv) = get_package($from_package);
+
+ # Check for build directory
+ if (defined($base_directory)) {
+ if (defined($build)) {
+ info("Using build directory specified by -b.\n");
+ }
+ $build = $base_directory;
+ }
+
+ # Do the actual capture
+ if (defined($gkv)) {
+ if ($gkv == $GKV_SYS) {
+ $build = adjust_kernel_dir($dir, $build);
+ }
+ if (@kernel_directory) {
+ $dir = copy_gcov_dir($dir, @kernel_directory);
+ }
+ kernel_capture_from_dir($dir, $gkv, $build);
+ } else {
+ # Build directory needs to be passed to geninfo
+ $base_directory = $build;
+ lcov_geninfo($dir);
+ }
+}
+
+
+#
+# info(printf_parameter)
+#
+# Use printf to write PRINTF_PARAMETER to stdout (or to stderr while .info
+# data is being written to stdout) unless the $quiet flag is set.
+#
+
+sub info(@)
+{
+ if (!$quiet)
+ {
+ # Print info string
+ if ($to_file)
+ {
+			printf(@_);
+ }
+ else
+ {
+ # Don't interfere with the .info output to STDOUT
+ printf(STDERR @_);
+ }
+ }
+}
+
+
+#
+# create_temp_dir()
+#
+# Create a temporary directory and return its path.
+#
+# Die on error.
+#
+
+sub create_temp_dir()
+{
+ my $dir;
+
+ if (defined($tmp_dir)) {
+ $dir = tempdir(DIR => $tmp_dir, CLEANUP => 1);
+ } else {
+ $dir = tempdir(CLEANUP => 1);
+ }
+ if (!defined($dir)) {
+ die("ERROR: cannot create temporary directory\n");
+ }
+ push(@temp_dirs, $dir);
+
+ return $dir;
+}
+
+
+#
+# br_taken_to_num(taken)
+#
+# Convert a branch taken value in .info format to number format.
+#
+
+sub br_taken_to_num($)
+{
+ my ($taken) = @_;
+
+ return 0 if ($taken eq '-');
+ return $taken + 1;
+}
+
+
+#
+# br_num_to_taken(taken)
+#
+# Convert a branch taken value in number format to .info format.
+#
+
+sub br_num_to_taken($)
+{
+ my ($taken) = @_;
+
+ return '-' if ($taken == 0);
+ return $taken - 1;
+}
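+
+# Illustrative round trip (editorial note, not part of the original source):
+# the number format reserves 0 for the "branch not executed" marker '-', so
+# all real taken counts are stored shifted by one:
+#   br_taken_to_num('-') == 0   and   br_num_to_taken(0) eq '-'
+#   br_taken_to_num(4)   == 5   and   br_num_to_taken(5) == 4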
+
+
+#
+# br_taken_add(taken1, taken2)
+#
+# Return the result of taken1 + taken2 for 'branch taken' values.
+#
+
+sub br_taken_add($$)
+{
+ my ($t1, $t2) = @_;
+
+ return $t1 if (!defined($t2));
+ return $t2 if (!defined($t1));
+ return $t1 if ($t2 eq '-');
+ return $t2 if ($t1 eq '-');
+ return $t1 + $t2;
+}
+
+
+#
+# br_taken_sub(taken1, taken2)
+#
+# Return the result of taken1 - taken2 for 'branch taken' values. Return 0
+# if the result would become negative.
+#
+
+sub br_taken_sub($$)
+{
+ my ($t1, $t2) = @_;
+
+ return $t1 if (!defined($t2));
+ return undef if (!defined($t1));
+ return $t1 if ($t1 eq '-');
+ return $t1 if ($t2 eq '-');
+ return 0 if $t2 > $t1;
+ return $t1 - $t2;
+}
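+
+# Example values (illustrative only): '-' means "branch not executed" and
+# yields the other operand when adding; subtraction clamps at zero:
+#   br_taken_add('-', 3) == 3      br_taken_add(2, 3) == 5
+#   br_taken_sub(5, 2)   == 3      br_taken_sub(2, 5) == 0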
+
+
+#
+# br_ivec_len(vector)
+#
+# Return the number of entries in the branch coverage vector.
+#
+
+sub br_ivec_len($)
+{
+ my ($vec) = @_;
+
+ return 0 if (!defined($vec));
+ return (length($vec) * 8 / $BR_VEC_WIDTH) / $BR_VEC_ENTRIES;
+}
+
+
+#
+# br_ivec_push(vector, block, branch, taken)
+#
+# Add an entry to the branch coverage vector. If an entry with the same
+# branch ID already exists, add the corresponding taken values.
+#
+
+sub br_ivec_push($$$$)
+{
+ my ($vec, $block, $branch, $taken) = @_;
+ my $offset;
+ my $num = br_ivec_len($vec);
+ my $i;
+
+ $vec = "" if (!defined($vec));
+
+ # Check if branch already exists in vector
+ for ($i = 0; $i < $num; $i++) {
+ my ($v_block, $v_branch, $v_taken) = br_ivec_get($vec, $i);
+
+ next if ($v_block != $block || $v_branch != $branch);
+
+ # Add taken counts
+ $taken = br_taken_add($taken, $v_taken);
+ last;
+ }
+
+ $offset = $i * $BR_VEC_ENTRIES;
+ $taken = br_taken_to_num($taken);
+
+ # Add to vector
+ vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH) = $block;
+ vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH) = $branch;
+ vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH) = $taken;
+
+ return $vec;
+}
+
+
+#
+# br_ivec_get(vector, number)
+#
+# Return an entry from the branch coverage vector.
+#
+
+sub br_ivec_get($$)
+{
+ my ($vec, $num) = @_;
+ my $block;
+ my $branch;
+ my $taken;
+ my $offset = $num * $BR_VEC_ENTRIES;
+
+ # Retrieve data from vector
+ $block = vec($vec, $offset + $BR_BLOCK, $BR_VEC_WIDTH);
+ $branch = vec($vec, $offset + $BR_BRANCH, $BR_VEC_WIDTH);
+ $taken = vec($vec, $offset + $BR_TAKEN, $BR_VEC_WIDTH);
+
+ # Decode taken value from an integer
+ $taken = br_num_to_taken($taken);
+
+ return ($block, $branch, $taken);
+}
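+
+# Illustrative usage (not part of the original source): entries are keyed by
+# (block, branch), so pushing the same branch twice adds the taken counts:
+#   my $vec = br_ivec_push(undef, 0, 1, 2);
+#   $vec = br_ivec_push($vec, 0, 1, 3);
+#   br_ivec_len($vec) == 1;
+#   my ($block, $branch, $taken) = br_ivec_get($vec, 0);   # (0, 1, 5)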
+
+
+#
+# get_br_found_and_hit(brcount)
+#
+# Return (br_found, br_hit) for brcount
+#
+
+sub get_br_found_and_hit($)
+{
+ my ($brcount) = @_;
+ my $line;
+ my $br_found = 0;
+ my $br_hit = 0;
+
+ foreach $line (keys(%{$brcount})) {
+ my $brdata = $brcount->{$line};
+ my $i;
+ my $num = br_ivec_len($brdata);
+
+ for ($i = 0; $i < $num; $i++) {
+ my $taken;
+
+ (undef, undef, $taken) = br_ivec_get($brdata, $i);
+
+ $br_found++;
+ $br_hit++ if ($taken ne "-" && $taken > 0);
+ }
+ }
+
+ return ($br_found, $br_hit);
+}
+
+
+#
+# read_info_file(info_filename)
+#
+# Read in the contents of the .info file specified by INFO_FILENAME. Data will
+# be returned as a reference to a hash containing the following mappings:
+#
+# %result: for each filename found in file -> \%data
+#
+# %data: "test" -> \%testdata
+# "sum" -> \%sumcount
+# "func" -> \%funcdata
+# "found" -> $lines_found (number of instrumented lines found in file)
+# "hit" -> $lines_hit (number of executed lines in file)
+# "check" -> \%checkdata
+# "testfnc" -> \%testfncdata
+# "sumfnc" -> \%sumfnccount
+# "testbr" -> \%testbrdata
+# "sumbr" -> \%sumbrcount
+#
+# %testdata : name of test affecting this file -> \%testcount
+# %testfncdata: name of test affecting this file -> \%testfnccount
+# %testbrdata: name of test affecting this file -> \%testbrcount
+#
+# %testcount : line number -> execution count for a single test
+# %testfnccount: function name -> execution count for a single test
+# %testbrcount : line number -> branch coverage data for a single test
+# %sumcount : line number -> execution count for all tests
+# %sumfnccount : function name -> execution count for all tests
+# %sumbrcount : line number -> branch coverage data for all tests
+# %funcdata : function name -> line number
+# %checkdata : line number -> checksum of source code line
+# $brdata : vector of items: block, branch, taken
+#
+# Note that .info file sections referring to the same file and test name
+# will automatically be combined by adding all execution counts.
+#
+# Note that if INFO_FILENAME ends with ".gz", it is assumed that the file
+# is compressed using GZIP. If available, GUNZIP will be used to decompress
+# this file.
+#
+# Die on error.
+#
+
+sub read_info_file($)
+{
+ my $tracefile = $_[0]; # Name of tracefile
+ my %result; # Resulting hash: file -> data
+ my $data; # Data handle for current entry
+ my $testdata; # " "
+ my $testcount; # " "
+ my $sumcount; # " "
+ my $funcdata; # " "
+ my $checkdata; # " "
+ my $testfncdata;
+ my $testfnccount;
+ my $sumfnccount;
+ my $testbrdata;
+ my $testbrcount;
+ my $sumbrcount;
+ my $line; # Current line read from .info file
+ my $testname; # Current test name
+ my $filename; # Current filename
+ my $hitcount; # Count for lines hit
+ my $count; # Execution count of current line
+ my $negative; # If set, warn about negative counts
+ my $changed_testname; # If set, warn about changed testname
+ my $line_checksum; # Checksum of current line
+ local *INFO_HANDLE; # Filehandle for .info file
+
+ info("Reading tracefile $tracefile\n");
+
+ # Check if file exists and is readable
+ stat($_[0]);
+ if (!(-r _))
+ {
+ die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ # Check if this is really a plain file
+ if (!(-f _))
+ {
+ die("ERROR: not a plain file: $_[0]!\n");
+ }
+
+ # Check for .gz extension
+ if ($_[0] =~ /\.gz$/)
+ {
+ # Check for availability of GZIP tool
+ system_no_output(1, "gunzip" ,"-h")
+ and die("ERROR: gunzip command not available!\n");
+
+ # Check integrity of compressed file
+ system_no_output(1, "gunzip", "-t", $_[0])
+ and die("ERROR: integrity check failed for ".
+ "compressed file $_[0]!\n");
+
+ # Open compressed file
+ open(INFO_HANDLE, "gunzip -c $_[0]|")
+ or die("ERROR: cannot start gunzip to decompress ".
+ "file $_[0]!\n");
+ }
+ else
+ {
+ # Open decompressed file
+ open(INFO_HANDLE, $_[0])
+ or die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ $testname = "";
+ while (<INFO_HANDLE>)
+ {
+ chomp($_);
+ $line = $_;
+
+ # Switch statement
+ foreach ($line)
+ {
+ /^TN:([^,]*)(,diff)?/ && do
+ {
+ # Test name information found
+ $testname = defined($1) ? $1 : "";
+ if ($testname =~ s/\W/_/g)
+ {
+ $changed_testname = 1;
+ }
+ $testname .= $2 if (defined($2));
+ last;
+ };
+
+ /^[SK]F:(.*)/ && do
+ {
+ # Filename information found
+ # Retrieve data for new entry
+ $filename = $1;
+
+ $data = $result{$filename};
+ ($testdata, $sumcount, $funcdata, $checkdata,
+ $testfncdata, $sumfnccount, $testbrdata,
+ $sumbrcount) =
+ get_info_entry($data);
+
+ if (defined($testname))
+ {
+ $testcount = $testdata->{$testname};
+ $testfnccount = $testfncdata->{$testname};
+ $testbrcount = $testbrdata->{$testname};
+ }
+ else
+ {
+ $testcount = {};
+ $testfnccount = {};
+ $testbrcount = {};
+ }
+ last;
+ };
+
+ /^DA:(\d+),(-?\d+)(,[^,\s]+)?/ && do
+ {
+ # Fix negative counts
+ $count = $2 < 0 ? 0 : $2;
+ if ($2 < 0)
+ {
+ $negative = 1;
+ }
+ # Execution count found, add to structure
+ # Add summary counts
+ $sumcount->{$1} += $count;
+
+ # Add test-specific counts
+ if (defined($testname))
+ {
+ $testcount->{$1} += $count;
+ }
+
+ # Store line checksum if available
+ if (defined($3))
+ {
+ $line_checksum = substr($3, 1);
+
+					# Does it match a previous definition?
+ if (defined($checkdata->{$1}) &&
+ ($checkdata->{$1} ne
+ $line_checksum))
+ {
+ die("ERROR: checksum mismatch ".
+ "at $filename:$1\n");
+ }
+
+ $checkdata->{$1} = $line_checksum;
+ }
+ last;
+ };
+
+ /^FN:(\d+),([^,]+)/ && do
+ {
+ # Function data found, add to structure
+ $funcdata->{$2} = $1;
+
+ # Also initialize function call data
+ if (!defined($sumfnccount->{$2})) {
+ $sumfnccount->{$2} = 0;
+ }
+ if (defined($testname))
+ {
+ if (!defined($testfnccount->{$2})) {
+ $testfnccount->{$2} = 0;
+ }
+ }
+ last;
+ };
+
+ /^FNDA:(\d+),([^,]+)/ && do
+ {
+ # Function call count found, add to structure
+ # Add summary counts
+ $sumfnccount->{$2} += $1;
+
+ # Add test-specific counts
+ if (defined($testname))
+ {
+ $testfnccount->{$2} += $1;
+ }
+ last;
+ };
+
+ /^BRDA:(\d+),(\d+),(\d+),(\d+|-)/ && do {
+ # Branch coverage data found
+ my ($line, $block, $branch, $taken) =
+ ($1, $2, $3, $4);
+
+ $sumbrcount->{$line} =
+ br_ivec_push($sumbrcount->{$line},
+ $block, $branch, $taken);
+
+ # Add test-specific counts
+ if (defined($testname)) {
+ $testbrcount->{$line} =
+ br_ivec_push(
+ $testbrcount->{$line},
+ $block, $branch,
+ $taken);
+ }
+ last;
+ };
+
+ /^end_of_record/ && do
+ {
+ # Found end of section marker
+ if ($filename)
+ {
+ # Store current section data
+ if (defined($testname))
+ {
+ $testdata->{$testname} =
+ $testcount;
+ $testfncdata->{$testname} =
+ $testfnccount;
+ $testbrdata->{$testname} =
+ $testbrcount;
+ }
+
+ set_info_entry($data, $testdata,
+ $sumcount, $funcdata,
+ $checkdata, $testfncdata,
+ $sumfnccount,
+ $testbrdata,
+ $sumbrcount);
+ $result{$filename} = $data;
+ last;
+ }
+ };
+
+ # default
+ last;
+ }
+ }
+ close(INFO_HANDLE);
+
+ # Calculate hit and found values for lines and functions of each file
+ foreach $filename (keys(%result))
+ {
+ $data = $result{$filename};
+
+ ($testdata, $sumcount, undef, undef, $testfncdata,
+ $sumfnccount, $testbrdata, $sumbrcount) =
+ get_info_entry($data);
+
+ # Filter out empty files
+ if (scalar(keys(%{$sumcount})) == 0)
+ {
+ delete($result{$filename});
+ next;
+ }
+ # Filter out empty test cases
+ foreach $testname (keys(%{$testdata}))
+ {
+ if (!defined($testdata->{$testname}) ||
+ scalar(keys(%{$testdata->{$testname}})) == 0)
+ {
+ delete($testdata->{$testname});
+ delete($testfncdata->{$testname});
+ }
+ }
+
+ $data->{"found"} = scalar(keys(%{$sumcount}));
+ $hitcount = 0;
+
+ foreach (keys(%{$sumcount}))
+ {
+ if ($sumcount->{$_} > 0) { $hitcount++; }
+ }
+
+ $data->{"hit"} = $hitcount;
+
+ # Get found/hit values for function call data
+ $data->{"f_found"} = scalar(keys(%{$sumfnccount}));
+ $hitcount = 0;
+
+ foreach (keys(%{$sumfnccount})) {
+ if ($sumfnccount->{$_} > 0) {
+ $hitcount++;
+ }
+ }
+ $data->{"f_hit"} = $hitcount;
+
+ # Get found/hit values for branch data
+ {
+ my ($br_found, $br_hit) = get_br_found_and_hit($sumbrcount);
+
+ $data->{"b_found"} = $br_found;
+ $data->{"b_hit"} = $br_hit;
+ }
+ }
+
+ if (scalar(keys(%result)) == 0)
+ {
+ die("ERROR: no valid records found in tracefile $tracefile\n");
+ }
+ if ($negative)
+ {
+ warn("WARNING: negative counts found in tracefile ".
+ "$tracefile\n");
+ }
+ if ($changed_testname)
+ {
+ warn("WARNING: invalid characters removed from testname in ".
+ "tracefile $tracefile\n");
+ }
+
+ return(\%result);
+}
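+
+# Illustrative access pattern (filenames are hypothetical):
+#   my $info = read_info_file("app.info");
+#   my $data = $info->{"src/main.c"};
+#   my $sumcount = $data->{"sum"};     # line number -> total execution count
+#   my $hit = $data->{"hit"};          # number of executed lines in the file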
+
+
+#
+# get_info_entry(hash_ref)
+#
+# Retrieve data from an entry of the structure generated by read_info_file().
+# Return a list of references to hashes:
+# (test data hash ref, sum count hash ref, funcdata hash ref, checkdata hash
+# ref, testfncdata hash ref, sumfnccount hash ref, testbrdata hash ref,
+# sumbrcount hash ref, lines found, lines hit, functions found,
+# functions hit, branches found, branches hit)
+#
+
+sub get_info_entry($)
+{
+ my $testdata_ref = $_[0]->{"test"};
+ my $sumcount_ref = $_[0]->{"sum"};
+ my $funcdata_ref = $_[0]->{"func"};
+ my $checkdata_ref = $_[0]->{"check"};
+ my $testfncdata = $_[0]->{"testfnc"};
+ my $sumfnccount = $_[0]->{"sumfnc"};
+ my $testbrdata = $_[0]->{"testbr"};
+ my $sumbrcount = $_[0]->{"sumbr"};
+ my $lines_found = $_[0]->{"found"};
+ my $lines_hit = $_[0]->{"hit"};
+ my $f_found = $_[0]->{"f_found"};
+ my $f_hit = $_[0]->{"f_hit"};
+ my $br_found = $_[0]->{"b_found"};
+ my $br_hit = $_[0]->{"b_hit"};
+
+ return ($testdata_ref, $sumcount_ref, $funcdata_ref, $checkdata_ref,
+ $testfncdata, $sumfnccount, $testbrdata, $sumbrcount,
+ $lines_found, $lines_hit, $f_found, $f_hit,
+ $br_found, $br_hit);
+}
+
+
+#
+# set_info_entry(hash_ref, testdata_ref, sumcount_ref, funcdata_ref,
+#                checkdata_ref, testfncdata_ref, sumfnccount_ref,
+#                testbrdata_ref, sumbrcount_ref[, lines_found,
+#                lines_hit, f_found, f_hit, b_found, b_hit])
+#
+# Update the hash referenced by HASH_REF with the provided data references.
+#
+
+sub set_info_entry($$$$$$$$$;$$$$$$)
+{
+ my $data_ref = $_[0];
+
+ $data_ref->{"test"} = $_[1];
+ $data_ref->{"sum"} = $_[2];
+ $data_ref->{"func"} = $_[3];
+ $data_ref->{"check"} = $_[4];
+ $data_ref->{"testfnc"} = $_[5];
+ $data_ref->{"sumfnc"} = $_[6];
+ $data_ref->{"testbr"} = $_[7];
+ $data_ref->{"sumbr"} = $_[8];
+
+ if (defined($_[9])) { $data_ref->{"found"} = $_[9]; }
+ if (defined($_[10])) { $data_ref->{"hit"} = $_[10]; }
+ if (defined($_[11])) { $data_ref->{"f_found"} = $_[11]; }
+ if (defined($_[12])) { $data_ref->{"f_hit"} = $_[12]; }
+ if (defined($_[13])) { $data_ref->{"b_found"} = $_[13]; }
+ if (defined($_[14])) { $data_ref->{"b_hit"} = $_[14]; }
+}
+
+
+#
+# add_counts(data1_ref, data2_ref)
+#
+# DATA1_REF and DATA2_REF are references to hashes containing a mapping
+#
+# line number -> execution count
+#
+# Return a list (RESULT_REF, LINES_FOUND, LINES_HIT) where RESULT_REF
+# is a reference to a hash containing the combined mapping in which
+# execution counts are added.
+#
+
+sub add_counts($$)
+{
+ my %data1 = %{$_[0]}; # Hash 1
+ my %data2 = %{$_[1]}; # Hash 2
+ my %result; # Resulting hash
+ my $line; # Current line iteration scalar
+ my $data1_count; # Count of line in hash1
+ my $data2_count; # Count of line in hash2
+ my $found = 0; # Total number of lines found
+ my $hit = 0; # Number of lines with a count > 0
+
+ foreach $line (keys(%data1))
+ {
+ $data1_count = $data1{$line};
+ $data2_count = $data2{$line};
+
+ # Add counts if present in both hashes
+ if (defined($data2_count)) { $data1_count += $data2_count; }
+
+ # Store sum in %result
+ $result{$line} = $data1_count;
+
+ $found++;
+ if ($data1_count > 0) { $hit++; }
+ }
+
+ # Add lines unique to data2
+ foreach $line (keys(%data2))
+ {
+ # Skip lines already in data1
+ if (defined($data1{$line})) { next; }
+
+ # Copy count from data2
+ $result{$line} = $data2{$line};
+
+ $found++;
+ if ($result{$line} > 0) { $hit++; }
+ }
+
+ return (\%result, $found, $hit);
+}
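+
+# Worked example (illustrative only):
+#   add_counts({1 => 2, 2 => 0}, {2 => 3, 3 => 1})
+# returns ({1 => 2, 2 => 3, 3 => 1}, 3, 3): three instrumented lines were
+# found, and all three have a non-zero combined count.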
+
+
+#
+# merge_checksums(ref1, ref2, filename)
+#
+# REF1 and REF2 are references to hashes containing a mapping
+#
+# line number -> checksum
+#
+# Merge checksum lists defined in REF1 and REF2 and return reference to
+# resulting hash. Die if a checksum for a line is defined in both hashes
+# but does not match.
+#
+
+sub merge_checksums($$$)
+{
+ my $ref1 = $_[0];
+ my $ref2 = $_[1];
+ my $filename = $_[2];
+ my %result;
+ my $line;
+
+ foreach $line (keys(%{$ref1}))
+ {
+ if (defined($ref2->{$line}) &&
+ ($ref1->{$line} ne $ref2->{$line}))
+ {
+ die("ERROR: checksum mismatch at $filename:$line\n");
+ }
+ $result{$line} = $ref1->{$line};
+ }
+
+ foreach $line (keys(%{$ref2}))
+ {
+ $result{$line} = $ref2->{$line};
+ }
+
+ return \%result;
+}
+
+
+#
+# merge_func_data(funcdata1, funcdata2, filename)
+#
+
+sub merge_func_data($$$)
+{
+ my ($funcdata1, $funcdata2, $filename) = @_;
+ my %result;
+ my $func;
+
+ if (defined($funcdata1)) {
+ %result = %{$funcdata1};
+ }
+
+ foreach $func (keys(%{$funcdata2})) {
+ my $line1 = $result{$func};
+ my $line2 = $funcdata2->{$func};
+
+ if (defined($line1) && ($line1 != $line2)) {
+ warn("WARNING: function data mismatch at ".
+ "$filename:$line2\n");
+ next;
+ }
+ $result{$func} = $line2;
+ }
+
+ return \%result;
+}
+
+
+#
+# add_fnccount(fnccount1, fnccount2)
+#
+# Add function call count data. Return list (fnccount_added, f_found, f_hit)
+#
+
+sub add_fnccount($$)
+{
+ my ($fnccount1, $fnccount2) = @_;
+ my %result;
+ my $f_found;
+ my $f_hit;
+ my $function;
+
+ if (defined($fnccount1)) {
+ %result = %{$fnccount1};
+ }
+ foreach $function (keys(%{$fnccount2})) {
+ $result{$function} += $fnccount2->{$function};
+ }
+ $f_found = scalar(keys(%result));
+ $f_hit = 0;
+ foreach $function (keys(%result)) {
+ if ($result{$function} > 0) {
+ $f_hit++;
+ }
+ }
+
+ return (\%result, $f_found, $f_hit);
+}
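+
+# Worked example (illustrative only):
+#   add_fnccount({foo => 1}, {foo => 2, bar => 0})
+# returns ({foo => 3, bar => 0}, 2, 1): two functions found, one of which
+# has a non-zero call count.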
+
+#
+# add_testfncdata(testfncdata1, testfncdata2)
+#
+# Add function call count data for several tests. Return reference to
+# added_testfncdata.
+#
+
+sub add_testfncdata($$)
+{
+ my ($testfncdata1, $testfncdata2) = @_;
+ my %result;
+ my $testname;
+
+ foreach $testname (keys(%{$testfncdata1})) {
+ if (defined($testfncdata2->{$testname})) {
+ my $fnccount;
+
+ # Function call count data for this testname exists
+ # in both data sets: merge
+ ($fnccount) = add_fnccount(
+ $testfncdata1->{$testname},
+ $testfncdata2->{$testname});
+ $result{$testname} = $fnccount;
+ next;
+ }
+ # Function call count data for this testname is unique to
+ # data set 1: copy
+ $result{$testname} = $testfncdata1->{$testname};
+ }
+
+ # Add count data for testnames unique to data set 2
+ foreach $testname (keys(%{$testfncdata2})) {
+ if (!defined($result{$testname})) {
+ $result{$testname} = $testfncdata2->{$testname};
+ }
+ }
+ return \%result;
+}
+
+
+#
+# brcount_to_db(brcount)
+#
+# Convert brcount data to the following format:
+#
+# db: line number -> block hash
+# block hash: block number -> branch hash
+# branch hash: branch number -> taken value
+#
+
+sub brcount_to_db($)
+{
+ my ($brcount) = @_;
+ my $line;
+ my $db;
+
+ # Add branches from first count to database
+ foreach $line (keys(%{$brcount})) {
+ my $brdata = $brcount->{$line};
+ my $i;
+ my $num = br_ivec_len($brdata);
+
+ for ($i = 0; $i < $num; $i++) {
+ my ($block, $branch, $taken) = br_ivec_get($brdata, $i);
+
+ $db->{$line}->{$block}->{$branch} = $taken;
+ }
+ }
+
+ return $db;
+}
+
+
+#
+# db_to_brcount(db)
+#
+# Convert branch coverage data back to brcount format.
+#
+
+sub db_to_brcount($)
+{
+ my ($db) = @_;
+ my $line;
+ my $brcount = {};
+ my $br_found = 0;
+ my $br_hit = 0;
+
+ # Convert database back to brcount format
+ foreach $line (sort({$a <=> $b} keys(%{$db}))) {
+ my $ldata = $db->{$line};
+ my $brdata;
+ my $block;
+
+ foreach $block (sort({$a <=> $b} keys(%{$ldata}))) {
+ my $bdata = $ldata->{$block};
+ my $branch;
+
+ foreach $branch (sort({$a <=> $b} keys(%{$bdata}))) {
+ my $taken = $bdata->{$branch};
+
+ $br_found++;
+ $br_hit++ if ($taken ne "-" && $taken > 0);
+ $brdata = br_ivec_push($brdata, $block,
+ $branch, $taken);
+ }
+ }
+ $brcount->{$line} = $brdata;
+ }
+
+ return ($brcount, $br_found, $br_hit);
+}
+
+
+#
+# combine_brcount(brcount1, brcount2, type)
+#
+# If type is $BR_ADD, add branch coverage data and return the list
+# (brcount_added, br_found, br_hit). If type is $BR_SUB, subtract the taken
+# values of brcount2 from brcount1 and return (brcount_sub, br_found, br_hit).
+#
+
+sub combine_brcount($$$)
+{
+ my ($brcount1, $brcount2, $type) = @_;
+ my $line;
+ my $block;
+ my $branch;
+ my $taken;
+ my $db;
+ my $br_found = 0;
+ my $br_hit = 0;
+ my $result;
+
+ # Convert branches from first count to database
+ $db = brcount_to_db($brcount1);
+ # Combine values from database and second count
+ foreach $line (keys(%{$brcount2})) {
+ my $brdata = $brcount2->{$line};
+ my $num = br_ivec_len($brdata);
+ my $i;
+
+ for ($i = 0; $i < $num; $i++) {
+ ($block, $branch, $taken) = br_ivec_get($brdata, $i);
+ my $new_taken = $db->{$line}->{$block}->{$branch};
+
+ if ($type == $BR_ADD) {
+ $new_taken = br_taken_add($new_taken, $taken);
+ } elsif ($type == $BR_SUB) {
+ $new_taken = br_taken_sub($new_taken, $taken);
+ }
+ $db->{$line}->{$block}->{$branch} = $new_taken
+ if (defined($new_taken));
+ }
+ }
+ # Convert database back to brcount format
+ ($result, $br_found, $br_hit) = db_to_brcount($db);
+
+ return ($result, $br_found, $br_hit);
+}
+
+
+#
+# add_testbrdata(testbrdata1, testbrdata2)
+#
+# Add branch coverage data for several tests. Return reference to
+# added_testbrdata.
+#
+
+sub add_testbrdata($$)
+{
+ my ($testbrdata1, $testbrdata2) = @_;
+ my %result;
+ my $testname;
+
+ foreach $testname (keys(%{$testbrdata1})) {
+ if (defined($testbrdata2->{$testname})) {
+ my $brcount;
+
+ # Branch coverage data for this testname exists
+ # in both data sets: add
+ ($brcount) = combine_brcount(
+ $testbrdata1->{$testname},
+ $testbrdata2->{$testname}, $BR_ADD);
+ $result{$testname} = $brcount;
+ next;
+ }
+ # Branch coverage data for this testname is unique to
+ # data set 1: copy
+ $result{$testname} = $testbrdata1->{$testname};
+ }
+
+ # Add count data for testnames unique to data set 2
+ foreach $testname (keys(%{$testbrdata2})) {
+ if (!defined($result{$testname})) {
+ $result{$testname} = $testbrdata2->{$testname};
+ }
+ }
+ return \%result;
+}
+
+
+#
+# combine_info_entries(entry_ref1, entry_ref2, filename)
+#
+# Combine .info data entry hashes referenced by ENTRY_REF1 and ENTRY_REF2.
+# Return reference to resulting hash.
+#
+
+sub combine_info_entries($$$)
+{
+ my $entry1 = $_[0]; # Reference to hash containing first entry
+ my $testdata1;
+ my $sumcount1;
+ my $funcdata1;
+ my $checkdata1;
+ my $testfncdata1;
+ my $sumfnccount1;
+ my $testbrdata1;
+ my $sumbrcount1;
+
+ my $entry2 = $_[1]; # Reference to hash containing second entry
+ my $testdata2;
+ my $sumcount2;
+ my $funcdata2;
+ my $checkdata2;
+ my $testfncdata2;
+ my $sumfnccount2;
+ my $testbrdata2;
+ my $sumbrcount2;
+
+ my %result; # Hash containing combined entry
+ my %result_testdata;
+ my $result_sumcount = {};
+ my $result_funcdata;
+ my $result_testfncdata;
+ my $result_sumfnccount;
+ my $result_testbrdata;
+ my $result_sumbrcount;
+ my $lines_found;
+ my $lines_hit;
+ my $f_found;
+ my $f_hit;
+ my $br_found;
+ my $br_hit;
+
+ my $testname;
+ my $filename = $_[2];
+
+ # Retrieve data
+ ($testdata1, $sumcount1, $funcdata1, $checkdata1, $testfncdata1,
+ $sumfnccount1, $testbrdata1, $sumbrcount1) = get_info_entry($entry1);
+ ($testdata2, $sumcount2, $funcdata2, $checkdata2, $testfncdata2,
+ $sumfnccount2, $testbrdata2, $sumbrcount2) = get_info_entry($entry2);
+
+ # Merge checksums
+ $checkdata1 = merge_checksums($checkdata1, $checkdata2, $filename);
+
+ # Combine funcdata
+ $result_funcdata = merge_func_data($funcdata1, $funcdata2, $filename);
+
+ # Combine function call count data
+ $result_testfncdata = add_testfncdata($testfncdata1, $testfncdata2);
+ ($result_sumfnccount, $f_found, $f_hit) =
+ add_fnccount($sumfnccount1, $sumfnccount2);
+
+ # Combine branch coverage data
+ $result_testbrdata = add_testbrdata($testbrdata1, $testbrdata2);
+ ($result_sumbrcount, $br_found, $br_hit) =
+ combine_brcount($sumbrcount1, $sumbrcount2, $BR_ADD);
+
+ # Combine testdata
+ foreach $testname (keys(%{$testdata1}))
+ {
+ if (defined($testdata2->{$testname}))
+ {
+ # testname is present in both entries, requires
+ # combination
+ ($result_testdata{$testname}) =
+ add_counts($testdata1->{$testname},
+ $testdata2->{$testname});
+ }
+ else
+ {
+ # testname only present in entry1, add to result
+ $result_testdata{$testname} = $testdata1->{$testname};
+ }
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ foreach $testname (keys(%{$testdata2}))
+ {
+ # Skip testnames already covered by previous iteration
+ if (defined($testdata1->{$testname})) { next; }
+
+ # testname only present in entry2, add to result hash
+ $result_testdata{$testname} = $testdata2->{$testname};
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ # Store result
+ set_info_entry(\%result, \%result_testdata, $result_sumcount,
+ $result_funcdata, $checkdata1, $result_testfncdata,
+ $result_sumfnccount, $result_testbrdata,
+ $result_sumbrcount, $lines_found, $lines_hit,
+ $f_found, $f_hit, $br_found, $br_hit);
+
+ return(\%result);
+}
+
+
+#
+# combine_info_files(info_ref1, info_ref2)
+#
+# Combine .info data in hashes referenced by INFO_REF1 and INFO_REF2. Return
+# reference to resulting hash.
+#
+
+sub combine_info_files($$)
+{
+ my %hash1 = %{$_[0]};
+ my %hash2 = %{$_[1]};
+ my $filename;
+
+ foreach $filename (keys(%hash2))
+ {
+ if ($hash1{$filename})
+ {
+ # Entry already exists in hash1, combine them
+ $hash1{$filename} =
+ combine_info_entries($hash1{$filename},
+ $hash2{$filename},
+ $filename);
+ }
+ else
+ {
+			# Entry exists only in second hash, simply add
+			# it to the resulting hash
+ $hash1{$filename} = $hash2{$filename};
+ }
+ }
+
+ return(\%hash1);
+}
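A minimal Python sketch of the same per-filename merge, with a toy entry-combining callback standing in for combine_info_entries (names are illustrative, not lcov APIs):

```python
def combine_info_files(info1, info2, combine_entry):
    # Entries for filenames present in both hashes are merged with
    # combine_entry(entry1, entry2, filename); unique entries are copied.
    result = dict(info1)
    for filename, entry in info2.items():
        if filename in result:
            result[filename] = combine_entry(result[filename], entry, filename)
        else:
            result[filename] = entry
    return result

def add_line_counts(e1, e2, filename):
    # Toy per-file entry merge: add execution counts line by line
    return {ln: e1.get(ln, 0) + e2.get(ln, 0) for ln in set(e1) | set(e2)}
```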
+
+
+#
+# add_traces()
+#
+
+sub add_traces()
+{
+ my $total_trace;
+ my $current_trace;
+ my $tracefile;
+ my @result;
+ local *INFO_HANDLE;
+
+ info("Combining tracefiles.\n");
+
+ foreach $tracefile (@add_tracefile)
+ {
+ $current_trace = read_info_file($tracefile);
+ if ($total_trace)
+ {
+ $total_trace = combine_info_files($total_trace,
+ $current_trace);
+ }
+ else
+ {
+ $total_trace = $current_trace;
+ }
+ }
+
+ # Write combined data
+ if ($to_file)
+ {
+ info("Writing data to $output_filename\n");
+ open(INFO_HANDLE, ">$output_filename")
+ or die("ERROR: cannot write to $output_filename!\n");
+ @result = write_info_file(*INFO_HANDLE, $total_trace);
+ close(*INFO_HANDLE);
+ }
+ else
+ {
+ @result = write_info_file(*STDOUT, $total_trace);
+ }
+
+ return @result;
+}
+
+
+#
+# write_info_file(filehandle, data)
+#
+
+sub write_info_file(*$)
+{
+ local *INFO_HANDLE = $_[0];
+ my %data = %{$_[1]};
+ my $source_file;
+ my $entry;
+ my $testdata;
+ my $sumcount;
+ my $funcdata;
+ my $checkdata;
+ my $testfncdata;
+ my $sumfnccount;
+ my $testbrdata;
+ my $sumbrcount;
+ my $testname;
+ my $line;
+ my $func;
+ my $testcount;
+ my $testfnccount;
+ my $testbrcount;
+ my $found;
+ my $hit;
+ my $f_found;
+ my $f_hit;
+ my $br_found;
+ my $br_hit;
+ my $ln_total_found = 0;
+ my $ln_total_hit = 0;
+ my $fn_total_found = 0;
+ my $fn_total_hit = 0;
+ my $br_total_found = 0;
+ my $br_total_hit = 0;
+
+ foreach $source_file (sort(keys(%data)))
+ {
+ $entry = $data{$source_file};
+ ($testdata, $sumcount, $funcdata, $checkdata, $testfncdata,
+ $sumfnccount, $testbrdata, $sumbrcount, $found, $hit,
+ $f_found, $f_hit, $br_found, $br_hit) =
+ get_info_entry($entry);
+
+ # Add to totals
+ $ln_total_found += $found;
+ $ln_total_hit += $hit;
+ $fn_total_found += $f_found;
+ $fn_total_hit += $f_hit;
+ $br_total_found += $br_found;
+ $br_total_hit += $br_hit;
+
+ foreach $testname (sort(keys(%{$testdata})))
+ {
+ $testcount = $testdata->{$testname};
+ $testfnccount = $testfncdata->{$testname};
+ $testbrcount = $testbrdata->{$testname};
+ $found = 0;
+ $hit = 0;
+
+ print(INFO_HANDLE "TN:$testname\n");
+ print(INFO_HANDLE "SF:$source_file\n");
+
+ # Write function related data
+ foreach $func (
+ sort({$funcdata->{$a} <=> $funcdata->{$b}}
+ keys(%{$funcdata})))
+ {
+ print(INFO_HANDLE "FN:".$funcdata->{$func}.
+ ",$func\n");
+ }
+ foreach $func (keys(%{$testfnccount})) {
+ print(INFO_HANDLE "FNDA:".
+ $testfnccount->{$func}.
+ ",$func\n");
+ }
+ ($f_found, $f_hit) =
+ get_func_found_and_hit($testfnccount);
+ print(INFO_HANDLE "FNF:$f_found\n");
+ print(INFO_HANDLE "FNH:$f_hit\n");
+
+ # Write branch related data
+ $br_found = 0;
+ $br_hit = 0;
+ foreach $line (sort({$a <=> $b}
+ keys(%{$testbrcount}))) {
+ my $brdata = $testbrcount->{$line};
+ my $num = br_ivec_len($brdata);
+ my $i;
+
+ for ($i = 0; $i < $num; $i++) {
+ my ($block, $branch, $taken) =
+ br_ivec_get($brdata, $i);
+
+ print(INFO_HANDLE "BRDA:$line,$block,".
+ "$branch,$taken\n");
+ $br_found++;
+ $br_hit++ if ($taken ne '-' &&
+ $taken > 0);
+ }
+ }
+ if ($br_found > 0) {
+ print(INFO_HANDLE "BRF:$br_found\n");
+ print(INFO_HANDLE "BRH:$br_hit\n");
+ }
+
+ # Write line related data
+ foreach $line (sort({$a <=> $b} keys(%{$testcount})))
+ {
+ print(INFO_HANDLE "DA:$line,".
+ $testcount->{$line}.
+ (defined($checkdata->{$line}) &&
+ $checksum ?
+ ",".$checkdata->{$line} : "")."\n");
+ $found++;
+ if ($testcount->{$line} > 0)
+ {
+ $hit++;
+ }
+
+ }
+ print(INFO_HANDLE "LF:$found\n");
+ print(INFO_HANDLE "LH:$hit\n");
+ print(INFO_HANDLE "end_of_record\n");
+ }
+ }
+
+ return ($ln_total_found, $ln_total_hit, $fn_total_found, $fn_total_hit,
+ $br_total_found, $br_total_hit);
+}
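The record layout emitted above (TN/SF headers, per-line DA entries, LF/LH summaries, end_of_record terminator) can be illustrated with a simplified Python writer that handles line data only; function (FN/FNDA) and branch (BRDA) records are omitted for brevity:

```python
def write_line_records(data):
    # data: {source file: {testname: {line number: execution count}}}
    out = []
    for source_file in sorted(data):
        for testname in sorted(data[source_file]):
            counts = data[source_file][testname]
            out.append(f"TN:{testname}")
            out.append(f"SF:{source_file}")
            for line in sorted(counts):
                out.append(f"DA:{line},{counts[line]}")
            out.append(f"LF:{len(counts)}")  # lines found
            out.append(f"LH:{sum(1 for c in counts.values() if c > 0)}")  # lines hit
            out.append("end_of_record")
    return "\n".join(out) + "\n"
```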
+
+
+#
+# transform_pattern(pattern)
+#
+# Transform a shell wildcard expression into an equivalent Perl regular expression.
+# Return transformed pattern.
+#
+
+sub transform_pattern($)
+{
+ my $pattern = $_[0];
+
+ # Escape special chars
+
+ $pattern =~ s/\\/\\\\/g;
+ $pattern =~ s/\//\\\//g;
+ $pattern =~ s/\^/\\\^/g;
+ $pattern =~ s/\$/\\\$/g;
+ $pattern =~ s/\(/\\\(/g;
+ $pattern =~ s/\)/\\\)/g;
+ $pattern =~ s/\[/\\\[/g;
+ $pattern =~ s/\]/\\\]/g;
+ $pattern =~ s/\{/\\\{/g;
+ $pattern =~ s/\}/\\\}/g;
+ $pattern =~ s/\./\\\./g;
+ $pattern =~ s/\,/\\\,/g;
+ $pattern =~ s/\|/\\\|/g;
+ $pattern =~ s/\+/\\\+/g;
+ $pattern =~ s/\!/\\\!/g;
+
+ # Transform ? => (.) and * => (.*)
+
+ $pattern =~ s/\*/\(\.\*\)/g;
+ $pattern =~ s/\?/\(\.\)/g;
+
+ return $pattern;
+}
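The same glob-to-regex transformation, sketched in Python for illustration (the function name is hypothetical): escape the listed metacharacters, then map `*` to `(.*)` and `?` to `(.)`:

```python
import re

def glob_to_regex(pattern):
    # Escape the same metacharacters as the Perl code (backslash first,
    # so later escapes are not double-escaped), then transform wildcards
    for ch in '\\/^$()[]{}.,|+!':
        pattern = pattern.replace(ch, '\\' + ch)
    return pattern.replace('*', '(.*)').replace('?', '(.)')
```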
+
+
+#
+# extract()
+#
+
+sub extract()
+{
+ my $data = read_info_file($extract);
+ my $filename;
+ my $keep;
+ my $pattern;
+ my @pattern_list;
+ my $extracted = 0;
+ my @result;
+ local *INFO_HANDLE;
+
+	# Need Perl regular expressions instead of shell patterns
+ @pattern_list = map({ transform_pattern($_); } @ARGV);
+
+ # Filter out files which do not match any pattern
+ foreach $filename (sort(keys(%{$data})))
+ {
+ $keep = 0;
+
+ foreach $pattern (@pattern_list)
+ {
+ $keep ||= ($filename =~ (/^$pattern$/));
+ }
+
+
+ if (!$keep)
+ {
+ delete($data->{$filename});
+ }
+ else
+ {
+			info("Extracting $filename\n");
+			$extracted++;
+ }
+ }
+
+ # Write extracted data
+ if ($to_file)
+ {
+ info("Extracted $extracted files\n");
+ info("Writing data to $output_filename\n");
+ open(INFO_HANDLE, ">$output_filename")
+ or die("ERROR: cannot write to $output_filename!\n");
+ @result = write_info_file(*INFO_HANDLE, $data);
+ close(*INFO_HANDLE);
+ }
+ else
+ {
+ @result = write_info_file(*STDOUT, $data);
+ }
+
+ return @result;
+}
+
+
+#
+# remove()
+#
+
+sub remove()
+{
+ my $data = read_info_file($remove);
+ my $filename;
+ my $match_found;
+ my $pattern;
+ my @pattern_list;
+ my $removed = 0;
+ my @result;
+ local *INFO_HANDLE;
+
+	# Need Perl regular expressions instead of shell patterns
+ @pattern_list = map({ transform_pattern($_); } @ARGV);
+
+ # Filter out files that match the pattern
+ foreach $filename (sort(keys(%{$data})))
+ {
+ $match_found = 0;
+
+ foreach $pattern (@pattern_list)
+ {
+			$match_found ||= ($filename =~ (/^$pattern$/));
+ }
+
+
+ if ($match_found)
+ {
+ delete($data->{$filename});
+			info("Removing $filename\n");
+			$removed++;
+ }
+ }
+
+ # Write data
+ if ($to_file)
+ {
+ info("Deleted $removed files\n");
+ info("Writing data to $output_filename\n");
+ open(INFO_HANDLE, ">$output_filename")
+ or die("ERROR: cannot write to $output_filename!\n");
+ @result = write_info_file(*INFO_HANDLE, $data);
+ close(*INFO_HANDLE);
+ }
+ else
+ {
+ @result = write_info_file(*STDOUT, $data);
+ }
+
+ return @result;
+}
+
+
+# get_prefix(max_width, max_percentage_too_long, path_list)
+#
+# Return a path prefix that satisfies the following requirements:
+# - is shared by more paths in path_list than any other prefix
+# - the percentage of paths which would exceed the given max_width length
+# after applying the prefix does not exceed max_percentage_too_long
+#
+# If multiple prefixes satisfy all requirements, the longest prefix is
+# returned. Return an empty string if no prefix could be found.
+
+sub get_prefix($$@)
+{
+ my ($max_width, $max_long, @path_list) = @_;
+ my $path;
+ my $ENTRY_NUM = 0;
+ my $ENTRY_LONG = 1;
+ my %prefix;
+
+ # Build prefix hash
+ foreach $path (@path_list) {
+ my ($v, $d, $f) = splitpath($path);
+ my @dirs = splitdir($d);
+ my $p_len = length($path);
+ my $i;
+
+ # Remove trailing '/'
+ pop(@dirs) if ($dirs[scalar(@dirs) - 1] eq '');
+ for ($i = 0; $i < scalar(@dirs); $i++) {
+ my $subpath = catpath($v, catdir(@dirs[0..$i]), '');
+ my $entry = $prefix{$subpath};
+
+ $entry = [ 0, 0 ] if (!defined($entry));
+ $entry->[$ENTRY_NUM]++;
+ if (($p_len - length($subpath) - 1) > $max_width) {
+ $entry->[$ENTRY_LONG]++;
+ }
+ $prefix{$subpath} = $entry;
+ }
+ }
+ # Find suitable prefix (sort descending by two keys: 1. number of
+ # entries covered by a prefix, 2. length of prefix)
+ foreach $path (sort {($prefix{$a}->[$ENTRY_NUM] ==
+ $prefix{$b}->[$ENTRY_NUM]) ?
+ length($b) <=> length($a) :
+ $prefix{$b}->[$ENTRY_NUM] <=>
+ $prefix{$a}->[$ENTRY_NUM]}
+ keys(%prefix)) {
+ my ($num, $long) = @{$prefix{$path}};
+
+ # Check for additional requirement: number of filenames
+ # that would be too long may not exceed a certain percentage
+ if ($long <= $num * $max_long / 100) {
+ return $path;
+ }
+ }
+
+ return "";
+}
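A standalone Python sketch of the prefix search described above, assuming plain '/'-separated paths (the Perl code uses File::Spec's splitpath/catdir for portability); names are illustrative:

```python
def get_prefix(max_width, max_long_pct, paths):
    # For every directory prefix, count how many paths it covers (num) and
    # how many would still exceed max_width once the prefix is stripped (nlong)
    stats = {}
    for path in paths:
        parts = path.split('/')[:-1]  # directory components only
        for i in range(1, len(parts) + 1):
            sub = '/'.join(parts[:i])
            if not sub:
                continue
            num, nlong = stats.get(sub, (0, 0))
            too_long = (len(path) - len(sub) - 1) > max_width
            stats[sub] = (num + 1, nlong + too_long)
    # Sort descending by number of covered paths, then by prefix length
    for sub in sorted(stats, key=lambda p: (-stats[p][0], -len(p))):
        num, nlong = stats[sub]
        if nlong <= num * max_long_pct / 100:
            return sub
    return ""
```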
+
+
+#
+# shorten_filename(filename, width)
+#
+# Truncate filename if it is longer than width characters.
+#
+
+sub shorten_filename($$)
+{
+ my ($filename, $width) = @_;
+ my $l = length($filename);
+ my $s;
+ my $e;
+
+ return $filename if ($l <= $width);
+ $e = int(($width - 3) / 2);
+ $s = $width - 3 - $e;
+
+ return substr($filename, 0, $s).'...'.substr($filename, $l - $e);
+}
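The middle-truncation scheme above keeps the start and end of an over-long name, joined by '...'; a direct Python equivalent:

```python
def shorten_filename(filename, width):
    # Keep the beginning and end of the name, joined by '...'
    l = len(filename)
    if l <= width:
        return filename
    e = (width - 3) // 2
    s = width - 3 - e
    return filename[:s] + '...' + filename[l - e:]
```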
+
+
+sub shorten_number($$)
+{
+ my ($number, $width) = @_;
+ my $result = sprintf("%*d", $width, $number);
+
+ return $result if (length($result) <= $width);
+	$number = $number / 1000;
+ $result = sprintf("%*dk", $width - 1, $number);
+ return $result if (length($result) <= $width);
+ $number = $number / 1000;
+ $result = sprintf("%*dM", $width - 1, $number);
+ return $result if (length($result) <= $width);
+ return '#';
+}
+
+sub shorten_rate($$)
+{
+ my ($rate, $width) = @_;
+ my $result = sprintf("%*.1f%%", $width - 3, $rate);
+
+ return $result if (length($result) <= $width);
+ $result = sprintf("%*d%%", $width - 1, $rate);
+ return $result if (length($result) <= $width);
+ return "#";
+}
+
+#
+# list()
+#
+
+sub list()
+{
+ my $data = read_info_file($list);
+ my $filename;
+ my $found;
+ my $hit;
+ my $entry;
+ my $fn_found;
+ my $fn_hit;
+ my $br_found;
+ my $br_hit;
+ my $total_found = 0;
+ my $total_hit = 0;
+ my $fn_total_found = 0;
+ my $fn_total_hit = 0;
+ my $br_total_found = 0;
+ my $br_total_hit = 0;
+ my $prefix;
+ my $strlen = length("Filename");
+ my $format;
+ my $heading1;
+ my $heading2;
+ my @footer;
+ my $barlen;
+ my $rate;
+ my $fnrate;
+ my $brrate;
+ my $lastpath;
+ my $F_LN_NUM = 0;
+ my $F_LN_RATE = 1;
+ my $F_FN_NUM = 2;
+ my $F_FN_RATE = 3;
+ my $F_BR_NUM = 4;
+ my $F_BR_RATE = 5;
+ my @fwidth_narrow = (5, 5, 3, 5, 4, 5);
+ my @fwidth_wide = (6, 5, 5, 5, 6, 5);
+ my @fwidth = @fwidth_wide;
+ my $w;
+ my $max_width = $opt_list_width;
+ my $max_long = $opt_list_truncate_max;
+ my $fwidth_narrow_length;
+ my $fwidth_wide_length;
+ my $got_prefix = 0;
+ my $root_prefix = 0;
+
+ # Calculate total width of narrow fields
+ $fwidth_narrow_length = 0;
+ foreach $w (@fwidth_narrow) {
+ $fwidth_narrow_length += $w + 1;
+ }
+ # Calculate total width of wide fields
+ $fwidth_wide_length = 0;
+ foreach $w (@fwidth_wide) {
+ $fwidth_wide_length += $w + 1;
+ }
+ # Get common file path prefix
+ $prefix = get_prefix($max_width - $fwidth_narrow_length, $max_long,
+ keys(%{$data}));
+ $root_prefix = 1 if ($prefix eq rootdir());
+ $got_prefix = 1 if (length($prefix) > 0);
+ $prefix =~ s/\/$//;
+ # Get longest filename length
+ foreach $filename (keys(%{$data})) {
+ if (!$opt_list_full_path) {
+			if (!$got_prefix || (!$root_prefix &&
+			    !($filename =~ s/^\Q$prefix\/\E//))) {
+ my ($v, $d, $f) = splitpath($filename);
+
+ $filename = $f;
+ }
+ }
+ # Determine maximum length of entries
+ if (length($filename) > $strlen) {
+ $strlen = length($filename)
+ }
+ }
+ if (!$opt_list_full_path) {
+ my $blanks;
+
+ $w = $fwidth_wide_length;
+ # Check if all columns fit into max_width characters
+ if ($strlen + $fwidth_wide_length > $max_width) {
+ # Use narrow fields
+ @fwidth = @fwidth_narrow;
+ $w = $fwidth_narrow_length;
+ if (($strlen + $fwidth_narrow_length) > $max_width) {
+ # Truncate filenames at max width
+ $strlen = $max_width - $fwidth_narrow_length;
+ }
+ }
+ # Add some blanks between filename and fields if possible
+ $blanks = int($strlen * 0.5);
+ $blanks = 4 if ($blanks < 4);
+ $blanks = 8 if ($blanks > 8);
+ if (($strlen + $w + $blanks) < $max_width) {
+ $strlen += $blanks;
+ } else {
+ $strlen = $max_width - $w;
+ }
+ }
+ # Filename
+ $w = $strlen;
+ $format = "%-${w}s|";
+ $heading1 = sprintf("%*s|", $w, "");
+ $heading2 = sprintf("%-*s|", $w, "Filename");
+ $barlen = $w + 1;
+ # Line coverage rate
+ $w = $fwidth[$F_LN_RATE];
+ $format .= "%${w}s ";
+ $heading1 .= sprintf("%-*s |", $w + $fwidth[$F_LN_NUM],
+ "Lines");
+ $heading2 .= sprintf("%-*s ", $w, "Rate");
+ $barlen += $w + 1;
+ # Number of lines
+ $w = $fwidth[$F_LN_NUM];
+ $format .= "%${w}s|";
+ $heading2 .= sprintf("%*s|", $w, "Num");
+ $barlen += $w + 1;
+ # Function coverage rate
+ $w = $fwidth[$F_FN_RATE];
+ $format .= "%${w}s ";
+ $heading1 .= sprintf("%-*s|", $w + $fwidth[$F_FN_NUM] + 1,
+ "Functions");
+ $heading2 .= sprintf("%-*s ", $w, "Rate");
+ $barlen += $w + 1;
+ # Number of functions
+ $w = $fwidth[$F_FN_NUM];
+ $format .= "%${w}s|";
+ $heading2 .= sprintf("%*s|", $w, "Num");
+ $barlen += $w + 1;
+ # Branch coverage rate
+ $w = $fwidth[$F_BR_RATE];
+ $format .= "%${w}s ";
+ $heading1 .= sprintf("%-*s", $w + $fwidth[$F_BR_NUM] + 1,
+ "Branches");
+ $heading2 .= sprintf("%-*s ", $w, "Rate");
+ $barlen += $w + 1;
+ # Number of branches
+ $w = $fwidth[$F_BR_NUM];
+ $format .= "%${w}s";
+ $heading2 .= sprintf("%*s", $w, "Num");
+ $barlen += $w;
+ # Line end
+ $format .= "\n";
+ $heading1 .= "\n";
+ $heading2 .= "\n";
+
+ # Print heading
+ print($heading1);
+ print($heading2);
+ print(("="x$barlen)."\n");
+
+ # Print per file information
+ foreach $filename (sort(keys(%{$data})))
+ {
+ my @file_data;
+ my $print_filename = $filename;
+
+ $entry = $data->{$filename};
+ if (!$opt_list_full_path) {
+ my $p;
+
+ $print_filename = $filename;
+			if (!$got_prefix || (!$root_prefix &&
+			    !($print_filename =~ s/^\Q$prefix\/\E//))) {
+ my ($v, $d, $f) = splitpath($filename);
+
+ $p = catpath($v, $d, "");
+ $p =~ s/\/$//;
+ $print_filename = $f;
+ } else {
+ $p = $prefix;
+ }
+
+ if (!defined($lastpath) || $lastpath ne $p) {
+ print("\n") if (defined($lastpath));
+ $lastpath = $p;
+ print("[$lastpath/]\n") if (!$root_prefix);
+ }
+ $print_filename = shorten_filename($print_filename,
+ $strlen);
+ }
+
+ (undef, undef, undef, undef, undef, undef, undef, undef,
+ $found, $hit, $fn_found, $fn_hit, $br_found, $br_hit) =
+ get_info_entry($entry);
+
+ # Assume zero count if there is no function data for this file
+ if (!defined($fn_found) || !defined($fn_hit)) {
+ $fn_found = 0;
+ $fn_hit = 0;
+ }
+ # Assume zero count if there is no branch data for this file
+ if (!defined($br_found) || !defined($br_hit)) {
+ $br_found = 0;
+ $br_hit = 0;
+ }
+
+ # Add line coverage totals
+ $total_found += $found;
+ $total_hit += $hit;
+ # Add function coverage totals
+ $fn_total_found += $fn_found;
+ $fn_total_hit += $fn_hit;
+ # Add branch coverage totals
+ $br_total_found += $br_found;
+ $br_total_hit += $br_hit;
+
+ # Determine line coverage rate for this file
+ if ($found == 0) {
+ $rate = "-";
+ } else {
+ $rate = shorten_rate(100 * $hit / $found,
+ $fwidth[$F_LN_RATE]);
+ }
+ # Determine function coverage rate for this file
+ if (!defined($fn_found) || $fn_found == 0) {
+ $fnrate = "-";
+ } else {
+ $fnrate = shorten_rate(100 * $fn_hit / $fn_found,
+ $fwidth[$F_FN_RATE]);
+ }
+ # Determine branch coverage rate for this file
+ if (!defined($br_found) || $br_found == 0) {
+ $brrate = "-";
+ } else {
+ $brrate = shorten_rate(100 * $br_hit / $br_found,
+ $fwidth[$F_BR_RATE]);
+ }
+
+ # Assemble line parameters
+ push(@file_data, $print_filename);
+ push(@file_data, $rate);
+ push(@file_data, shorten_number($found, $fwidth[$F_LN_NUM]));
+ push(@file_data, $fnrate);
+ push(@file_data, shorten_number($fn_found, $fwidth[$F_FN_NUM]));
+ push(@file_data, $brrate);
+ push(@file_data, shorten_number($br_found, $fwidth[$F_BR_NUM]));
+
+ # Print assembled line
+ printf($format, @file_data);
+ }
+
+ # Determine total line coverage rate
+ if ($total_found == 0) {
+ $rate = "-";
+ } else {
+ $rate = shorten_rate(100 * $total_hit / $total_found,
+ $fwidth[$F_LN_RATE]);
+ }
+ # Determine total function coverage rate
+ if ($fn_total_found == 0) {
+ $fnrate = "-";
+ } else {
+ $fnrate = shorten_rate(100 * $fn_total_hit / $fn_total_found,
+ $fwidth[$F_FN_RATE]);
+ }
+ # Determine total branch coverage rate
+ if ($br_total_found == 0) {
+ $brrate = "-";
+ } else {
+ $brrate = shorten_rate(100 * $br_total_hit / $br_total_found,
+ $fwidth[$F_BR_RATE]);
+ }
+
+ # Print separator
+ print(("="x$barlen)."\n");
+
+ # Assemble line parameters
+ push(@footer, sprintf("%*s", $strlen, "Total:"));
+ push(@footer, $rate);
+ push(@footer, shorten_number($total_found, $fwidth[$F_LN_NUM]));
+ push(@footer, $fnrate);
+ push(@footer, shorten_number($fn_total_found, $fwidth[$F_FN_NUM]));
+ push(@footer, $brrate);
+ push(@footer, shorten_number($br_total_found, $fwidth[$F_BR_NUM]));
+
+ # Print assembled line
+ printf($format, @footer);
+}
+
+
+#
+# get_common_filename(filename1, filename2)
+#
+# Check for filename components which are common to FILENAME1 and FILENAME2.
+# Upon success, return
+#
+# (common, path1, path2)
+#
+# or 'undef' in case there are no such parts.
+#
+
+sub get_common_filename($$)
+{
+ my @list1 = split("/", $_[0]);
+ my @list2 = split("/", $_[1]);
+ my @result;
+
+ # Work in reverse order, i.e. beginning with the filename itself
+ while (@list1 && @list2 && ($list1[$#list1] eq $list2[$#list2]))
+ {
+ unshift(@result, pop(@list1));
+ pop(@list2);
+ }
+
+ # Did we find any similarities?
+ if (scalar(@result) > 0)
+ {
+ return (join("/", @result), join("/", @list1),
+ join("/", @list2));
+ }
+ else
+ {
+ return undef;
+ }
+}
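The common-suffix comparison works from the right, starting with the filename component itself; a Python sketch under the assumption of '/'-separated paths:

```python
def get_common_filename(f1, f2):
    # Compare components from the right, starting with the filename itself
    l1, l2 = f1.split('/'), f2.split('/')
    common = []
    while l1 and l2 and l1[-1] == l2[-1]:
        common.insert(0, l1.pop())
        l2.pop()
    if common:
        return '/'.join(common), '/'.join(l1), '/'.join(l2)
    return None
```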
+
+
+#
+# strip_directories($path, $depth)
+#
+# Remove DEPTH leading directory levels from PATH.
+#
+
+sub strip_directories($$)
+{
+ my $filename = $_[0];
+ my $depth = $_[1];
+ my $i;
+
+ if (!defined($depth) || ($depth < 1))
+ {
+ return $filename;
+ }
+ for ($i = 0; $i < $depth; $i++)
+ {
+ $filename =~ s/^[^\/]*\/+(.*)$/$1/;
+ }
+ return $filename;
+}
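The per-level stripping regex above translates directly; a Python sketch (function name illustrative):

```python
import re

def strip_directories(filename, depth):
    # Remove up to `depth` leading "component/" chunks, as the Perl
    # substitution s/^[^\/]*\/+(.*)$/$1/ does once per level
    if not depth or depth < 1:
        return filename
    for _ in range(depth):
        filename = re.sub(r'^[^/]*/+', '', filename, count=1)
    return filename
```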
+
+
+#
+# read_diff(filename)
+#
+# Read diff output from FILENAME to memory. The diff file has to follow the
+# format generated by 'diff -u'. Returns a list of hash references:
+#
+# (mapping, path mapping)
+#
+# mapping: filename -> reference to line hash
+# line hash: line number in new file -> corresponding line number in old file
+#
+# path mapping: filename -> old filename
+#
+# Die in case of error.
+#
+
+sub read_diff($)
+{
+ my $diff_file = $_[0]; # Name of diff file
+ my %diff; # Resulting mapping filename -> line hash
+ my %paths; # Resulting mapping old path -> new path
+ my $mapping; # Reference to current line hash
+ my $line; # Contents of current line
+ my $num_old; # Current line number in old file
+ my $num_new; # Current line number in new file
+ my $file_old; # Name of old file in diff section
+ my $file_new; # Name of new file in diff section
+ my $filename; # Name of common filename of diff section
+ my $in_block = 0; # Non-zero while we are inside a diff block
+ local *HANDLE; # File handle for reading the diff file
+
+ info("Reading diff $diff_file\n");
+
+ # Check if file exists and is readable
+ stat($diff_file);
+ if (!(-r _))
+ {
+ die("ERROR: cannot read file $diff_file!\n");
+ }
+
+ # Check if this is really a plain file
+ if (!(-f _))
+ {
+ die("ERROR: not a plain file: $diff_file!\n");
+ }
+
+ # Check for .gz extension
+ if ($diff_file =~ /\.gz$/)
+ {
+ # Check for availability of GZIP tool
+ system_no_output(1, "gunzip", "-h")
+ and die("ERROR: gunzip command not available!\n");
+
+ # Check integrity of compressed file
+ system_no_output(1, "gunzip", "-t", $diff_file)
+ and die("ERROR: integrity check failed for ".
+ "compressed file $diff_file!\n");
+
+ # Open compressed file
+ open(HANDLE, "gunzip -c $diff_file|")
+ or die("ERROR: cannot start gunzip to decompress ".
+ "file $_[0]!\n");
+ }
+ else
+ {
+ # Open decompressed file
+ open(HANDLE, $diff_file)
+ or die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ # Parse diff file line by line
+ while (<HANDLE>)
+ {
+ chomp($_);
+ $line = $_;
+
+ foreach ($line)
+ {
+ # Filename of old file:
+ # --- <filename> <date>
+ /^--- (\S+)/ && do
+ {
+ $file_old = strip_directories($1, $strip);
+ last;
+ };
+ # Filename of new file:
+ # +++ <filename> <date>
+ /^\+\+\+ (\S+)/ && do
+ {
+ # Add last file to resulting hash
+ if ($filename)
+ {
+ my %new_hash;
+ $diff{$filename} = $mapping;
+ $mapping = \%new_hash;
+ }
+ $file_new = strip_directories($1, $strip);
+ $filename = $file_old;
+ $paths{$filename} = $file_new;
+ $num_old = 1;
+ $num_new = 1;
+ last;
+ };
+ # Start of diff block:
+			#   @@ -old_start,old_num +new_start,new_num @@
+ /^\@\@\s+-(\d+),(\d+)\s+\+(\d+),(\d+)\s+\@\@$/ && do
+ {
+ $in_block = 1;
+ while ($num_old < $1)
+ {
+ $mapping->{$num_new} = $num_old;
+ $num_old++;
+ $num_new++;
+ }
+ last;
+ };
+ # Unchanged line
+ # <line starts with blank>
+ /^ / && do
+ {
+ if ($in_block == 0)
+ {
+ last;
+ }
+ $mapping->{$num_new} = $num_old;
+ $num_old++;
+ $num_new++;
+ last;
+ };
+ # Line as seen in old file
+ # <line starts with '-'>
+ /^-/ && do
+ {
+ if ($in_block == 0)
+ {
+ last;
+ }
+ $num_old++;
+ last;
+ };
+ # Line as seen in new file
+ # <line starts with '+'>
+ /^\+/ && do
+ {
+ if ($in_block == 0)
+ {
+ last;
+ }
+ $num_new++;
+ last;
+ };
+ # Empty line
+ /^$/ && do
+ {
+ if ($in_block == 0)
+ {
+ last;
+ }
+ $mapping->{$num_new} = $num_old;
+ $num_old++;
+ $num_new++;
+ last;
+ };
+ }
+ }
+
+ close(HANDLE);
+
+ # Add final diff file section to resulting hash
+ if ($filename)
+ {
+ $diff{$filename} = $mapping;
+ }
+
+ if (!%diff)
+ {
+ die("ERROR: no valid diff data found in $diff_file!\n".
+ "Make sure to use 'diff -u' when generating the diff ".
+ "file.\n");
+ }
+ return (\%diff, \%paths);
+}
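The per-line state machine above (file headers, hunk headers, context/removed/added lines) can be sketched as a compact Python parser that builds the new-line to old-line mapping; it handles uncompressed input only and all names are illustrative:

```python
import re

def read_diff_mapping(diff_lines):
    # Build {old filename: {new line: old line}} from 'diff -u' output,
    # following the same per-line state machine as the Perl code
    result, mapping = {}, {}
    filename = file_old = None
    num_old = num_new = 0
    in_block = False
    for line in diff_lines:
        if line.startswith('--- '):
            file_old = line.split()[1]
        elif line.startswith('+++ '):
            if filename:
                result[filename] = mapping
                mapping = {}
            filename = file_old
            num_old = num_new = 1
            in_block = False
        elif (m := re.match(r'@@\s+-(\d+),(\d+)\s+\+(\d+),(\d+)\s+@@', line)):
            in_block = True
            while num_old < int(m.group(1)):
                mapping[num_new] = num_old
                num_old += 1
                num_new += 1
        elif line.startswith('-'):
            num_old += 1
        elif line.startswith('+'):
            num_new += 1
        elif (line.startswith(' ') or line == '') and in_block:
            mapping[num_new] = num_old   # unchanged line, present in both
            num_old += 1
            num_new += 1
    if filename:
        result[filename] = mapping
    return result
```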
+
+
+#
+# apply_diff($count_data, $line_hash)
+#
+# Transform count data using a mapping of lines:
+#
+# $count_data: reference to hash: line number -> data
+# $line_hash: reference to hash: line number new -> line number old
+#
+# Return a reference to transformed count data.
+#
+
+sub apply_diff($$)
+{
+ my $count_data = $_[0]; # Reference to data hash: line -> hash
+ my $line_hash = $_[1]; # Reference to line hash: new line -> old line
+ my %result; # Resulting hash
+ my $last_new = 0; # Last new line number found in line hash
+ my $last_old = 0; # Last old line number found in line hash
+
+ # Iterate all new line numbers found in the diff
+ foreach (sort({$a <=> $b} keys(%{$line_hash})))
+ {
+ $last_new = $_;
+ $last_old = $line_hash->{$last_new};
+
+ # Is there data associated with the corresponding old line?
+ if (defined($count_data->{$line_hash->{$_}}))
+ {
+ # Copy data to new hash with a new line number
+ $result{$_} = $count_data->{$line_hash->{$_}};
+ }
+ }
+ # Transform all other lines which come after the last diff entry
+ foreach (sort({$a <=> $b} keys(%{$count_data})))
+ {
+ if ($_ <= $last_old)
+ {
+ # Skip lines which were covered by line hash
+ next;
+ }
+ # Copy data to new hash with an offset
+ $result{$_ + ($last_new - $last_old)} = $count_data->{$_};
+ }
+
+ return \%result;
+}
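The two-phase transformation above (remap lines covered by the diff, then shift everything after the last diff entry by the accumulated offset) in Python:

```python
def apply_diff(count_data, line_hash):
    # line_hash maps new line numbers to old line numbers; lines past the
    # last diff entry are shifted by the accumulated offset
    result = {}
    last_new = last_old = 0
    for new in sorted(line_hash):
        last_new, last_old = new, line_hash[new]
        if line_hash[new] in count_data:
            result[new] = count_data[line_hash[new]]
    for old in sorted(count_data):
        if old > last_old:
            result[old + last_new - last_old] = count_data[old]
    return result
```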
+
+
+#
+# apply_diff_to_brcount(brcount, linedata)
+#
+# Adjust line numbers of branch coverage data according to linedata.
+#
+
+sub apply_diff_to_brcount($$)
+{
+ my ($brcount, $linedata) = @_;
+ my $db;
+
+ # Convert brcount to db format
+ $db = brcount_to_db($brcount);
+ # Apply diff to db format
+ $db = apply_diff($db, $linedata);
+ # Convert db format back to brcount format
+ ($brcount) = db_to_brcount($db);
+
+ return $brcount;
+}
+
+
+#
+# get_hash_max(hash_ref)
+#
+# Return the highest integer key from hash.
+#
+
+sub get_hash_max($)
+{
+ my ($hash) = @_;
+ my $max;
+
+ foreach (keys(%{$hash})) {
+ if (!defined($max)) {
+ $max = $_;
+		} elsif ($_ > $max) {
+ $max = $_;
+ }
+ }
+ return $max;
+}
+
+sub get_hash_reverse($)
+{
+ my ($hash) = @_;
+ my %result;
+
+ foreach (keys(%{$hash})) {
+ $result{$hash->{$_}} = $_;
+ }
+
+ return \%result;
+}
+
+#
+# apply_diff_to_funcdata(funcdata, line_hash)
+#
+
+sub apply_diff_to_funcdata($$)
+{
+ my ($funcdata, $linedata) = @_;
+ my $last_new = get_hash_max($linedata);
+ my $last_old = $linedata->{$last_new};
+ my $func;
+ my %result;
+ my $line_diff = get_hash_reverse($linedata);
+
+ foreach $func (keys(%{$funcdata})) {
+ my $line = $funcdata->{$func};
+
+ if (defined($line_diff->{$line})) {
+ $result{$func} = $line_diff->{$line};
+ } elsif ($line > $last_old) {
+ $result{$func} = $line + $last_new - $last_old;
+ }
+ }
+
+ return \%result;
+}
+
+
+#
+# get_line_hash($filename, $diff_data, $path_data)
+#
+# Find the line hash in DIFF_DATA which matches FILENAME. On success, return
+# the list (line hash, old path, new path), or undef in case of no match. Die
+# if more than one line hash in DIFF_DATA matches.
+#
+
+sub get_line_hash($$$)
+{
+ my $filename = $_[0];
+ my $diff_data = $_[1];
+ my $path_data = $_[2];
+ my $conversion;
+ my $old_path;
+ my $new_path;
+ my $diff_name;
+ my $common;
+ my $old_depth;
+ my $new_depth;
+
+ # Remove trailing slash from diff path
+ $diff_path =~ s/\/$//;
+ foreach (keys(%{$diff_data}))
+ {
+ my $sep = "";
+
+ $sep = '/' if (!/^\//);
+
+ # Try to match diff filename with filename
+ if ($filename =~ /^\Q$diff_path$sep$_\E$/)
+ {
+ if ($diff_name)
+ {
+ # Two files match, choose the more specific one
+ # (the one with more path components)
+ $old_depth = ($diff_name =~ tr/\///);
+ $new_depth = (tr/\///);
+ if ($old_depth == $new_depth)
+ {
+ die("ERROR: diff file contains ".
+ "ambiguous entries for ".
+ "$filename\n");
+ }
+ elsif ($new_depth > $old_depth)
+ {
+ $diff_name = $_;
+ }
+ }
+ else
+ {
+ $diff_name = $_;
+ }
+ };
+ }
+ if ($diff_name)
+ {
+ # Get converted path
+ if ($filename =~ /^(.*)$diff_name$/)
+ {
+ ($common, $old_path, $new_path) =
+ get_common_filename($filename,
+ $1.$path_data->{$diff_name});
+ }
+ return ($diff_data->{$diff_name}, $old_path, $new_path);
+ }
+ else
+ {
+ return undef;
+ }
+}
+
+
+#
+# convert_paths(trace_data, path_conversion_data)
+#
+# Rename all paths in TRACE_DATA which show up in PATH_CONVERSION_DATA.
+#
+
+sub convert_paths($$)
+{
+ my $trace_data = $_[0];
+ my $path_conversion_data = $_[1];
+ my $filename;
+ my $new_path;
+
+ if (scalar(keys(%{$path_conversion_data})) == 0)
+ {
+ info("No path conversion data available.\n");
+ return;
+ }
+
+ # Expand path conversion list
+ foreach $filename (keys(%{$path_conversion_data}))
+ {
+ $new_path = $path_conversion_data->{$filename};
+ while (($filename =~ s/^(.*)\/[^\/]+$/$1/) &&
+ ($new_path =~ s/^(.*)\/[^\/]+$/$1/) &&
+ ($filename ne $new_path))
+ {
+ $path_conversion_data->{$filename} = $new_path;
+ }
+ }
+
+ # Adjust paths
+ FILENAME: foreach $filename (keys(%{$trace_data}))
+ {
+ # Find a path in our conversion table that matches, starting
+ # with the longest path
+ foreach (sort({length($b) <=> length($a)}
+ keys(%{$path_conversion_data})))
+ {
+ # Is this path a prefix of our filename?
+ if (!($filename =~ /^$_(.*)$/))
+ {
+ next;
+ }
+ $new_path = $path_conversion_data->{$_}.$1;
+
+ # Make sure not to overwrite an existing entry under
+ # that path name
+ if ($trace_data->{$new_path})
+ {
+ # Need to combine entries
+ $trace_data->{$new_path} =
+ combine_info_entries(
+ $trace_data->{$filename},
+ $trace_data->{$new_path},
+ $filename);
+ }
+ else
+ {
+ # Simply rename entry
+ $trace_data->{$new_path} =
+ $trace_data->{$filename};
+ }
+ delete($trace_data->{$filename});
+ next FILENAME;
+ }
+ info("No conversion available for filename $filename\n");
+ }
+}
+
+#
+# adjust_fncdata(funcdata, testfncdata, sumfnccount)
+#
+# Remove function call count data from testfncdata and sumfnccount which
+# is no longer present in funcdata.
+#
+
+sub adjust_fncdata($$$)
+{
+ my ($funcdata, $testfncdata, $sumfnccount) = @_;
+ my $testname;
+ my $func;
+ my $f_found;
+ my $f_hit;
+
+ # Remove count data in testfncdata for functions which are no longer
+ # in funcdata
+	foreach $testname (keys(%{$testfncdata})) {
+ my $fnccount = $testfncdata->{$testname};
+
+		foreach $func (keys(%{$fnccount})) {
+ if (!defined($funcdata->{$func})) {
+ delete($fnccount->{$func});
+ }
+ }
+ }
+ # Remove count data in sumfnccount for functions which are no longer
+ # in funcdata
+	foreach $func (keys(%{$sumfnccount})) {
+ if (!defined($funcdata->{$func})) {
+ delete($sumfnccount->{$func});
+ }
+ }
+}
+
+#
+# get_func_found_and_hit(sumfnccount)
+#
+# Return (f_found, f_hit) for sumfnccount
+#
+
+sub get_func_found_and_hit($)
+{
+ my ($sumfnccount) = @_;
+ my $function;
+ my $f_found;
+ my $f_hit;
+
+ $f_found = scalar(keys(%{$sumfnccount}));
+ $f_hit = 0;
+ foreach $function (keys(%{$sumfnccount})) {
+ if ($sumfnccount->{$function} > 0) {
+ $f_hit++;
+ }
+ }
+ return ($f_found, $f_hit);
+}
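The found/hit summary is a simple count over the function-call-count hash; in Python:

```python
def get_func_found_and_hit(sumfnccount):
    # found: number of known functions; hit: those with a call count > 0
    found = len(sumfnccount)
    hit = sum(1 for count in sumfnccount.values() if count > 0)
    return found, hit
```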
+
+#
+# diff()
+#
+
+sub diff()
+{
+ my $trace_data = read_info_file($diff);
+ my $diff_data;
+ my $path_data;
+ my $old_path;
+ my $new_path;
+ my %path_conversion_data;
+ my $filename;
+ my $line_hash;
+ my $new_name;
+ my $entry;
+ my $testdata;
+ my $testname;
+ my $sumcount;
+ my $funcdata;
+ my $checkdata;
+ my $testfncdata;
+ my $sumfnccount;
+ my $testbrdata;
+ my $sumbrcount;
+ my $found;
+ my $hit;
+ my $f_found;
+ my $f_hit;
+ my $br_found;
+ my $br_hit;
+ my $converted = 0;
+ my $unchanged = 0;
+ my @result;
+ local *INFO_HANDLE;
+
+ ($diff_data, $path_data) = read_diff($ARGV[0]);
+
+ foreach $filename (sort(keys(%{$trace_data})))
+ {
+ # Find a diff section corresponding to this file
+ ($line_hash, $old_path, $new_path) =
+ get_line_hash($filename, $diff_data, $path_data);
+ if (!$line_hash)
+ {
+ # There's no diff section for this file
+ $unchanged++;
+ next;
+ }
+ $converted++;
+ if ($old_path && $new_path && ($old_path ne $new_path))
+ {
+ $path_conversion_data{$old_path} = $new_path;
+ }
+ # Check for deleted files
+ if (scalar(keys(%{$line_hash})) == 0)
+ {
+ info("Removing $filename\n");
+ delete($trace_data->{$filename});
+ next;
+ }
+ info("Converting $filename\n");
+ $entry = $trace_data->{$filename};
+ ($testdata, $sumcount, $funcdata, $checkdata, $testfncdata,
+ $sumfnccount, $testbrdata, $sumbrcount) =
+ get_info_entry($entry);
+ # Convert test data
+ foreach $testname (keys(%{$testdata}))
+ {
+ # Adjust line numbers of line coverage data
+ $testdata->{$testname} =
+ apply_diff($testdata->{$testname}, $line_hash);
+ # Adjust line numbers of branch coverage data
+ $testbrdata->{$testname} =
+ apply_diff_to_brcount($testbrdata->{$testname},
+ $line_hash);
+ # Remove empty sets of test data
+ if (scalar(keys(%{$testdata->{$testname}})) == 0)
+ {
+ delete($testdata->{$testname});
+ delete($testfncdata->{$testname});
+ delete($testbrdata->{$testname});
+ }
+ }
+ # Rename test data to indicate conversion
+ foreach $testname (keys(%{$testdata}))
+ {
+ # Skip testnames which already contain an extension
+ if ($testname =~ /,[^,]+$/)
+ {
+ next;
+ }
+ # Check for name conflict
+ if (defined($testdata->{$testname.",diff"}))
+ {
+ # Add counts
+ ($testdata->{$testname}) = add_counts(
+ $testdata->{$testname},
+ $testdata->{$testname.",diff"});
+ delete($testdata->{$testname.",diff"});
+ # Add function call counts
+ ($testfncdata->{$testname}) = add_fnccount(
+ $testfncdata->{$testname},
+ $testfncdata->{$testname.",diff"});
+ delete($testfncdata->{$testname.",diff"});
+ # Add branch counts
+ ($testbrdata->{$testname}) = combine_brcount(
+ $testbrdata->{$testname},
+ $testbrdata->{$testname.",diff"},
+ $BR_ADD);
+ delete($testbrdata->{$testname.",diff"});
+ }
+ # Move test data to new testname
+ $testdata->{$testname.",diff"} = $testdata->{$testname};
+ delete($testdata->{$testname});
+ # Move function call count data to new testname
+ $testfncdata->{$testname.",diff"} =
+ $testfncdata->{$testname};
+ delete($testfncdata->{$testname});
+ # Move branch count data to new testname
+ $testbrdata->{$testname.",diff"} =
+ $testbrdata->{$testname};
+ delete($testbrdata->{$testname});
+ }
+ # Convert summary of test data
+ $sumcount = apply_diff($sumcount, $line_hash);
+ # Convert function data
+ $funcdata = apply_diff_to_funcdata($funcdata, $line_hash);
+ # Convert branch coverage data
+ $sumbrcount = apply_diff_to_brcount($sumbrcount, $line_hash);
+ # Convert checksum data
+ $checkdata = apply_diff($checkdata, $line_hash);
+ # Convert function call count data
+ adjust_fncdata($funcdata, $testfncdata, $sumfnccount);
+ ($f_found, $f_hit) = get_func_found_and_hit($sumfnccount);
+ ($br_found, $br_hit) = get_br_found_and_hit($sumbrcount);
+ # Update found/hit numbers
+ $found = 0;
+ $hit = 0;
+ foreach (keys(%{$sumcount}))
+ {
+ $found++;
+ if ($sumcount->{$_} > 0)
+ {
+ $hit++;
+ }
+ }
+ if ($found > 0)
+ {
+ # Store converted entry
+ set_info_entry($entry, $testdata, $sumcount, $funcdata,
+ $checkdata, $testfncdata, $sumfnccount,
+ $testbrdata, $sumbrcount, $found, $hit,
+ $f_found, $f_hit, $br_found, $br_hit);
+ }
+ else
+ {
+ # Remove empty data set
+ delete($trace_data->{$filename});
+ }
+ }
+
+ # Convert filenames as well if requested
+ if ($convert_filenames)
+ {
+ convert_paths($trace_data, \%path_conversion_data);
+ }
+
+ info("$converted entr".($converted != 1 ? "ies" : "y")." converted, ".
+ "$unchanged entr".($unchanged != 1 ? "ies" : "y")." left ".
+ "unchanged.\n");
+
+ # Write data
+ if ($to_file)
+ {
+ info("Writing data to $output_filename\n");
+ open(INFO_HANDLE, ">$output_filename")
+ or die("ERROR: cannot write to $output_filename!\n");
+ @result = write_info_file(*INFO_HANDLE, $trace_data);
+ close(*INFO_HANDLE);
+ }
+ else
+ {
+ @result = write_info_file(*STDOUT, $trace_data);
+ }
+
+ return @result;
+}
+
+
+#
+# system_no_output(mode, parameters)
+#
+# Call an external program using PARAMETERS while suppressing output
+# depending on the value of MODE:
+#
+# MODE & 1: suppress STDOUT
+# MODE & 2: suppress STDERR
+#
+# Return 0 on success, non-zero otherwise.
+#
+
+sub system_no_output($@)
+{
+ my $mode = shift;
+ my $result;
+ local *OLD_STDERR;
+ local *OLD_STDOUT;
+
+ # Save old stdout and stderr handles
+ ($mode & 1) && open(OLD_STDOUT, ">>&STDOUT");
+ ($mode & 2) && open(OLD_STDERR, ">>&STDERR");
+
+ # Redirect to /dev/null
+ ($mode & 1) && open(STDOUT, ">/dev/null");
+ ($mode & 2) && open(STDERR, ">/dev/null");
+
+ system(@_);
+ $result = $?;
+
+ # Close redirected handles
+ ($mode & 1) && close(STDOUT);
+ ($mode & 2) && close(STDERR);
+
+ # Restore old handles
+ ($mode & 1) && open(STDOUT, ">>&OLD_STDOUT");
+ ($mode & 2) && open(STDERR, ">>&OLD_STDERR");
+
+ return $result;
+}
+
+
+#
+# read_config(filename)
+#
+# Read configuration file FILENAME and return a reference to a hash containing
+# all valid key=value pairs found.
+#
+
+sub read_config($)
+{
+ my $filename = $_[0];
+ my %result;
+ my $key;
+ my $value;
+ local *HANDLE;
+
+ if (!open(HANDLE, "<$filename"))
+ {
+ warn("WARNING: cannot read configuration file $filename\n");
+ return undef;
+ }
+ while (<HANDLE>)
+ {
+ chomp;
+ # Skip comments
+ s/#.*//;
+ # Remove leading blanks
+ s/^\s+//;
+ # Remove trailing blanks
+ s/\s+$//;
+ next unless length;
+ ($key, $value) = split(/\s*=\s*/, $_, 2);
+ if (defined($key) && defined($value))
+ {
+ $result{$key} = $value;
+ }
+ else
+ {
+ warn("WARNING: malformed statement in line $. ".
+ "of configuration file $filename\n");
+ }
+ }
+ close(HANDLE);
+ return \%result;
+}
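The parsing loop above boils down to: strip comments and surrounding blanks, then split on the first "=". A self-contained sketch of that behavior on sample input (the key name is made up, not a real lcovrc keyword; in the real sub a malformed line would additionally trigger a warning):

```perl
#!/usr/bin/perl -w
use strict;

my @lines = ("# a comment", "  some_key = some value  ", "malformed line");
my %result;
foreach (@lines) {
	s/#.*//;    # strip comments
	s/^\s+//;   # strip leading blanks
	s/\s+$//;   # strip trailing blanks
	next unless length;
	my ($key, $value) = split(/\s*=\s*/, $_, 2);
	$result{$key} = $value if defined($key) && defined($value);
}
# %result now holds ("some_key" => "some value"); the other lines are skipped
print scalar(keys(%result)), " setting(s) read\n";  # prints "1 setting(s) read"
```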
+
+
+#
+# apply_config(REF)
+#
+# REF is a reference to a hash containing the following mapping:
+#
+# key_string => var_ref
+#
+# where KEY_STRING is a keyword and VAR_REF is a reference to an associated
+# variable. If the global configuration hash CONFIG contains a value for
+# keyword KEY_STRING, VAR_REF will be assigned the value for that keyword.
+#
+
+sub apply_config($)
+{
+ my $ref = $_[0];
+
+ foreach (keys(%{$ref}))
+ {
+ if (defined($config->{$_}))
+ {
+ ${$ref->{$_}} = $config->{$_};
+ }
+ }
+}
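apply_config() lets built-in defaults be overridden by the configuration hash through variable references. A minimal sketch of the mechanism (the keyword and value below are illustrative, not taken from a real lcovrc):

```perl
#!/usr/bin/perl -w
use strict;

my $config  = { "lcov_tmp_dir" => "/var/tmp" };  # as returned by read_config()
my $tmp_dir = "/tmp";                            # built-in default

# Map config keyword -> reference to the variable it controls.
my %mapping = ("lcov_tmp_dir" => \$tmp_dir);
foreach (keys(%mapping)) {
	${$mapping{$_}} = $config->{$_} if defined($config->{$_});
}
print "$tmp_dir\n";  # prints "/var/tmp": the config value replaced the default
```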
+
+sub warn_handler($)
+{
+ my ($msg) = @_;
+
+ temp_cleanup();
+ warn("$tool_name: $msg");
+}
+
+sub die_handler($)
+{
+ my ($msg) = @_;
+
+ temp_cleanup();
+ die("$tool_name: $msg");
+}
+
+sub abort_handler($)
+{
+ temp_cleanup();
+ exit(1);
+}
+
+sub temp_cleanup()
+{
+ if (@temp_dirs) {
+ info("Removing temporary directories.\n");
+ foreach (@temp_dirs) {
+ rmtree($_);
+ }
+ @temp_dirs = ();
+ }
+}
+
+sub setup_gkv_sys()
+{
+ system_no_output(3, "mount", "-t", "debugfs", "nodev",
+ "/sys/kernel/debug");
+}
+
+sub setup_gkv_proc()
+{
+ if (system_no_output(3, "modprobe", "gcov_proc")) {
+ system_no_output(3, "modprobe", "gcov_prof");
+ }
+}
+
+sub check_gkv_sys($)
+{
+ my ($dir) = @_;
+
+ if (-e "$dir/reset") {
+ return 1;
+ }
+ return 0;
+}
+
+sub check_gkv_proc($)
+{
+ my ($dir) = @_;
+
+ if (-e "$dir/vmlinux") {
+ return 1;
+ }
+ return 0;
+}
+
+sub setup_gkv()
+{
+ my $dir;
+ my $sys_dir = "/sys/kernel/debug/gcov";
+ my $proc_dir = "/proc/gcov";
+ my @todo;
+
+ if (!defined($gcov_dir)) {
+ info("Auto-detecting gcov kernel support.\n");
+ @todo = ( "cs", "cp", "ss", "cs", "sp", "cp" );
+ } elsif ($gcov_dir =~ /proc/) {
+ info("Checking gcov kernel support at $gcov_dir ".
+ "(user-specified).\n");
+ @todo = ( "cp", "sp", "cp", "cs", "ss", "cs");
+ } else {
+ info("Checking gcov kernel support at $gcov_dir ".
+ "(user-specified).\n");
+ @todo = ( "cs", "ss", "cs", "cp", "sp", "cp", );
+ }
+ foreach (@todo) {
+ if ($_ eq "cs") {
+ # Check /sys
+ $dir = defined($gcov_dir) ? $gcov_dir : $sys_dir;
+ if (check_gkv_sys($dir)) {
+ info("Found ".$GKV_NAME[$GKV_SYS]." gcov ".
+ "kernel support at $dir\n");
+ return ($GKV_SYS, $dir);
+ }
+ } elsif ($_ eq "cp") {
+ # Check /proc
+ $dir = defined($gcov_dir) ? $gcov_dir : $proc_dir;
+ if (check_gkv_proc($dir)) {
+ info("Found ".$GKV_NAME[$GKV_PROC]." gcov ".
+ "kernel support at $dir\n");
+ return ($GKV_PROC, $dir);
+ }
+ } elsif ($_ eq "ss") {
+ # Setup /sys
+ setup_gkv_sys();
+ } elsif ($_ eq "sp") {
+ # Setup /proc
+ setup_gkv_proc();
+ }
+ }
+ if (defined($gcov_dir)) {
+ die("ERROR: could not find gcov kernel data at $gcov_dir\n");
+ } else {
+ die("ERROR: no gcov kernel data found\n");
+ }
+}
+
+
+#
+# get_overall_line(found, hit, name_singular, name_plural)
+#
+# Return a string containing overall information for the specified
+# found/hit data.
+#
+
+sub get_overall_line($$$$)
+{
+ my ($found, $hit, $name_sn, $name_pl) = @_;
+ my $name;
+
+ return "no data found" if (!defined($found) || $found == 0);
+ $name = ($found == 1) ? $name_sn : $name_pl;
+ return sprintf("%.1f%% (%d of %d %s)", $hit * 100 / $found, $hit,
+ $found, $name);
+}
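For example, with 90 of 120 lines hit, the sub above formats the rate as shown below (a standalone copy of the sub for illustration, with hypothetical numbers):

```perl
#!/usr/bin/perl -w
use strict;

sub get_overall_line {
	my ($found, $hit, $name_sn, $name_pl) = @_;
	return "no data found" if (!defined($found) || $found == 0);
	my $name = ($found == 1) ? $name_sn : $name_pl;
	return sprintf("%.1f%% (%d of %d %s)",
		       $hit * 100 / $found, $hit, $found, $name);
}

print get_overall_line(120, 90, "line", "lines"), "\n";   # 75.0% (90 of 120 lines)
print get_overall_line(0, 0, "branch", "branches"), "\n"; # no data found
```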
+
+
+#
+# print_overall_rate(ln_do, ln_found, ln_hit, fn_do, fn_found, fn_hit, br_do,
+# br_found, br_hit)
+#
+# Print overall coverage rates for the specified coverage types.
+#
+
+sub print_overall_rate($$$$$$$$$)
+{
+ my ($ln_do, $ln_found, $ln_hit, $fn_do, $fn_found, $fn_hit,
+ $br_do, $br_found, $br_hit) = @_;
+
+ info("Overall coverage rate:\n");
+ info(" lines......: %s\n",
+ get_overall_line($ln_found, $ln_hit, "line", "lines"))
+ if ($ln_do);
+ info(" functions..: %s\n",
+ get_overall_line($fn_found, $fn_hit, "function", "functions"))
+ if ($fn_do);
+ info(" branches...: %s\n",
+ get_overall_line($br_found, $br_hit, "branch", "branches"))
+ if ($br_do);
+}
diff --git a/chromium/third_party/lcov-1.9/bin/updateversion.pl b/chromium/third_party/lcov-1.9/bin/updateversion.pl
new file mode 100755
index 00000000000..55f2bc1dd9a
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/bin/updateversion.pl
@@ -0,0 +1,146 @@
+#!/usr/bin/perl -w
+
+use strict;
+
+sub update_man_page($);
+sub update_bin_tool($);
+sub update_txt_file($);
+sub update_spec_file($);
+sub get_file_info($);
+
+our $directory = $ARGV[0];
+our $version = $ARGV[1];
+our $release = $ARGV[2];
+
+our @man_pages = ("man/gendesc.1", "man/genhtml.1", "man/geninfo.1",
+ "man/genpng.1", "man/lcov.1", "man/lcovrc.5");
+our @bin_tools = ("bin/gendesc", "bin/genhtml", "bin/geninfo",
+ "bin/genpng", "bin/lcov");
+our @txt_files = ("README");
+our @spec_files = ("rpm/lcov.spec");
+
+if (!defined($directory) || !defined($version) || !defined($release)) {
+ die("Usage: $0 <directory> <version string> <release string>\n");
+}
+
+foreach (@man_pages) {
+ print("Updating man page $_\n");
+ update_man_page($directory."/".$_);
+}
+foreach (@bin_tools) {
+ print("Updating bin tool $_\n");
+ update_bin_tool($directory."/".$_);
+}
+foreach (@txt_files) {
+ print("Updating text file $_\n");
+ update_txt_file($directory."/".$_);
+}
+foreach (@spec_files) {
+ print("Updating spec file $_\n");
+ update_spec_file($directory."/".$_);
+}
+print("Done.\n");
+
+sub get_file_info($)
+{
+ my ($filename) = @_;
+ my ($sec, $min, $hour, $day, $month, $year);
+ my @stat;
+
+ @stat = stat($filename);
+ ($sec, $min, $hour, $day, $month, $year) = localtime($stat[9]);
+ $year += 1900;
+ $month += 1;
+
+ return (sprintf("%04d-%02d-%02d", $year, $month, $day),
+ sprintf("%04d%02d%02d%02d%02d.%02d", $year, $month, $day,
+ $hour, $min, $sec),
+ sprintf("%o", $stat[2] & 07777));
+}
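get_file_info() packs three pieces of stat() data: an ISO date, a touch(1)-style timestamp, and the octal permission bits. The permission part can be sketched in isolation (the mode value is an example):

```perl
#!/usr/bin/perl -w
use strict;

# stat() field 2 is the raw mode word; masking with 07777 keeps only the
# permission bits, and "%o" renders them in octal as chmod expects.
my $mode  = 0100755;                 # example: regular file, rwxr-xr-x
my $perms = sprintf("%o", $mode & 07777);
print "$perms\n";                    # prints "755"
```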
+
+sub update_man_page($)
+{
+ my ($filename) = @_;
+ my @date = get_file_info($filename);
+ my $date_string = $date[0];
+ local *IN;
+ local *OUT;
+
+ $date_string =~ s/-/\\-/g;
+ open(IN, "<$filename") || die ("Error: cannot open $filename\n");
+ open(OUT, ">$filename.new") ||
+ die("Error: cannot create $filename.new\n");
+ while (<IN>) {
+ s/\"LCOV\s+\d+\.\d+\"/\"LCOV $version\"/g;
+ s/\d\d\d\d\\\-\d\d\\\-\d\d/$date_string/g;
+ print(OUT $_);
+ }
+ close(OUT);
+ close(IN);
+ chmod(oct($date[2]), "$filename.new");
+ system("mv", "-f", "$filename.new", "$filename");
+ system("touch", "$filename", "-t", $date[1]);
+}
+
+sub update_bin_tool($)
+{
+ my ($filename) = @_;
+ my @date = get_file_info($filename);
+ local *IN;
+ local *OUT;
+
+ open(IN, "<$filename") || die ("Error: cannot open $filename\n");
+ open(OUT, ">$filename.new") ||
+ die("Error: cannot create $filename.new\n");
+ while (<IN>) {
+ s/(our\s+\$lcov_version\s*=\s*["']).*(["'].*)$/$1LCOV version $version$2/g;
+ print(OUT $_);
+ }
+ close(OUT);
+ close(IN);
+ chmod(oct($date[2]), "$filename.new");
+ system("mv", "-f", "$filename.new", "$filename");
+ system("touch", "$filename", "-t", $date[1]);
+}
+
+sub update_txt_file($)
+{
+ my ($filename) = @_;
+ my @date = get_file_info($filename);
+ local *IN;
+ local *OUT;
+
+ open(IN, "<$filename") || die ("Error: cannot open $filename\n");
+ open(OUT, ">$filename.new") ||
+ die("Error: cannot create $filename.new\n");
+ while (<IN>) {
+ s/(Last\s+changes:\s+)\d\d\d\d-\d\d-\d\d/$1$date[0]/g;
+ print(OUT $_);
+ }
+ close(OUT);
+ close(IN);
+ chmod(oct($date[2]), "$filename.new");
+ system("mv", "-f", "$filename.new", "$filename");
+ system("touch", "$filename", "-t", $date[1]);
+}
+
+sub update_spec_file($)
+{
+ my ($filename) = @_;
+ my @date = get_file_info($filename);
+ local *IN;
+ local *OUT;
+
+ open(IN, "<$filename") || die ("Error: cannot open $filename\n");
+ open(OUT, ">$filename.new") ||
+ die("Error: cannot create $filename.new\n");
+ while (<IN>) {
+ s/^(Version:\s*)\d+\.\d+.*$/$1$version/;
+ s/^(Release:\s*).*$/$1$release/;
+ print(OUT $_);
+ }
+ close(OUT);
+ close(IN);
+ system("mv", "-f", "$filename.new", "$filename");
+ system("touch", "$filename", "-t", $date[1]);
+}
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/CHANGES b/chromium/third_party/lcov-1.9/contrib/galaxy/CHANGES
new file mode 100644
index 00000000000..b09883bd6c3
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/CHANGES
@@ -0,0 +1 @@
+09-04-2003 Initial checkin
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/README b/chromium/third_party/lcov-1.9/contrib/galaxy/README
new file mode 100644
index 00000000000..e21c509c0ae
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/README
@@ -0,0 +1,48 @@
+-------------------------------------------------
+- README file for the LCOV galaxy mapping tool -
+- Last changes: 2003-09-04 -
+-------------------------------------------------
+
+Description
+-----------
+
+Further README contents
+-----------------------
+ 1. Important files
+ 2. Installing
+ 3. Notes and Comments
+
+
+
+1. Important files
+------------------
+ README - This README file
+ CHANGES - List of changes between releases
+ conglomerate_functions.pl - Replacement file - Generates shading
+ genflat.pl - Generates info for shading from .info files
+ gen_makefile.sh - Replacement file - updates to postscript
+ posterize.pl - Replacement file - generates a final ps file
+
+2. Installing
+-------------
+ This install requires fcgp, which means the target kernel src must be on
+the system creating the map.
+
+ Download and copy the new files into the fcgp directory (note: it's always
+a good idea to have backups).
+
+ Run genflat.pl against your kernel info files
+ ./genflat.pl kernel.info kernel2.info > coverage.dat
+
+ Run the make command for the fcgp (Note: this can take a while)
+ make KERNEL_DIR=/usr/src/linux
+
+ Update posterize.pl as needed, typically to adjust page size, margins, and
+titles. Most of these settings will be broken out as command-line options in
+the future.
+
+ Run posterize.pl; this will generate the file poster.ps.
+
+3. Notes and Comments
+---------------------
+ This is a quick and dirty implementation suited for my needs. It does not
+perform any of the tiling the original did.
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/conglomerate_functions.pl b/chromium/third_party/lcov-1.9/contrib/galaxy/conglomerate_functions.pl
new file mode 100755
index 00000000000..4e259feee11
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/conglomerate_functions.pl
@@ -0,0 +1,195 @@
+#! /usr/bin/perl -w
+
+# Takes a set of ps images (belonging to one file) and produces a
+# conglomerate picture of that file: static functions in the middle,
+# others around it. Each one gets a box about its area.
+
+use strict;
+
+my $SCRUNCH = $ARGV [0];
+my $BOXSCRUNCH = $ARGV [1];
+my $Tmp;
+my $DEBUG = 1;
+
+shift @ARGV; # skip SCRUNCH and BOXSCRUNCH
+shift @ARGV;
+
+
+DecorateFuncs (@ARGV);
+
+
+#TMPFILE=`mktemp ${TMPDIR:-/tmp}/$$.XXXXXX`
+
+# Arrange.
+my $ArgList = "";
+
+foreach $Tmp (@ARGV) {
+ $ArgList .= "'$Tmp' ";
+}
+
+my @Arranged = `../draw_arrangement $SCRUNCH 0 360 0 $ArgList`;
+
+my $CFile = $ARGV [0];
+$CFile =~ s/\.c\..*$/.c/;
+if ($DEBUG) { print ("% Conglomeration of $CFile\n"); }
+
+print "gsave angle rotate\n";
+
+# Now output the file, except last line.
+my $LastLine = pop (@Arranged);
+my $Fill = Box_2 ($LastLine,$CFile);
+print $Fill;
+# Draw box with file name
+my @Output = Box ('normal', 'Helvetica-Bold', 32, $CFile, $LastLine);
+splice(@Output, $#Output, 0, "grestore\n");
+#print @Output;
+
+print (@Arranged);
+#add a duplicate box to test if this works
+print @Output;
+
+
+sub ParseBound
+{
+ my $BBoxLine = shift;
+
+ $BBoxLine =~ /(-?[\d.]+)\s+(-?[\d.]+)\s+(-?[\d.]+)\s+(-?[\d.]+)/;
+
+ # XMin, YMin, XMax, YMax
+ return ($1 * $BOXSCRUNCH, $2 * $BOXSCRUNCH,
+ $3 * $BOXSCRUNCH, $4 * $BOXSCRUNCH);
+}
+
+
+
+# Box (type, font, fontsize, Label, BBoxLine)
+sub Box
+{
+ my $Type = shift;
+ my $Font = shift;
+ my $Fontsize = shift;
+ my $Label = shift;
+ my $BBoxLine = shift;
+ my @Output = ();
+
+ # print (STDERR "Box ('$Type', '$Font', '$Fontsize', '$Label', '$BBoxLine')\n");
+ push (@Output, "% start of box\n");
+
+ push (@Output, "D5\n") if ($Type eq "dashed");
+
+ # print (STDERR "BBoxLine: '$BBoxLine'\n");
+ # print (STDERR "Parsed: '" . join ("' '", ParseBound ($BBoxLine)) . "\n");
+ my ($XMin, $YMin, $XMax, $YMax) = ParseBound ($BBoxLine);
+
+ my $LeftSpaced = $XMin + 6;
+ my $BottomSpaced = $YMin + 6;
+
+ # Put black box around it
+ push (@Output, (
+ "($Label) $LeftSpaced $BottomSpaced $Fontsize /$Font\n",
+ "$YMin $XMin $YMax $XMax U\n"
+ )
+ );
+
+ push (@Output, "D\n") if ($Type eq "dashed");
+ # fill bounding box
+ push (@Output, "% end of box\n");
+
+ # Output bounding box
+ push (@Output, "% bound $XMin $YMin $XMax $YMax\n");
+
+ return @Output;
+}
+
+sub Box_2
+{
+ my $BBoxLine = shift;
+ my $CFile = shift;
+ my $CovFile = "./coverage.dat";
+ my ($XMin, $YMin, $XMax, $YMax) = ParseBound ($BBoxLine);
+ my @output = `fgrep $CFile $CovFile`;
+ chomp $output[0];
+ my ($junk, $Class, $per) = split /\t/, $output[0];
+ return "$XMin $YMin $XMax $YMax $Class\n";
+}
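Box_2() expects coverage.dat records as produced by genflat.pl: three tab-separated fields, which it reads as filename, shading class, and coverage percentage. A sketch of parsing one such record (the sample line is hypothetical):

```perl
#!/usr/bin/perl -w
use strict;

my $record = "kernel/sched.c\t3\t47.5";  # hypothetical coverage.dat line
my ($file, $class, $per) = split /\t/, $record;
print "$file: class $class, $per%\n";    # kernel/sched.c: class 3, 47.5%
```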
+# Decorate (rgb-vals(1 string) filename)
+sub Decorate
+{
+ my $RGB = shift;
+ my $Filename = shift;
+
+ my @Input = ReadPS ($Filename);
+ my $LastLine = pop (@Input);
+ my @Output = ();
+
+ # Color at the beginning.
+ push (@Output, "C$RGB\n");
+
+ # Now output the file, except last line.
+ push (@Output, @Input);
+
+ # Draw dashed box with function name
+ # FIXME Make bound cover the label as well!
+ my $FuncName = $Filename;
+ $FuncName =~ s/^[^.]+\.c\.(.+?)\..*$/$1/;
+
+ push (@Output, Box ('dashed', 'Helvetica', 24, $FuncName, $LastLine));
+
+ # Slap over the top.
+ WritePS ($Filename, @Output);
+}
+
+
+
+# Add colored boxes around functions
+sub DecorateFuncs
+{
+ my $FName = "";
+ my $FType = "";
+
+ foreach $FName (@ARGV)
+ {
+ $FName =~ /\+([A-Z]+)\+/;
+ $FType = $1;
+
+ if ($FType eq 'STATIC') {
+ Decorate ("2", $FName); # Light green.
+ }
+ elsif ($FType eq 'INDIRECT') {
+ Decorate ("3", $FName); # Green.
+ }
+ elsif ($FType eq 'EXPORTED') {
+ Decorate ("4", $FName); # Red.
+ }
+ elsif ($FType eq 'NORMAL') {
+ Decorate ("5", $FName); # Blue.
+ }
+ else {
+ die ("Unknown extension $FName");
+ }
+ }
+}
+
+
+sub ReadPS
+{
+ my $Filename = shift;
+ my @Contents = ();
+
+ open (INFILE, "$Filename") or die ("Could not read $Filename: $!");
+ @Contents = <INFILE>;
+ close (INFILE);
+
+ return @Contents;
+}
+
+sub WritePS
+{
+ my $Filename = shift;
+
+ open (OUTFILE, ">$Filename")
+ or die ("Could not write $Filename: $!");
+ print (OUTFILE @_);
+ close (OUTFILE);
+}
+
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/gen_makefile.sh b/chromium/third_party/lcov-1.9/contrib/galaxy/gen_makefile.sh
new file mode 100755
index 00000000000..ab51a5ea9b4
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/gen_makefile.sh
@@ -0,0 +1,129 @@
+#! /bin/sh
+
+cd image
+
+# Space-optimized version: strip comments, drop precision to 3
+# figures, eliminate duplicates.
+# update(creinig): precision reduction is now done in data2ps, and comments
+# (except for % bound) are now also omitted from the start
+
+echo 'image.ps: image-unop.ps'
+#echo ' grep -v "^%" < $< | sed -e "s/\.\([0-9][0-9]\)[0-9]\+/.\1/g" -e "s/\(^\| \|-\)\([0-9][0-9][0-9]\)[0-9][0-9]\.[0-9][0-9]/\1\200/g" -e "s/\(^\| \|-\)\([0-9][0-9][0-9]\)[0-9]\.[0-9][0-9]/\1\20/g" -e "s/\(^\| \|-\)\([0-9][0-9][0-9]\)\.[0-9][0-9]/\1\2/g" -e "s/\(^\| \|-\)\([0-9][0-9]\)\.\([0-9]\)[0-9]/\1\2.\30/g" | awk "\$$0 ~ /lineto/ { if ( LASTLINE == \$$0 ) next; } { LASTLINE=\$$0; print; }" > $@'
+echo ' grep -v "^% bound" < $< > $@'
+# Need last comment (bounding box)
+echo ' tail -1 $< >> $@'
+echo ' ls -l image.ps image-unop.ps'
+
+echo 'image-unop.ps: outline.ps ring1.ps ring2.ps ring3.ps ring4.ps'
+echo ' cat ring[1234].ps > $@'
+# Bounding box is at bottom now. Next two won't change it.
+echo ' tail -1 $@ > bounding-box'
+echo ' cat outline.ps >> $@'
+echo ' cat ../tux.ps >> $@'
+echo ' cat bounding-box >> $@ && rm bounding-box'
+
+# Finished rings are precious!
+echo .SECONDARY: ring1.ps ring2.ps ring3.ps ring4.ps
+
+# Rings 1 and 4 are all thrown together.
+echo RING1_DEPS:=`find $RING1 -name '*.c.*' | sed 's/\.c.*/-all.ps/' | sort | uniq`
+echo RING4_DEPS:=`find $RING4 -name '*.c.*' | sed 's/\.c.*/-all.ps/' | sort | uniq`
+
+# Other rings are divided into dirs.
+echo RING2_DEPS:=`for d in $RING2; do echo $d-ring2.ps; done`
+echo RING3_DEPS:=`for d in $RING3; do echo $d-ring3.ps; done`
+echo
+
+# First ring starts at inner radius.
+echo 'ring1.ps: $(RING1_DEPS)'
+echo " @echo Making Ring 1"
+echo " @echo /angle 0 def > \$@"
+echo " @../draw_arrangement $FILE_SCRUNCH 0 360 $INNER_RADIUS \$(RING1_DEPS) >> \$@"
+echo " @echo Done Ring 1"
+
+# Second ring starts at end of above ring (assume it's circular, so
+# grab any bound).
+echo 'ring2.ps: ring1.ps $(RING2_DEPS)'
+echo " @echo Making Ring 2"
+echo " @echo /angle 0 def > \$@"
+echo " @../rotary_arrange.sh $DIR_SPACING" `for f in $RING2; do echo $f-ring2.ps $f-ring2.angle; done` '>> $@'
+echo " @echo Done Ring 2"
+
+# Third ring starts at end of second ring.
+echo 'ring3.ps: ring2.ps $(RING3_DEPS)'
+echo " @echo Making Ring 3"
+echo " @echo /angle 0 def > \$@"
+echo " @../rotary_arrange.sh $DIR_SPACING" `for f in $RING3; do echo $f-ring3.ps $f-ring3.angle; done` '>> $@'
+echo " @echo Done Ring 3"
+
+# Outer ring starts at end of fourth ring.
+# And it's just a big ring of drivers.
+echo 'ring4.ps: $(RING4_DEPS) ring3.radius'
+echo " @echo Making Ring 4"
+echo " @echo /angle 0 def > \$@"
+echo " @../draw_arrangement $FILE_SCRUNCH 0 360 \`cat ring3.radius\` \$(RING4_DEPS) >> \$@"
+echo " @echo Done Ring 4"
+echo
+
+# How to make directory picture: angle file contains start and end angle.
+# Second ring starts at end of above ring (assume it's circular, so
+# grab any bound).
+echo "%-ring2.ps: %-ring2.angle ring1.radius"
+echo " @echo Rendering \$@"
+echo " @../draw_arrangement $FILE_SCRUNCH 0 \`cat \$<\` \`cat ring1.radius\` \`find \$* -name '*-all.ps'\` > \$@"
+
+echo "%-ring3.ps: %-ring3.angle ring2.radius"
+echo " @echo Rendering \$@"
+echo " @../draw_arrangement $FILE_SCRUNCH 0 \`cat \$<\` \`cat ring2.radius\` \`find \$* -name '*-all.ps'\` > \$@"
+
+# How to extract radii
+echo "%.radius: %.ps"
+echo ' @echo scale=2\; `tail -1 $< | sed "s/^.* //"` + '$RING_SPACING' | bc > $@'
+echo
+
+# How to make angle. Need total angle for that directory, and weight.
+echo "%-ring2.angle: %-ring2.weight ring2.weight"
+echo ' @echo "scale=2; ( 360 - ' `echo $RING2 | wc -w` ' * ' $DIR_SPACING ') * `cat $<` / `cat ring2.weight`" | bc > $@'
+
+echo "%-ring3.angle: %-ring3.weight ring3.weight"
+echo ' @echo "scale=2; ( 360 - ' `echo $RING3 | wc -w` ' * ' $DIR_SPACING ') * `cat $<` / `cat ring3.weight`" | bc > $@'
+
+# How to make ring weights (sum directory totals).
+echo "ring2.weight:" `for d in $RING2; do echo $d-ring2.weight; done`
+echo ' @cat $^ | ../tally > $@'
+echo "ring3.weight:" `for d in $RING3; do echo $d-ring3.weight; done`
+echo ' @cat $^ | ../tally > $@'
+
+# How to make a weight.
+echo "%-ring2.weight:" `find $RING2 -name '*.c.*' | sed 's/\.c.*/-all.ps/' | sort | uniq`
+echo ' @../total_area.pl `find $* -name \*-all.ps` > $@'
+echo "%-ring3.weight:" `find $RING3 -name '*.c.*' | sed 's/\.c.*/-all.ps/' | sort | uniq`
+echo ' @../total_area.pl `find $* -name \*-all.ps` > $@'
+echo
+
+# Now rule to make the graphs of a function.
+#echo %.ps::%
+#echo ' @../function2ps `echo $< | sed '\''s/^.*\.\([^.]*\)\.\+.*$$/\1/'\''` > $@ $<'
+## Need the space.
+##echo ' @rm -f $<'
+#echo
+
+# Rule to make all from constituent parts.
+echo %-all.ps:
+echo " @echo Rendering \$*.c"
+echo " @../conglomerate_functions.pl $FUNCTION_SCRUNCH $BOX_SCRUNCH \$^ > \$@"
+# Need the space.
+#echo ' @rm -f $^'
+echo
+
+# Generating outline, requires all the angles.
+echo outline.ps: ../make-outline.sh ring1.ps ring2.ps ring3.ps ring4.ps `for f in $RING2; do echo $f-ring2.angle; done` `for f in $RING3; do echo $f-ring3.angle; done`
+echo " ../make-outline.sh $INNER_RADIUS $DIR_SPACING $RING_SPACING \"$RING1\" > \$@"
+echo
+
+# Now all the rules to make each function.
+for d in `find . -type d`; do
+ for f in `cd $d; ls *+.ps 2>/dev/null | sed 's/\.c\..*$//' | uniq`; do
+ echo $d/$f-all.ps: `cd $d; ls $f.c.* | sed -e "s?^?$d/?"`
+ done
+done
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/genflat.pl b/chromium/third_party/lcov-1.9/contrib/galaxy/genflat.pl
new file mode 100755
index 00000000000..b8b8ff47b1a
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/genflat.pl
@@ -0,0 +1,1238 @@
+#!/usr/bin/perl -w
+#
+# Copyright (c) International Business Machines Corp., 2002
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# genflat
+#
+# This script generates flat text output from .info files as created by the
+# geninfo script. Call it with --help to get information on usage and
+# available options. This code is based on the lcov genhtml script
+# by Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+#
+#
+# History:
+# 2003-08-19 ripped up Peter's script James M Kenefick Jr. <jkenefic@us.ibm.com>
+#
+
+use strict;
+use File::Basename;
+use Getopt::Long;
+# Constants
+our $lcov_version = "";
+our $lcov_url = "";
+
+# Specify coverage rate limits (in %) for classifying file entries
+# HI: $hi_limit <= rate <= 100 graph color: green
+# MED: $med_limit <= rate < $hi_limit graph color: orange
+# LO: 0 <= rate < $med_limit graph color: red
+our $hi_limit = 50;
+our $med_limit = 15;
+
+# Data related prototypes
+sub print_usage(*);
+sub gen_html();
+sub process_dir($);
+sub process_file($$$);
+sub info(@);
+sub read_info_file($);
+sub get_info_entry($);
+sub set_info_entry($$$$;$$);
+sub get_prefix(@);
+sub shorten_prefix($);
+sub get_dir_list(@);
+sub get_relative_base_path($);
+sub get_date_string();
+sub split_filename($);
+sub subtract_counts($$);
+sub add_counts($$);
+sub apply_baseline($$);
+sub combine_info_files($$);
+sub combine_info_entries($$);
+sub apply_prefix($$);
+sub escape_regexp($);
+
+
+# HTML related prototypes
+
+
+sub write_file_table(*$$$$);
+
+
+# Global variables & initialization
+our %info_data; # Hash containing all data from .info file
+our $dir_prefix; # Prefix to remove from all sub directories
+our %test_description; # Hash containing test descriptions if available
+our $date = get_date_string();
+
+our @info_filenames; # List of .info files to use as data source
+our $test_title; # Title for output as written to each page header
+our $output_directory; # Name of directory in which to store output
+our $base_filename; # Optional name of file containing baseline data
+our $desc_filename; # Name of file containing test descriptions
+our $css_filename; # Optional name of external stylesheet file to use
+our $quiet; # If set, suppress information messages
+our $help; # Help option flag
+our $version; # Version option flag
+our $show_details; # If set, generate detailed directory view
+our $no_prefix; # If set, do not remove filename prefix
+our $frames; # If set, use frames for source code view
+our $keep_descriptions; # If set, do not remove unused test case descriptions
+our $no_sourceview; # If set, do not create a source code view for each file
+our $tab_size = 8; # Number of spaces to use in place of tab
+
+our $cwd = `pwd`; # Current working directory
+chomp($cwd);
+our $tool_dir = dirname($0); # Directory where genhtml tool is installed
+
+
+#
+# Code entry point
+#
+
+# Add current working directory if $tool_dir is not already an absolute path
+if (! ($tool_dir =~ /^\/(.*)$/))
+{
+ $tool_dir = "$cwd/$tool_dir";
+}
+
+# Parse command line options
+if (!GetOptions("output-directory=s" => \$output_directory,
+ "css-file=s" => \$css_filename,
+ "baseline-file=s" => \$base_filename,
+ "prefix=s" => \$dir_prefix,
+ "num-spaces=i" => \$tab_size,
+ "no-prefix" => \$no_prefix,
+ "quiet" => \$quiet,
+ "help" => \$help,
+ "version" => \$version
+ ))
+{
+ print_usage(*STDERR);
+ exit(1);
+}
+
+@info_filenames = @ARGV;
+
+# Check for help option
+if ($help)
+{
+ print_usage(*STDOUT);
+ exit(0);
+}
+
+# Check for version option
+if ($version)
+{
+ print($lcov_version."\n");
+ exit(0);
+}
+
+# Check for info filename
+if (!@info_filenames)
+{
+ print(STDERR "No filename specified\n");
+ print_usage(*STDERR);
+ exit(1);
+}
+
+# Generate a title if none is specified
+if (!$test_title)
+{
+ if (scalar(@info_filenames) == 1)
+ {
+ # Only one filename specified, use it as title
+ $test_title = basename($info_filenames[0]);
+ }
+ else
+ {
+ # More than one filename specified, use default title
+ $test_title = "unnamed";
+ }
+}
+
+# Make sure tab_size is within valid range
+if ($tab_size < 1)
+{
+ print(STDERR "ERROR: invalid number of spaces specified: ".
+ "$tab_size!\n");
+ exit(1);
+}
+
+# Do something
+gen_html();
+
+exit(0);
+
+
+
+#
+# print_usage(handle)
+#
+# Print usage information.
+#
+
+sub print_usage(*)
+{
+ local *HANDLE = $_[0];
+ my $executable_name = basename($0);
+
+ print(HANDLE <<END_OF_USAGE);
+Usage: $executable_name [OPTIONS] INFOFILE(S)
+
+Create flat coverage output for data found in INFOFILE. Note that INFOFILE
+may also be a list of filenames.
+
+ -h, --help Print this help, then exit
+ -v, --version Print version number, then exit
+ -q, --quiet Do not print progress messages
+ -b, --baseline-file BASEFILE Use BASEFILE as baseline file
+ -p, --prefix PREFIX Remove PREFIX from all directory names
+ --no-prefix Do not remove prefix from directory names
+ --no-source Do not create source code view
+ --num-spaces NUM Replace tabs with NUM spaces in source view
+
+See $lcov_url for more information about this tool.
+END_OF_USAGE
+ ;
+}
+
+
+#
+# gen_html()
+#
+# Generate a set of HTML pages from contents of .info file INFO_FILENAME.
+# Files will be written to the current directory. If provided, test case
+# descriptions will be read from .tests file TEST_FILENAME and included
+# in output.
+#
+# Die on error.
+#
+
+sub gen_html()
+{
+ local *HTML_HANDLE;
+ my %overview;
+ my %base_data;
+ my $lines_found;
+ my $lines_hit;
+ my $overall_found = 0;
+ my $overall_hit = 0;
+ my $dir_name;
+ my $link_name;
+ my @dir_list;
+ my %new_info;
+
+ # Read in all specified .info files
+ foreach (@info_filenames)
+ {
+ info("Reading data file $_\n");
+ %new_info = %{read_info_file($_)};
+
+ # Combine %new_info with %info_data
+ %info_data = %{combine_info_files(\%info_data, \%new_info)};
+ }
+
+ info("Found %d entries.\n", scalar(keys(%info_data)));
+
+ # Read and apply baseline data if specified
+ if ($base_filename)
+ {
+ # Read baseline file
+ info("Reading baseline file $base_filename\n");
+ %base_data = %{read_info_file($base_filename)};
+ info("Found %d entries.\n", scalar(keys(%base_data)));
+
+ # Apply baseline
+ info("Subtracting baseline data.\n");
+ %info_data = %{apply_baseline(\%info_data, \%base_data)};
+ }
+
+ @dir_list = get_dir_list(keys(%info_data));
+
+ if ($no_prefix)
+ {
+ # User requested that we leave filenames alone
+ info("User asked not to remove filename prefix\n");
+ }
+ elsif (!defined($dir_prefix))
+ {
+ # Get prefix common to most directories in list
+ $dir_prefix = get_prefix(@dir_list);
+
+ if ($dir_prefix)
+ {
+ info("Found common filename prefix \"$dir_prefix\"\n");
+ }
+ else
+ {
+ info("No common filename prefix found!\n");
+ $no_prefix=1;
+ }
+ }
+ else
+ {
+ info("Using user-specified filename prefix \"".
+ "$dir_prefix\"\n");
+ }
+
+ # Process each subdirectory and collect overview information
+ foreach $dir_name (@dir_list)
+ {
+ ($lines_found, $lines_hit) = process_dir($dir_name);
+
+ $overview{$dir_name} = "$lines_found,$lines_hit, ";
+ $overall_found += $lines_found;
+ $overall_hit += $lines_hit;
+ }
+
+
+ if ($overall_found == 0)
+ {
+ info("Warning: No lines found!\n");
+ }
+ else
+ {
+ info("Overall coverage rate: %d of %d lines (%.1f%%)\n",
+ $overall_hit, $overall_found,
+ $overall_hit*100/$overall_found);
+ }
+}
+
+
+#
+# process_dir(dir_name)
+#
+
+sub process_dir($)
+{
+ my $abs_dir = $_[0];
+ my $trunc_dir;
+ my $rel_dir = $abs_dir;
+ my $base_dir;
+ my $filename;
+ my %overview;
+ my $lines_found;
+ my $lines_hit;
+ my $overall_found=0;
+ my $overall_hit=0;
+ my $base_name;
+ my $extension;
+ my $testdata;
+ my %testhash;
+ local *HTML_HANDLE;
+
+ # Remove prefix if applicable
+ if (!$no_prefix)
+ {
+ # Match directory name beginning with $dir_prefix
+ $rel_dir = apply_prefix($rel_dir, $dir_prefix);
+ }
+
+ $trunc_dir = $rel_dir;
+
+ # Remove leading /
+ if ($rel_dir =~ /^\/(.*)$/)
+ {
+ $rel_dir = substr($rel_dir, 1);
+ }
+
+ $base_dir = get_relative_base_path($rel_dir);
+
+ $abs_dir = escape_regexp($abs_dir);
+
+ # Match filenames which specify files in this directory, not including
+ # sub-directories
+ foreach $filename (grep(/^$abs_dir\/[^\/]*$/,keys(%info_data)))
+ {
+ ($lines_found, $lines_hit, $testdata) =
+ process_file($trunc_dir, $rel_dir, $filename);
+
+ $base_name = basename($filename);
+
+ $overview{$base_name} = "$lines_found,$lines_hit";
+
+ $testhash{$base_name} = $testdata;
+
+ $overall_found += $lines_found;
+ $overall_hit += $lines_hit;
+ }
+ write_file_table($abs_dir, "./linux/", \%overview, \%testhash, 4);
+
+
+ # Calculate resulting line counts
+ return ($overall_found, $overall_hit);
+}
+
+
+#
+# process_file(trunc_dir, rel_dir, filename)
+#
+
+sub process_file($$$)
+{
+ info("Processing file ".apply_prefix($_[2], $dir_prefix)."\n");
+ my $trunc_dir = $_[0];
+ my $rel_dir = $_[1];
+ my $filename = $_[2];
+ my $base_name = basename($filename);
+ my $base_dir = get_relative_base_path($rel_dir);
+ my $testdata;
+ my $testcount;
+ my $sumcount;
+ my $funcdata;
+ my $lines_found;
+ my $lines_hit;
+ my @source;
+ my $pagetitle;
+
+ ($testdata, $sumcount, $funcdata, $lines_found, $lines_hit) =
+ get_info_entry($info_data{$filename});
+ return ($lines_found, $lines_hit, $testdata);
+}
+
+
+#
+# read_info_file(info_filename)
+#
+# Read in the contents of the .info file specified by INFO_FILENAME. Data will
+# be returned as a reference to a hash containing the following mappings:
+#
+# %result: for each filename found in file -> \%data
+#
+# %data: "test" -> \%testdata
+# "sum" -> \%sumcount
+# "func" -> \%funcdata
+# "found" -> $lines_found (number of instrumented lines found in file)
+# "hit" -> $lines_hit (number of executed lines in file)
+#
+# %testdata: name of test affecting this file -> \%testcount
+#
+# %testcount: line number -> execution count for a single test
+# %sumcount : line number -> execution count for all tests
+# %funcdata : line number -> name of function beginning at that line
+#
+# Note that .info file sections referring to the same file and test name
+# will automatically be combined by adding all execution counts.
+#
+# Note that if INFO_FILENAME ends with ".gz", it is assumed that the file
+# is compressed using GZIP. If available, GUNZIP will be used to decompress
+# this file.
+#
+# Die on error
+#
+
+sub read_info_file($)
+{
+ my $tracefile = $_[0]; # Name of tracefile
+ my %result; # Resulting hash: file -> data
+ my $data; # Data handle for current entry
+ my $testdata; # " "
+ my $testcount; # " "
+ my $sumcount; # " "
+ my $funcdata; # " "
+ my $line; # Current line read from .info file
+ my $testname; # Current test name
+ my $filename; # Current filename
+ my $hitcount; # Count for lines hit
+ my $count; # Execution count of current line
+ my $negative; # If set, warn about negative counts
+ local *INFO_HANDLE; # Filehandle for .info file
+
+ # Check if file exists and is readable
+ stat($_[0]);
+ if (!(-r _))
+ {
+ die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ # Check if this is really a plain file
+ if (!(-f _))
+ {
+ die("ERROR: not a plain file: $_[0]!\n");
+ }
+
+ # Check for .gz extension
+ if ($_[0] =~ /^(.*)\.gz$/)
+ {
+ # Check for availability of GZIP tool
+ system("gunzip -h >/dev/null 2>/dev/null")
+ and die("ERROR: gunzip command not available!\n");
+
+ # Check integrity of compressed file
+ system("gunzip -t $_[0] >/dev/null 2>/dev/null")
+ and die("ERROR: integrity check failed for ".
+ "compressed file $_[0]!\n");
+
+ # Open compressed file
+ open(INFO_HANDLE, "gunzip -c $_[0]|")
+ or die("ERROR: cannot start gunzip to uncompress ".
+ "file $_[0]!\n");
+ }
+ else
+ {
+ # Open uncompressed file
+ open(INFO_HANDLE, $_[0])
+ or die("ERROR: cannot read file $_[0]!\n");
+ }
+
+ $testname = "";
+ while (<INFO_HANDLE>)
+ {
+ chomp($_);
+ $line = $_;
+
+ # Switch statement
+ foreach ($line)
+ {
+ /^TN:(\w+)/ && do
+ {
+ # Test name information found
+ $testname = $1;
+ last;
+ };
+
+ /^[SK]F:(.*)/ && do
+ {
+ # Filename information found
+ # Retrieve data for new entry
+ $filename = $1;
+
+ $data = $result{$filename};
+ ($testdata, $sumcount, $funcdata) =
+ get_info_entry($data);
+
+ if (defined($testname))
+ {
+ $testcount = $testdata->{$testname};
+ }
+ else
+ {
+ my %new_hash;
+ $testcount = \%new_hash;
+ }
+ last;
+ };
+
+ /^DA:(\d+),(-?\d+)/ && do
+ {
+ # Fix negative counts
+ $count = $2 < 0 ? 0 : $2;
+ if ($2 < 0)
+ {
+ $negative = 1;
+ }
+ # Execution count found, add to structure
+ # Add summary counts
+ $sumcount->{$1} += $count;
+
+ # Add test-specific counts
+ if (defined($testname))
+ {
+ $testcount->{$1} += $count;
+ }
+ last;
+ };
+
+ /^FN:(\d+),([^,]+)/ && do
+ {
+ # Function data found, add to structure
+ $funcdata->{$1} = $2;
+ last;
+ };
+
+ /^end_of_record/ && do
+ {
+ # Found end of section marker
+ if ($filename)
+ {
+ # Store current section data
+ if (defined($testname))
+ {
+ $testdata->{$testname} =
+ $testcount;
+ }
+ set_info_entry($data, $testdata,
+ $sumcount, $funcdata);
+ $result{$filename} = $data;
+ }
+
+ };
+
+ # default
+ last;
+ }
+ }
+ close(INFO_HANDLE);
+
+ # Calculate lines_found and lines_hit for each file
+ foreach $filename (keys(%result))
+ {
+ $data = $result{$filename};
+
+ ($testdata, $sumcount, $funcdata) = get_info_entry($data);
+
+ $data->{"found"} = scalar(keys(%{$sumcount}));
+ $hitcount = 0;
+
+ foreach (keys(%{$sumcount}))
+ {
+ if ($sumcount->{$_} >0) { $hitcount++; }
+ }
+
+ $data->{"hit"} = $hitcount;
+
+ $result{$filename} = $data;
+ }
+
+ if (scalar(keys(%result)) == 0)
+ {
+ die("ERROR: No valid records found in tracefile $tracefile\n");
+ }
+ if ($negative)
+ {
+ warn("WARNING: Negative counts found in tracefile ".
+ "$tracefile\n");
+ }
+
+ return(\%result);
+}
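Taken together, the regexes in the parser above accept tracefiles of the following shape (a minimal made-up example: `TN:` names the test case, `SF:` opens a per-file section, `FN:` records line,function pairs, `DA:` records line,count pairs, and `end_of_record` closes the section):

```
TN:test_example
SF:/usr/src/fs/ext2.c
FN:12,ext2_read
DA:12,4
DA:15,0
end_of_record
```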
+
+
+#
+# get_info_entry(hash_ref)
+#
+# Retrieve data from an entry of the structure generated by read_info_file().
+# Return a list of references to hashes:
+# (test data hash ref, sum count hash ref, funcdata hash ref, lines found,
+# lines hit)
+#
+
+sub get_info_entry($)
+{
+ my $testdata_ref = $_[0]->{"test"};
+ my $sumcount_ref = $_[0]->{"sum"};
+ my $funcdata_ref = $_[0]->{"func"};
+ my $lines_found = $_[0]->{"found"};
+ my $lines_hit = $_[0]->{"hit"};
+
+ return ($testdata_ref, $sumcount_ref, $funcdata_ref, $lines_found,
+ $lines_hit);
+}
+
+
+#
+# set_info_entry(hash_ref, testdata_ref, sumcount_ref, funcdata_ref[,
+# lines_found, lines_hit])
+#
+# Update the hash referenced by HASH_REF with the provided data references.
+#
+
+sub set_info_entry($$$$;$$)
+{
+ my $data_ref = $_[0];
+
+ $data_ref->{"test"} = $_[1];
+ $data_ref->{"sum"} = $_[2];
+ $data_ref->{"func"} = $_[3];
+
+ if (defined($_[4])) { $data_ref->{"found"} = $_[4]; }
+ if (defined($_[5])) { $data_ref->{"hit"} = $_[5]; }
+}
+
+
+#
+# get_prefix(filename_list)
+#
+# Search FILENAME_LIST for a directory prefix which is common to as many
+# list entries as possible, so that removing this prefix will minimize the
+# sum of the lengths of all resulting shortened filenames.
+#
+
+sub get_prefix(@)
+{
+ my @filename_list = @_; # provided list of filenames
+ my %prefix; # mapping: prefix -> sum of lengths
+ my $current; # Temporary iteration variable
+
+ # Find list of prefixes
+ foreach (@filename_list)
+ {
+ # Need explicit assignment to get a copy of $_ so that
+ # shortening the contained prefix does not affect the list
+ $current = shorten_prefix($_);
+ while ($current = shorten_prefix($current))
+ {
+ # Skip rest if the remaining prefix has already been
+ # added to hash
+ if ($prefix{$current}) { last; }
+
+ # Initialize with 0
+ $prefix{$current}="0";
+ }
+
+ }
+
+ # Calculate sum of lengths for all prefixes
+ foreach $current (keys(%prefix))
+ {
+ foreach (@filename_list)
+ {
+ # Add original length
+ $prefix{$current} += length($_);
+
+ # Check whether prefix matches
+ if (substr($_, 0, length($current)) eq $current)
+ {
+ # Subtract prefix length for this filename
+ $prefix{$current} -= length($current);
+ }
+ }
+ }
+
+ # Find and return prefix with minimal sum
+ $current = (keys(%prefix))[0];
+
+ foreach (keys(%prefix))
+ {
+ if ($prefix{$_} < $prefix{$current})
+ {
+ $current = $_;
+ }
+ }
+
+ return($current);
+}
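A lightly condensed, standalone sketch of the search above (`shorten_prefix` is defined later in the script and reproduced here; the paths are made up). Note that only prefixes at least two directory levels above each file become candidates, and the candidate minimizing the total shortened length wins:

```perl
#!/usr/bin/perl
# Standalone sketch of get_prefix(): among all candidate directory
# prefixes, pick the one that minimizes the summed length of the
# shortened filenames. Lightly condensed from the subs in this script.
use strict;
use warnings;

sub shorten_prefix
{
	my @list = split("/", $_[0]);
	pop(@list);
	return join("/", @list);
}

sub get_prefix
{
	my @filename_list = @_;
	my %prefix;	# mapping: prefix -> sum of lengths

	# Collect candidate prefixes
	foreach my $file (@filename_list)
	{
		my $current = shorten_prefix($file);
		while ($current = shorten_prefix($current))
		{
			last if ($prefix{$current});
			$prefix{$current} = "0";
		}
	}

	# Sum resulting filename lengths for each candidate
	foreach my $current (keys(%prefix))
	{
		foreach my $file (@filename_list)
		{
			$prefix{$current} += length($file);
			if (substr($file, 0, length($current)) eq $current)
			{
				$prefix{$current} -= length($current);
			}
		}
	}

	# Return the prefix with the minimal sum
	my ($best) = sort { $prefix{$a} <=> $prefix{$b} } keys(%prefix);
	return $best;
}

print get_prefix("/usr/src/fs/ext2.c", "/usr/src/fs/ext3.c",
		 "/usr/src/mm/slab.c"), "\n";   # /usr/src
```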
+
+
+#
+# shorten_prefix(prefix)
+#
+# Return PREFIX shortened by last directory component.
+#
+
+sub shorten_prefix($)
+{
+ my @list = split("/", $_[0]);
+
+ pop(@list);
+ return join("/", @list);
+}
+
+
+
+#
+# get_dir_list(filename_list)
+#
+# Return sorted list of directories for each entry in given FILENAME_LIST.
+#
+
+sub get_dir_list(@)
+{
+ my %result;
+
+ foreach (@_)
+ {
+ $result{shorten_prefix($_)} = "";
+ }
+
+ return(sort(keys(%result)));
+}
+
+
+#
+# get_relative_base_path(subdirectory)
+#
+# Return a relative path string which references the base path when applied
+# in SUBDIRECTORY.
+#
+# Example: get_relative_base_path("fs/mm") -> "../../"
+#
+
+sub get_relative_base_path($)
+{
+ my $result = "";
+ my $index;
+
+ # Make an empty directory path a special case
+ if (!$_[0]) { return(""); }
+
+ # Count number of /s in path
+ $index = ($_[0] =~ s/\//\//g);
+
+ # Add a ../ to $result for each / in the directory path + 1
+ for (; $index>=0; $index--)
+ {
+ $result .= "../";
+ }
+
+ return $result;
+}
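As a quick sanity check of the sub above (reproduced verbatim so it runs standalone), a directory N levels deep yields N+1 `../` components, matching the `fs/mm` example in the doc comment:

```perl
#!/usr/bin/perl
# Standalone check of get_relative_base_path(), reproduced verbatim:
# the no-op substitution counts the slashes, and one extra "../" is
# added for the directory itself.
use strict;
use warnings;

sub get_relative_base_path($)
{
	my $result = "";
	my $index;

	# Make an empty directory path a special case
	if (!$_[0]) { return(""); }

	# Count number of /s in path
	$index = ($_[0] =~ s/\//\//g);

	# Add a ../ to $result for each / in the directory path + 1
	for (; $index >= 0; $index--)
	{
		$result .= "../";
	}

	return $result;
}

# Must be a variable: the counting s/// would die on a read-only literal
my $dir = "fs/mm";
print get_relative_base_path($dir), "\n";   # ../../
```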
+
+
+#
+# get_date_string()
+#
+# Return the current date in the form: yyyy-mm-dd
+#
+
+sub get_date_string()
+{
+ my $year;
+ my $month;
+ my $day;
+
+ ($year, $month, $day) = (localtime())[5, 4, 3];
+
+ return sprintf("%d-%02d-%02d", $year+1900, $month+1, $day);
+}
+
+
+#
+# split_filename(filename)
+#
+# Return (path, filename, extension) for a given FILENAME.
+#
+
+sub split_filename($)
+{
+ if (!$_[0]) { return(); }
+ my @path_components = split('/', $_[0]);
+ my @file_components = split('\.', pop(@path_components));
+ my $extension = pop(@file_components);
+
+ return (join("/",@path_components), join(".",@file_components),
+ $extension);
+}
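The sub above can be exercised standalone (reproduced verbatim; the path is a made-up example):

```perl
#!/usr/bin/perl
# Standalone check of split_filename(): splits a path into
# (directory, basename without extension, extension).
use strict;
use warnings;

sub split_filename($)
{
	if (!$_[0]) { return(); }
	my @path_components = split('/', $_[0]);
	my @file_components = split('\.', pop(@path_components));
	my $extension = pop(@file_components);

	return (join("/", @path_components), join(".", @file_components),
		$extension);
}

my ($path, $base, $ext) = split_filename("fs/ext2/inode.c");
print "$path | $base | $ext\n";   # fs/ext2 | inode | c
```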
+
+
+#
+# write_file_table(filehandle, base_dir, overview, testhash, fileview)
+#
+# Write a complete file table. OVERVIEW is a reference to a hash containing
+# the following mapping:
+#
+# filename -> "lines_found,lines_hit,page_link"
+#
+# TESTHASH is a reference to the following hash:
+#
+# filename -> \%testdata
+# %testdata: name of test affecting this file -> \%testcount
+# %testcount: line number -> execution count for a single test
+#
+# Heading of first column is "Filename" if FILEVIEW is true, "Directory name"
+# otherwise.
+#
+
+sub write_file_table(*$$$$)
+{
+ my $dir = $_[0];
+ my $base_dir = $_[1];
+ my %overview = %{$_[2]};
+ my %testhash = %{$_[3]};
+ my $fileview = $_[4];
+ my $filename;
+ my $hit;
+ my $found;
+ my $classification;
+ my $rate_string;
+ my $rate;
+ my $junk;
+
+
+ foreach $filename (sort(keys(%overview)))
+ {
+ ($found, $hit, $junk) = split(",", $overview{$filename});
+ # Guard against files with no instrumented lines
+ # (avoids a division-by-zero error)
+ $rate = $found ? $hit * 100 / $found : 0;
+ $rate_string = sprintf("%.1f", $rate);
+
+ if ($rate < 0.001) { $classification = "None"; }
+ elsif ($rate < $med_limit) { $classification = "Lo"; }
+ elsif ($rate < $hi_limit) { $classification = "Med"; }
+ else { $classification = "Hi"; }
+
+ print "$dir/$filename\t$classification\t$rate_string\n";
+
+ }
+}
+
+
+#
+# info(printf_parameter)
+#
+# Use printf to write PRINTF_PARAMETER to stdout only when the $quiet flag
+# is not set.
+#
+
+sub info(@)
+{
+ if (!$quiet)
+ {
+ # Print info string
+ printf(STDERR @_);
+ }
+}
+
+
+#
+# subtract_counts(data_ref, base_ref)
+#
+
+sub subtract_counts($$)
+{
+ my %data = %{$_[0]};
+ my %base = %{$_[1]};
+ my $line;
+ my $data_count;
+ my $base_count;
+ my $hit = 0;
+ my $found = 0;
+
+ foreach $line (keys(%data))
+ {
+ $found++;
+ $data_count = $data{$line};
+ $base_count = $base{$line};
+
+ if (defined($base_count))
+ {
+ $data_count -= $base_count;
+
+ # Make sure we don't get negative numbers
+ if ($data_count<0) { $data_count = 0; }
+ }
+
+ $data{$line} = $data_count;
+ if ($data_count > 0) { $hit++; }
+ }
+
+ return (\%data, $found, $hit);
+}
+
+
+#
+# add_counts(data1_ref, data2_ref)
+#
+# DATA1_REF and DATA2_REF are references to hashes containing a mapping
+#
+# line number -> execution count
+#
+# Return a list (RESULT_REF, LINES_FOUND, LINES_HIT) where RESULT_REF
+# is a reference to a hash containing the combined mapping in which
+# execution counts are added.
+#
+
+sub add_counts($$)
+{
+ my %data1 = %{$_[0]}; # Hash 1
+ my %data2 = %{$_[1]}; # Hash 2
+ my %result; # Resulting hash
+ my $line; # Current line iteration scalar
+ my $data1_count; # Count of line in hash1
+ my $data2_count; # Count of line in hash2
+ my $found = 0; # Total number of lines found
+ my $hit = 0; # Number of lines with a count > 0
+
+ foreach $line (keys(%data1))
+ {
+ $data1_count = $data1{$line};
+ $data2_count = $data2{$line};
+
+ # Add counts if present in both hashes
+ if (defined($data2_count)) { $data1_count += $data2_count; }
+
+ # Store sum in %result
+ $result{$line} = $data1_count;
+
+ $found++;
+ if ($data1_count > 0) { $hit++; }
+ }
+
+ # Add lines unique to data2
+ foreach $line (keys(%data2))
+ {
+ # Skip lines already in data1
+ if (defined($data1{$line})) { next; }
+
+ # Copy count from data2
+ $result{$line} = $data2{$line};
+
+ $found++;
+ if ($result{$line} > 0) { $hit++; }
+ }
+
+ return (\%result, $found, $hit);
+}
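A lightly condensed, standalone sketch of the merge above, applied to two hypothetical test runs over the same file:

```perl
#!/usr/bin/perl
# Condensed sketch of add_counts(): merge two "line number ->
# execution count" hashes, summing counts for lines present in both,
# and report (result ref, lines found, lines hit).
use strict;
use warnings;

sub add_counts
{
	my %data1 = %{$_[0]};
	my %data2 = %{$_[1]};
	my %result;
	my ($found, $hit) = (0, 0);

	foreach my $line (keys(%data1))
	{
		my $count = $data1{$line};
		# Add counts if present in both hashes
		$count += $data2{$line} if (defined($data2{$line}));
		$result{$line} = $count;
		$found++;
		$hit++ if ($count > 0);
	}

	# Add lines unique to data2
	foreach my $line (keys(%data2))
	{
		next if (defined($data1{$line}));
		$result{$line} = $data2{$line};
		$found++;
		$hit++ if ($result{$line} > 0);
	}

	return (\%result, $found, $hit);
}

# Two test runs covering overlapping lines of the same file
my %run1 = (10 => 1, 11 => 0, 12 => 3);
my %run2 = (11 => 2, 13 => 0);

my ($sum, $found, $hit) = add_counts(\%run1, \%run2);
print "found=$found hit=$hit line11=$sum->{11}\n";
# found=4 hit=3 line11=2
```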
+
+
+#
+# apply_baseline(data_ref, baseline_ref)
+#
+# Subtract the execution counts found in the baseline hash referenced by
+# BASELINE_REF from actual data in DATA_REF.
+#
+
+sub apply_baseline($$)
+{
+ my %data_hash = %{$_[0]};
+ my %base_hash = %{$_[1]};
+ my $filename;
+ my $testname;
+ my $data;
+ my $data_testdata;
+ my $data_funcdata;
+ my $data_count;
+ my $base;
+ my $base_testdata;
+ my $base_count;
+ my $sumcount;
+ my $found;
+ my $hit;
+
+ foreach $filename (keys(%data_hash))
+ {
+ # Get data set for data and baseline
+ $data = $data_hash{$filename};
+ $base = $base_hash{$filename};
+
+ # Get set entries for data and baseline
+ ($data_testdata, undef, $data_funcdata) =
+ get_info_entry($data);
+ ($base_testdata, $base_count) = get_info_entry($base);
+
+ # Sumcount has to be calculated anew
+ $sumcount = {};
+
+ # For each test case, subtract test specific counts
+ foreach $testname (keys(%{$data_testdata}))
+ {
+ # Get counts of both data and baseline
+ $data_count = $data_testdata->{$testname};
+
+ $hit = 0;
+
+ ($data_count, undef, $hit) =
+ subtract_counts($data_count, $base_count);
+
+ # Check whether this test case did hit any line at all
+ if ($hit > 0)
+ {
+ # Write back resulting hash
+ $data_testdata->{$testname} = $data_count;
+ }
+ else
+ {
+ # Delete test case which did not impact this
+ # file
+ delete($data_testdata->{$testname});
+ }
+
+ # Add counts to sum of counts
+ ($sumcount, $found, $hit) =
+ add_counts($sumcount, $data_count);
+ }
+
+ # Write back resulting entry
+ set_info_entry($data, $data_testdata, $sumcount,
+ $data_funcdata, $found, $hit);
+
+ $data_hash{$filename} = $data;
+ }
+
+ return (\%data_hash);
+}
+
+
+#
+# combine_info_entries(entry_ref1, entry_ref2)
+#
+# Combine .info data entry hashes referenced by ENTRY_REF1 and ENTRY_REF2.
+# Return reference to resulting hash.
+#
+
+sub combine_info_entries($$)
+{
+ my $entry1 = $_[0]; # Reference to hash containing first entry
+ my $testdata1;
+ my $sumcount1;
+ my $funcdata1;
+
+ my $entry2 = $_[1]; # Reference to hash containing second entry
+ my $testdata2;
+ my $sumcount2;
+ my $funcdata2;
+
+ my %result; # Hash containing combined entry
+ my %result_testdata;
+ my $result_sumcount = {};
+ my %result_funcdata;
+ my $lines_found;
+ my $lines_hit;
+
+ my $testname;
+
+ # Retrieve data
+ ($testdata1, $sumcount1, $funcdata1) = get_info_entry($entry1);
+ ($testdata2, $sumcount2, $funcdata2) = get_info_entry($entry2);
+
+ # Combine funcdata
+ foreach (keys(%{$funcdata1}))
+ {
+ $result_funcdata{$_} = $funcdata1->{$_};
+ }
+
+ foreach (keys(%{$funcdata2}))
+ {
+ $result_funcdata{$_} = $funcdata2->{$_};
+ }
+
+ # Combine testdata
+ foreach $testname (keys(%{$testdata1}))
+ {
+ if (defined($testdata2->{$testname}))
+ {
+ # testname is present in both entries, requires
+ # combination
+ ($result_testdata{$testname}) =
+ add_counts($testdata1->{$testname},
+ $testdata2->{$testname});
+ }
+ else
+ {
+ # testname only present in entry1, add to result
+ $result_testdata{$testname} = $testdata1->{$testname};
+ }
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ foreach $testname (keys(%{$testdata2}))
+ {
+ # Skip testnames already covered by previous iteration
+ if (defined($testdata1->{$testname})) { next; }
+
+ # testname only present in entry2, add to result hash
+ $result_testdata{$testname} = $testdata2->{$testname};
+
+ # update sum count hash
+ ($result_sumcount, $lines_found, $lines_hit) =
+ add_counts($result_sumcount,
+ $result_testdata{$testname});
+ }
+
+ # Calculate resulting sumcount
+
+ # Store result
+ set_info_entry(\%result, \%result_testdata, $result_sumcount,
+ \%result_funcdata, $lines_found, $lines_hit);
+
+ return(\%result);
+}
+
+
+#
+# combine_info_files(info_ref1, info_ref2)
+#
+# Combine .info data in hashes referenced by INFO_REF1 and INFO_REF2. Return
+# reference to resulting hash.
+#
+
+sub combine_info_files($$)
+{
+ my %hash1 = %{$_[0]};
+ my %hash2 = %{$_[1]};
+ my $filename;
+
+ foreach $filename (keys(%hash2))
+ {
+ if ($hash1{$filename})
+ {
+ # Entry already exists in hash1, combine them
+ $hash1{$filename} =
+ combine_info_entries($hash1{$filename},
+ $hash2{$filename});
+ }
+ else
+ {
+ # Entry is unique in both hashes, simply add to
+ # resulting hash
+ $hash1{$filename} = $hash2{$filename};
+ }
+ }
+
+ return(\%hash1);
+}
+
+
+#
+# apply_prefix(filename, prefix)
+#
+# If FILENAME begins with PREFIX, remove PREFIX from FILENAME and return
+# resulting string, otherwise return FILENAME.
+#
+
+sub apply_prefix($$)
+{
+ my $filename = $_[0];
+ my $prefix = $_[1];
+ my $clean_prefix = escape_regexp($prefix);
+
+ if (defined($prefix) && ($prefix ne ""))
+ {
+ if ($filename =~ /^$clean_prefix\/(.*)$/)
+ {
+ return substr($filename, length($prefix) + 1);
+ }
+ }
+
+ return $filename;
+}
+
+
+#
+# escape_regexp(string)
+#
+# Escape special characters in STRING which would be incorrectly interpreted
+# in a PERL regular expression.
+#
+
+sub escape_regexp($)
+{
+ my $string = $_[0];
+
+ # Escape special characters
+ $string =~ s/\\/\\\\/g;
+ $string =~ s/\^/\\\^/g;
+ $string =~ s/\$/\\\$/g;
+ $string =~ s/\./\\\./g;
+ $string =~ s/\|/\\\|/g;
+ $string =~ s/\(/\\\(/g;
+ $string =~ s/\)/\\\)/g;
+ $string =~ s/\[/\\\[/g;
+ $string =~ s/\]/\\\]/g;
+ $string =~ s/\*/\\\*/g;
+ $string =~ s/\?/\\\?/g;
+ $string =~ s/\{/\\\{/g;
+ $string =~ s/\}/\\\}/g;
+ $string =~ s/\+/\\\+/g;
+
+ return $string;
+}
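For comparison, Perl's built-in `quotemeta` (or `\Q...\E`) covers the same need: it escapes every non-word character, including `/` and `-`, while the sub above escapes only the listed metacharacters. Both produce a pattern that matches the input literally, as this standalone check (with `escape_regexp` reproduced verbatim) shows:

```perl
#!/usr/bin/perl
# escape_regexp() vs the built-in quotemeta(): the escaped strings
# differ, but each interpolates into a regex that matches the
# original string literally.
use strict;
use warnings;

sub escape_regexp($)
{
	my $string = $_[0];

	# Escape special characters
	$string =~ s/\\/\\\\/g;
	$string =~ s/\^/\\\^/g;
	$string =~ s/\$/\\\$/g;
	$string =~ s/\./\\\./g;
	$string =~ s/\|/\\\|/g;
	$string =~ s/\(/\\\(/g;
	$string =~ s/\)/\\\)/g;
	$string =~ s/\[/\\\[/g;
	$string =~ s/\]/\\\]/g;
	$string =~ s/\*/\\\*/g;
	$string =~ s/\?/\\\?/g;
	$string =~ s/\{/\\\{/g;
	$string =~ s/\}/\\\}/g;
	$string =~ s/\+/\\\+/g;

	return $string;
}

my $prefix = "/usr/src/linux-2.5.73/fs (test)";
my $e1 = escape_regexp($prefix);
my $e2 = quotemeta($prefix);

print(($prefix =~ /^$e1$/ ? "ok" : "FAIL"), " ",
      ($prefix =~ /^$e2$/ ? "ok" : "FAIL"), "\n");   # ok ok
```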
diff --git a/chromium/third_party/lcov-1.9/contrib/galaxy/posterize.pl b/chromium/third_party/lcov-1.9/contrib/galaxy/posterize.pl
new file mode 100755
index 00000000000..1b2895ede67
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/contrib/galaxy/posterize.pl
@@ -0,0 +1,312 @@
+#!/usr/bin/perl
+#
+# Copyright (c) International Business Machines Corp., 2002
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or (at
+# your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+#
+# posterize.pl
+#
+# This script generates a postscript file from output generated from the
+# fcgp http://sourceforge.net/projects/fcgp/ for plotting
+#
+#
+# History:
+# 2003-09-04 wrote - James M Kenefick Jr. <jkenefic@us.ibm.com>
+#
+
+
+
+# A good deal of this could be turned into CLI
+# arguments.
+
+# Constants
+my $Title = "Linux Kernel Coverage";
+my $KernelVersion = "2.5.73";
+my $TestDescription = "A Sample Print";
+my $Image = "../lgp/image.ps";
+
+# Variables
+my $Bounds = "";
+# Paper sizes in inches
+my $PAPER_WIDTH = 34;
+my $PAPER_HEIGHT = 42;
+
+# points per inch
+my $ppi = 72;
+
+# Margins
+my $TopMargin = 1;
+my $BottomMargin = 1.5;
+my $LeftMargin = 1;
+my $RightMargin = 1;
+
+
+$RightMargin = $PAPER_WIDTH - $RightMargin;
+$TopMargin = $PAPER_HEIGHT - $TopMargin;
+
+my $filename = "poster.ps";
+
+# Sizes in ppi
+my $PPI_WIDTH = ($PAPER_WIDTH * $ppi);
+my $PPI_HEIGHT = ($PAPER_HEIGHT * $ppi);
+
+# Date we create poster
+my $date = `date`;
+
+print STDERR "Creating Poster\n";
+
+open POSTER, ">$filename";
+
+
+
+print(POSTER <<END_OF_USAGE);
+%!PS-Adobe-1.0
+%%DocumentFonts: Helvetica Helvetica-Bold
+%%Title: Linux 2.4.0 Kernel Poster
+%%Creator: Rusty's scripts and postersize (GPL)
+%%CreationDate: $date
+%%Pages: 1
+%%BoundingBox: 0 0 $PPI_WIDTH $PPI_HEIGHT
+%%EndComments
+%!
+/PRorig_showpage_x178313 /showpage load def /showpage{
+ errordict /handleerror {} put
+ }def
+/initgraphics{}def/setpagedevice{pop}def
+statusdict begin /a4tray{}def /lettertray{}def end
+/a4{}def/a3{}def/a0{}def/letter{}def/legal{}def
+/a4small{}def /lettersmall{}def /a4tray{}def /lettertray{}def
+/setscreen{pop pop pop}def
+/ColorManagement {pop} def
+
+
+/A {gsave newpath 0 360 arc stroke grestore} bind def
+/M {moveto} bind def
+/L {lineto} bind def
+/D {[] 0 setdash} bind def
+/D5 {[5] 0 setdash} bind def
+/C0 {0 0 0 setrgbcolor} bind def
+/C1 {.8 .4 .4 setrgbcolor} bind def
+/C2 {.5 1 .5 setrgbcolor} bind def
+/C3 {0 .7 0 setrgbcolor} bind def
+/C4 {1 0 0 setrgbcolor} bind def
+/C5 {0 0 1 setrgbcolor} bind def
+/R {grestore} bind def
+/S {0 0 M stroke} bind def
+/T {gsave translate} bind def
+/U {C0 newpath 4 copy 4 2 roll 8 7 roll M L L L closepath stroke
+C1 findfont exch scalefont setfont M show} bind def
+
+% Added James M Kenefick Jr.
+/Hi_Color {0 0 1} def
+/Med_Color {0 .60 1} def
+/Lo_Color {0 1 1} def
+/None_Color {.75 .75 .75} def
+/Hi {newpath 4 copy 4 2 roll 8 7 roll M L L L Hi_Color setrgbcolor fill closepath} bind def
+/Med {newpath 4 copy 4 2 roll 8 7 roll M L L L Med_Color setrgbcolor fill closepath} bind def
+/Lo {newpath 4 copy 4 2 roll 8 7 roll M L L L Lo_Color setrgbcolor fill closepath} bind def
+/None {newpath 4 copy 4 2 roll 8 7 roll M L L L None_Color setrgbcolor fill closepath} bind def
+
+/inch
+{
+ 72 mul
+}
+def
+
+/LeftMargin $LeftMargin inch def
+/RightMargin $RightMargin inch def
+/TopMargin $TopMargin inch def
+/BottomMargin $BottomMargin inch def
+/FontScale 25 def
+/AuthorFontScale 70 def
+
+/centerText
+{
+ dup
+ stringwidth pop
+ 2 div
+ RightMargin LeftMargin sub 2 div
+ exch sub
+ LeftMargin add
+ NextLine moveto
+ show
+}
+def
+
+/upLine
+{
+ /NextLine
+ NextLine LineSpace2 add
+ def
+}
+def
+
+/advanceLine
+{
+ /NextLine
+ NextLine LineSpace sub
+ def
+}
+def
+
+/fontScale
+{
+ TopMargin BottomMargin sub FontScale div
+}
+def
+
+/authorfontScale
+{
+ TopMargin BottomMargin sub AuthorFontScale div
+}
+def
+
+/rightJustify
+{
+ dup
+ stringwidth pop
+ RightMargin 1 inch sub
+ exch sub
+ NextLine moveto
+ show
+}
+def
+
+/usableY
+{
+ TopMargin LineSpace 3 mul sub BottomMargin sub
+}
+def
+
+/usableX
+{
+ RightMargin LeftMargin sub
+}
+def
+gsave
+/Times-Roman findfont fontScale scalefont setfont
+/LineSpace fontScale def
+/NextLine (B) stringwidth pop TopMargin exch sub def
+
+%%EndProlog
+%%Page 1
+% title
+
+($Title) centerText advanceLine
+(Kernel: $KernelVersion) centerText advanceLine
+($TestDescription) centerText
+
+% Author Block
+LeftMargin BottomMargin translate
+/Times-Roman findfont authorfontScale scalefont setfont
+/LineSpace2 authorfontScale def
+/NextLine 0 def
+(Based on work by Rusty Russell, Christian Reiniger) rightJustify
+upLine
+(By James M. Kenefick Jr.) rightJustify
+
+grestore
+LeftMargin BottomMargin translate
+
+% Key Block
+15 15 scale
+% This is the key for the graph.
+
+/box { newpath moveto 0 1 rlineto 2 0 rlineto 0 -1 rlineto closepath } def
+/key { setrgbcolor 2 copy box gsave fill grestore 0 0 0 setrgbcolor strokepath fill moveto 2.4 0.25 rmoveto show } def
+
+/Helvetica-Oblique findfont
+1 scalefont setfont
+0.1 setlinewidth
+
+(static functions) 1 5 0.5 1 0.5 key % Light green.
+(indirectly called functions) 1 7 0 0.7 0 key % green
+(exported functions) 1 9 1 0 0 key % red
+(other functions) 1 11 0 0 1 key % blue
+
+(Low Coverage) 1 15 Lo_Color key % blue
+(Medium Coverage) 1 17 Med_Color key % blue
+(Hi Coverage) 1 19 Hi_Color key % blue
+(No Coverage) 1 21 None_Color key % blue
+1 3.25 moveto
+0.8 0.4 0.4 setrgbcolor
+/Helvetica findfont
+1 scalefont setfont
+(xxx) show
+1 3 moveto
+2.4 0.25 rmoveto
+0 0 0 setrgbcolor
+/Helvetica-Oblique findfont
+1 scalefont setfont
+(function name) show
+
+1 1.25 moveto
+0.8 0.4 0.4 setrgbcolor
+/Helvetica-Bold findfont
+1 scalefont setfont
+(xxx) show
+1 1 moveto
+2.4 0.25 rmoveto
+0 0 0 setrgbcolor
+/Helvetica-Oblique findfont
+1 scalefont setfont
+(source filename) show
+
+6 24 moveto
+/Helvetica-Bold findfont
+2 scalefont setfont
+(Key) show
+
+% Box around it
+newpath
+0.2 0.2 moveto
+0.2 27 lineto
+17 27 lineto
+17 0.2 lineto
+closepath
+strokepath fill
+
+
+1 15 div 1 15 div scale
+
+% find and move to center
+END_OF_USAGE
+
+# Find the bounds for the image
+
+$Bounds = `tail -1 $Image`;
+($Junk, $Junk, $minX, $minY, $maxX, $maxY) = split / /, $Bounds;
+
+my $xRange = $maxX - $minX;
+my $yRange = $maxY - $minY;
+
+if ($xRange < $yRange){
+ $Range = $xRange;
+} else {
+ $Range = $yRange;
+}
+print POSTER " 0 usableY usableX sub 2 div translate\n";
+print POSTER "usableX $Range div usableX $Range div scale\n";
+print POSTER "$Range 2 div $Range 2 div translate\n";
+print POSTER "gsave\n";
+# Paste in the actual image (the same file the bounds were read from).
+print POSTER `cat $Image`;
+print POSTER "%%Trailer\n";
+print POSTER "grestore\n";
+print POSTER "showpage\n";
+print POSTER "PRorig_showpage_x178313\n";
+print POSTER "/showpage /PRorig_showpage_x178313 load def\n";
+
diff --git a/chromium/third_party/lcov-1.9/descriptions.tests b/chromium/third_party/lcov-1.9/descriptions.tests
new file mode 100644
index 00000000000..b91fe3972c2
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/descriptions.tests
@@ -0,0 +1,2990 @@
+personality01
+ Check that we can set the personality for a process.
+personality02
+ Check that we get EINVAL for a bad personality.
+exit01
+ Check that exit returns the correct values to the waiting parent
+exit02
+ Check that exit flushes output file buffers and closes files upon
+ exiting
+wait02
+ Basic test for wait(2) system call.
+wait401
+ check that a call to wait4() correctly waits for a child
+ process to exit
+wait402
+ check for ECHILD errno when using an illegal pid value
+
+waitpid01
+ Check that when a child kills itself by generating an alarm
+ exception, the waiting parent is correctly notified.
+waitpid02
+ Check that when a child kills itself by generating an integer zero
+ divide exception, the waiting parent is correctly notified.
+waitpid03
+ Check that parent waits until specific child has returned.
+waitpid04
+ test to check the error conditions in waitpid sys call
+waitpid05
+ Check that when a child kills itself with a kill statement after
+ determining its process id by using getpid, the parent receives a
+ correct report of the cause of its death. This also indirectly
+ checks that getpid returns the correct process id.
+waitpid06
+ Tests to see if pid's returned from fork and waitpid are same.
+waitpid07
+ Tests to see if pid's returned from fork and waitpid are same.
+waitpid08
+ Tests to see if pid's returned from fork and waitpid are same
+waitpid09
+ Check ability of parent to wait until child returns, and that the
+ child's process id is returned through the waitpid. Check that
+ waitpid returns immediately if no child is present.
+waitpid10
+ Tests to see if pid's returned from fork and waitpid are same
+waitpid11
+ Tests to see if pid's returned from fork and waitpid are same
+waitpid12
+ Tests to see if pid's returned from fork and waitpid are same
+waitpid13
+ Tests to see if pid's returned from fork and waitpid are same
+fcntl01
+ Test F_DUPFD, F_SETFL cmds of fcntl
+fcntl02
+ Basic test for fcntl(2) using F_DUPFD argument.
+fcntl03
+ Basic test for fcntl(2) using F_GETFD argument.
+fcntl04
+ Basic test for fcntl(2) using F_GETFL argument.
+fcntl05
+ Basic test for fcntl(2) using F_GETLK argument.
+fcntl06
+ Error checking conditions for remote locking of regions of a file.
+fcntl07
+ Close-On-Exec functional test.
+fcntl07B
+ Close-On-Exec of named pipe functional test.
+fcntl08
+ Basic test for fcntl(2) using F_SETFL argument.
+fcntl09
+ Basic test for fcntl(2) using F_SETLK argument.
+fcntl10
+ Basic test for fcntl(2) using F_SETLKW argument.
+fcntl11
+ Testcase to check locking of regions of a file
+
+fcntl12
+
+ Testcase to test that fcntl() sets EMFILE for F_DUPFD command.
+
+fcntl13
+
+ Testcase to test that fcntl() sets errno correctly.
+
+fcntl14
+
+ File locking test cases for fcntl. In Linux, S_ENFMT is not implemented
+ in the kernel. However all standard Unix kernels define S_ENFMT as
+ S_ISGID. So this test defines S_ENFMT as S_ISGID.
+
+fcntl15
+
+ Check that file locks are removed when file closed
+
+fcntl16
+
+ Additional file locking test cases for checking proper notification
+ of processes on lock change
+
+fcntl17
+
+ Check deadlock detection for file locking
+
+fcntl18
+
+ Test to check the error conditions in fcntl system call
+
+fcntl19
+
+ Testcase to check locking of regions of a file
+
+fcntl20
+
+ Check locking of regions of a file
+
+fcntl21
+
+ Check locking of regions of a file
+
+dup01
+
+ Basic test for dup(2).
+
+dup02
+
+ Negative test for dup(2) with bad fd.
+
+dup03
+
+ Negative test for dup(2) (too many fds).
+
+dup04
+
+ Basic test for dup(2) of a system pipe descriptor.
+
+dup05
+
+ Basic test for dup(2) of a named pipe descriptor.
+
+dup201
+
+ Negative tests for dup2() with bad fd (EBADF), and for "too many
+ open files" (EMFILE)
+
+dup202
+
+ Is the access mode the same for both file descriptors?
+ 0: read-only  ("0444")
+ 1: write-only ("0222")
+ 2: read/write ("0666")
+
+dup203
+
+ Testcase to check the basic functionality of dup2().
+
+dup204
+
+ Testcase to check the basic functionality of dup2(2).
+
+
+msync01
+
+ Verify that msync() succeeds when the region to synchronize is part of,
+ or all of, a mapped region.
+
+msync02
+
+ Verify that msync() succeeds when the region to synchronize is mapped
+ shared and the flags argument is MS_INVALIDATE.
+
+msync03
+
+ Verify that msync() fails when the region to synchronize is outside
+ the address space of the process.
+
+msync04
+
+ Verify that msync() fails when the region to synchronize is mapped
+ but the flags argument is invalid.
+
+msync05
+
+ Verify that msync() fails when the region to synchronize was not
+ mapped.
+
+
+sendfile02
+
+ Testcase to test the basic functionality of the sendfile(2) system call.
+
+sendfile03
+
+ Testcase to test that sendfile(2) system call returns appropriate
+ errnos on error.
+
+fork01
+ Basic test for fork(2).
+fork02
+ Test correct operation of fork:
+ pid == 0 in child;
+ pid > 0 in parent from wait;
+fork03
+ Check that child can use a large text space and do a large
+ number of operations.
+fork04
+ Child inheritance of Environment Variables after fork().
+fork05
+ Make sure LDT is propagated correctly
+fork06
+ Test that a process can fork children a large number of
+ times in succession
+fork07
+ Check that all children inherit parent's file descriptor
+fork08
+ Check if the parent's file descriptors are affected by
+ actions in the child; they should not be.
+fork09
+ Check that child has access to a full set of files.
+fork10
+ Check inheritance of file descriptor by children, they
+ should all be referring to the same file.
+fork11
+ Test that parent gets a pid from each child when doing wait
+vfork01
+ Fork a process using vfork() and verify that attribute values such as
+ euid, ruid, suid, egid, rgid, sgid, umask, and the inode and device
+ number of the root and current working directories are the same as
+ those of the parent process.
+vfork02
+ Fork a process using vfork() and verify that signals pending in the
+ parent are not pending in the child process.
+ioctl01
+
+ Testcase to check the errnos set by the ioctl(2) system call.
+
+ioctl02
+
+ Testcase to test the TCGETA, and TCSETA ioctl implementations for
+ the tty driver
+
+sockioctl01
+
+ Verify that ioctl() on sockets returns the proper errno for various
+ failure cases
+
+getitimer01
+ check that a correct call to getitimer() succeeds
+
+
+getitimer02
+ check that a getitimer() call fails as expected
+ with an incorrect second argument.
+
+getitimer03
+ check that a getitimer() call fails as expected
+ with an incorrect first argument.
+
+setitimer01
+ check that a reasonable setitimer() call succeeds.
+
+
+setitimer02
+ check that a setitimer() call fails as expected
+ with incorrect values.
+
+setitimer03
+ check that a setitimer() call fails as expected
+ with incorrect values.
+
+float_trigo
+ increase CPU workload - verify that the results of some math functions are stable:
+ trigonometric (acos, asin, atan, atan2, cos, sin, tan),
+ hyperbolic (cosh, sinh, tanh),
+
+float_exp_log
+ increase CPU workload - verify that the results of some math functions are stable:
+ exponential and logarithmic functions (exp, log, log10),
+ Functions that manipulate floating-point numbers (modf, ldexp, frexp),
+ Euclidean distance function (hypot),
+
+float_bessel
+ increase CPU workload - verify that the results of some math functions are stable:
+ Bessel (j0, j1, y0, y1),
+ Computes the natural logarithm of the gamma function (lgamma),
+
+fload_power
+ increase CPU workload - verify that the results of some math functions are stable:
+ Computes sqrt, power, fmod
+
+float_iperb
+ increase CPU workload - verify that the results of some math functions are stable.
+pth_str01
+
+ Creates a tree of threads
+
+pth_str02
+
+ Creates n threads
+
+pth_str03
+
+ Creates a tree of threads, does calculations, and
+ returns the result to the parent
+
+
+asyncio02
+
+ Write/close flushes data to the file.
+
+
+fpathconf
+
+ Basic test for fpathconf(2)
+
+gethostid01
+
+ Basic test for gethostid(2)
+
+
+pathconf01
+
+ Basic test for pathconf(2)
+
+setpgrp01
+
+ Basic test for the setpgrp(2) system call.
+
+setpgrp02
+
+ Testcase to check the basic functionality of the setpgrp(2) syscall.
+
+
+ulimit01
+
+ Basic test for the ulimit(2) system call.
+
+mmstress
+
+ Performs general stress testing with race conditions
+
+mmap1
+
+ Test the LINUX memory manager. The program is aimed at
+ stressing the memory manager by simultaneous map/unmap/read
+ by lightweight processes; the test is scheduled to run for
+ a minimum of 24 hours.
+
+mmap2
+
+ Test the LINUX memory manager. The program is aimed at
+ stressing the memory manager by repeated map/write/unmap of a
+ large (GB-sized) file.
+
+mmap3
+
+ Test the LINUX memory manager. The program is aimed at
+ stressing the memory manager by repeated map/write/unmap
+ of file/memory of random size (maximum 1 GB); this is done by
+ multiple processes.
+
+mmap001
+
+ Tests mmapping a big file and writing it once
+
+mmap01
+
+ Verify that mmap() succeeds when used to map a file whose size is not
+ a multiple of the page size, and that the memory area beyond the end
+ of the file up to the end of the page is accessible. Also verify that
+ this area is all zeroed and that modifications made to this area are
+ not written back to the file.
+
+mmap02
+
+ Call mmap() with prot parameter set to PROT_READ and with the file
+ descriptor being open for read, to map a file creating mapped memory
+ with read access. The minimum file permissions should be 0444.
+
+mmap03
+
+ Call mmap() to map a file creating a mapped region with execute access
+ under the following conditions -
+ - The prot parameter is set to PROT_EXEC
+ - The file descriptor is open for read
+ - The file being mapped has execute permission bit set.
+ - The minimum file permissions should be 0555.
+
+ The call should succeed to map the file creating mapped memory with the
+ required attributes.
+
+mmap04
+
+ Call mmap() to map a file creating a mapped region with read/exec access
+ under the following conditions -
+ - The prot parameter is set to PROT_READ|PROT_EXEC
+ - The file descriptor is open for read
+ - The file being mapped has read and execute permission bit set.
+ - The minimum file permissions should be 0555.
+
+ The call should succeed to map the file creating mapped memory with the
+ required attributes.
+
+
+mmap05
+
+ Call mmap() to map a file creating mapped memory with no access under
+ the following conditions -
+ - The prot parameter is set to PROT_NONE
+ - The file descriptor is open for read (any mode other than write)
+ - The minimum file permissions should be 0444.
+
+ The call should succeed to map the file creating mapped memory with the
+ required attributes.
+
+mmap06
+
+ Call mmap() to map a file creating a mapped region with read access
+ under the following conditions -
+ - The prot parameter is set to PROT_READ
+ - The file descriptor is open for writing.
+
+ The call should fail to map the file.
+
+
+mmap07
+
+ Call mmap() to map a file creating a mapped region with read access
+ under the following conditions -
+ - The prot parameter is set to PROT_WRITE
+ - The file descriptor is open for writing.
+ - The flags parameter has MAP_PRIVATE set.
+
+ The call should fail to map the file.
+
+mmap08
+
+ Verify that mmap() fails to map a file creating a mapped region
+ when the file specified by file descriptor is not valid.
+
+
+mremap01
+
+ Verify that, mremap() succeeds when used to expand the existing
+ virtual memory mapped region to the requested size where the
+ virtual memory area was previously mapped to a file using mmap().
+
+mremap02
+
+ Verify that
+ mremap() fails when used to expand the existing virtual memory mapped
+ region to the requested size, if the virtual memory area previously
+ mapped was not page-aligned or an invalid argument was specified.
+
+mremap03
+
+ Verify that,
+ mremap() fails when used to expand the existing virtual memory mapped
+ region to the requested size, if there already exists mappings that
+ cover the whole address space requested or the old address specified was
+ not mapped.
+
+mremap04
+
+ Verify that,
+ mremap() fails when used to expand the existing virtual memory mapped
+ region to the requested size, if the memory area cannot be expanded at
+ the current virtual address and MREMAP_MAYMOVE flag not set.
+
+munmap01
+
+ Verify that a munmap() call succeeds in unmapping a mapped file or
+ anonymous shared memory region from the calling process's address
+ space, and that after successful completion of munmap() the unmapped
+ region is no longer accessible.
+
+munmap02
+
+ Verify that a munmap() call succeeds in unmapping a mapped file or
+ anonymous shared memory region from the calling process's address space
+ if the region specified by the address and length is part or all of
+ the mapped region.
+
+munmap03
+
+ Verify that a munmap() call fails to unmap a mapped file or anonymous
+ shared memory region from the calling process's address space if the
+ address and length of the region to be unmapped point outside the
+ calling process's address space.
+
+brk01
+ Test the basic functionality of brk.
+
+sbrk01
+ Basic test for the sbrk(2) system call.
+
+
+mprotect01
+
+ Testcase to check the error conditions for mprotect(2)
+
+mprotect02
+
+ Testcase to check the mprotect(2) system call.
+
+mprotect03
+
+ Testcase to check the mprotect(2) system call.
+
+msgctl01
+ create a message queue, then issue the IPC_STAT command
+ and RMID commands to test the functionality
+
+
+msgctl02
+ create a message queue, then issue the IPC_SET command
+ to lower the msg_qbytes value.
+
+
+msgctl03
+ create a message queue, then issue the IPC_RMID command
+
+
+
+msgctl04
+ test for EACCES, EFAULT and EINVAL errors using
+ a variety of incorrect calls.
+
+
+msgctl05
+ test for EPERM error
+
+
+
+msgget01
+ create a message queue, write a message to it and
+ read it back.
+
+
+msgget02
+ test for EEXIST and ENOENT errors
+
+
+msgget03
+ test for an ENOSPC error by using up all available
+ message queues.
+
+msgget04
+ test for an EACCES error by creating a message queue
+ with no read or write permission and then attempting
+ to access it with various permissions.
+
+msgrcv01
+ test that msgrcv() receives the expected message
+
+msgrcv02
+ test for EACCES and EFAULT errors
+
+msgrcv03
+ test for EINVAL error
+
+msgrcv04
+ test for E2BIG and ENOMSG errors
+
+msgrcv05
+ test for EINTR error
+
+msgrcv06
+ test for EIDRM error
+
+msgsnd01
+ test that msgsnd() enqueues a message correctly
+
+msgsnd02
+ test for EACCES and EFAULT errors
+
+msgsnd03
+ test for EINVAL error
+
+msgsnd04
+ test for EAGAIN error
+
+msgsnd05
+ test for EINTR error
+
+
+msgsnd06
+ test for EIDRM error
+
+link02
+
+ Basic test for link(2)
+
+link03
+
+ Multiple-link tests
+
+link04
+
+ Negative test cases for link(2)
+
+link05
+
+ Multi links (EMLINK) negative test
+
+readlink01
+
+ Verify that readlink() succeeds in reading the contents of a symbolic
+ link created by the process.
+
+readlink02
+
+ Basic test for the readlink(2) system call
+
+readlink03
+
+ Verify that,
+ 1) readlink(2) returns -1 and sets errno to EACCES if search/write
+ permission is denied in the directory where the symbolic link
+ resides.
+ 2) readlink(2) returns -1 and sets errno to EINVAL if the buffer size
+ is not positive.
+ 3) readlink(2) returns -1 and sets errno to EINVAL if the specified
+ file is not a symbolic link file.
+ 4) readlink(2) returns -1 and sets errno to ENAMETOOLONG if the
+ pathname component of symbolic link is too long (ie, > PATH_MAX).
+ 5) readlink(2) returns -1 and sets errno to ENOENT if the component of
+ symbolic link points to an empty string.
+
+readlink04
+
+ Verify that a readlink() call succeeds in reading the contents of a
+ symbolic link when invoked by a non-root user who is not the owner of
+ the symbolic link.
+
+
+symlink01
+
+ Test of various file function calls, such as rename or open, on a symbolic
+ link file.
+
+symlink02
+
+ Basic test for the symlink(2) system call.
+
+symlink03
+
+ Verify that,
+ 1) symlink(2) returns -1 and sets errno to EACCES if search/write
+ permission is denied in the directory where the symbolic link is
+ being created.
+ 2) symlink(2) returns -1 and sets errno to EEXIST if the specified
+ symbolic link already exists.
+ 3) symlink(2) returns -1 and sets errno to EFAULT if the specified
+ file or symbolic link points to invalid address.
+ 4) symlink(2) returns -1 and sets errno to ENAMETOOLONG if the
+ pathname component of symbolic link is too long (ie, > PATH_MAX).
+ 5) symlink(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname of symbolic link is not a directory.
+ 6) symlink(2) returns -1 and sets errno to ENOENT if the component of
+ symbolic link points to an empty string.
+
+symlink04
+
+ Verify that symlink() succeeds in creating a symbolic link to an
+ existing object path.
+
+
+symlink05
+
+ Verify that symlink() succeeds in creating a symbolic link to a
+ non-existing object path.
+
+
+unlink05
+
+ Basic test for the unlink(2) system call.
+
+unlink06
+
+ Test for the unlink(2) system call of a FIFO.
+
+unlink07
+
+ Tests for error handling for the unlink(2) system call.
+
+unlink08
+
+ More tests for error handling for the unlink(2) system call.
+
+
+linktest
+
+ Regression test for max links per file
+
+rename01
+
+ This test will verify the rename(2) syscall's basic functionality.
+ Verify rename() works when the "new" file or directory does not exist.
+
+rename02
+
+ Basic test for the rename(2) system call
+
+rename03
+
+ This test will verify that rename(2) functions correctly
+ when the "new" file or directory exists
+
+rename04
+
+ This test will verify that rename(2) fails when newpath is
+ a non-empty directory and returns EEXIST or ENOTEMPTY
+
+rename05
+
+ This test will verify that rename(2) fails with EISDIR
+
+rename06
+
+ This test will verify that rename(2) fails with EINVAL
+
+rename07
+
+ This test will verify that rename(2) fails with ENOTDIR
+
+rename08
+
+ This test will verify that the rename(2) syscall fails with EFAULT
+
+rename09
+
+ check rename() fails with EACCES
+
+rename10
+
+ This test will verify that rename(2) syscall fails with ENAMETOOLONG
+ and ENOENT
+
+rename11
+
+ This test will verify that rename(2) fails with EBUSY
+
+rename12
+
+ check rename() fails with EPERM
+
+rename13
+
+ Verify that rename() returns successfully and performs no other action
+ when the "old" file and "new" file link to the same file.
+
+rmdir01
+
+ This test will verify the rmdir(2) syscall's basic functionality:
+ verify that rmdir(2) returns a value of 0 and that the directory
+ is removed
+
+rmdir02
+
+ This test will verify that rmdir(2) fails with
+ 1. ENOTEMPTY
+ 2. EBUSY
+ 3. ENAMETOOLONG
+ 4. ENOENT
+ 5. ENOTDIR
+ 6. EFAULT
+ 7. EFAULT
+
+rmdir03
+
+ check rmdir() fails with EPERM or EACCES
+
+rmdir04
+
+ Basic test for the rmdir(2) system call
+
+rmdir05
+
+ Verify that rmdir(2) returns a value of -1 and sets errno to indicate the error.
+
+
+
+mkdir01
+
+ Basic errno test for mkdir(2)
+
+mkdir02
+
+ This test will verify that a new directory created
+ by mkdir(2) inherits the group ID from the parent
+ directory, along with the S_ISGID bit if the S_ISGID bit
+ is set in the parent directory.
+
+mkdir03
+
+ Check mkdir() with various error conditions that should produce
+ EFAULT, ENAMETOOLONG, EEXIST, ENOENT and ENOTDIR
+
+mkdir04
+
+ Attempt to create a directory in a directory having no permissions.
+
+mkdir05
+
+ This test will verify the mkdir(2) syscall basic functionality
+
+mkdir08
+
+ Basic test for mkdir(2)
+
+
+mknod01
+
+ Basic test for mknod(2)
+
+mknod02
+
+ Verify that mknod(2) succeeds when used to create a filesystem
+ node with set group-ID bit set on a directory without set group-ID bit set.
+ The node created should have set group-ID bit set and its gid should be
+ equal to that of its parent directory.
+
+mknod03
+
+ Verify that mknod(2) succeeds when used to create a filesystem
+ node with set group-ID bit set on a directory with set group-ID bit set.
+ The node created should have set group-ID bit set and its gid should be
+ equal to the effective gid of the process.
+
+mknod04
+
+ Verify that mknod(2) succeeds when used to create a filesystem
+ node on a directory with set group-ID bit set.
+ The node created should not have group-ID bit set and its gid should be
+ equal to the effective gid of the process.
+
+mknod05
+
+ Verify that mknod(2) succeeds when used by root to create a filesystem
+ node with set group-ID bit set on a directory with set group-ID bit set.
+ The node created should have set group-ID bit set and its gid should be
+ equal to that of its parent directory.
+
+
+mknod06
+
+ Verify that,
+ 1) mknod(2) returns -1 and sets errno to EEXIST if specified path
+ already exists.
+ 2) mknod(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 3) mknod(2) returns -1 and sets errno to ENOENT if the directory
+ component in pathname does not exist.
+ 4) mknod(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component was too long.
+ 5) mknod(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+
+mknod07
+
+ Verify that,
+ 1) mknod(2) returns -1 and sets errno to EPERM if the process id of
+ the caller is not super-user.
+ 2) mknod(2) returns -1 and sets errno to EACCES if parent directory
+ does not allow write permission to the process.
+
+mknod08
+
+ Verify that mknod(2) succeeds when used to create a filesystem
+ node on a directory without set group-ID bit set. The node created
+ should not have set group-ID bit set and its gid should be equal to that
+ of its parent directory.
+
+
+
+
+access01
+
+ Basic test for access(2) using F_OK, R_OK, W_OK, and X_OK arguments.
+
+access02
+
+ Verify that access() succeeds to check the read/write/execute permissions
+ on a file if the mode argument passed was R_OK/W_OK/X_OK.
+
+ Also verify that, access() succeeds to test the accessibility of the file
+ referred to by symbolic link if the pathname points to a symbolic link.
+
+access03
+
+ EFAULT error testing for access(2).
+
+access04
+
+ Verify that,
+ 1. access() fails with -1 return value and sets errno to EACCES
+ if the permission bits of the file mode do not permit the
+ requested (Read/Write/Execute) access.
+ 2. access() fails with -1 return value and sets errno to EINVAL
+ if the specified access mode argument is invalid.
+ 3. access() fails with -1 return value and sets errno to EFAULT
+ if the pathname points outside the allocated address space of the
+ process.
+ 4. access() fails with -1 return value and sets errno to ENOENT
+ if the specified file doesn't exist (or pathname is NULL).
+ 5. access() fails with -1 return value and sets errno to ENAMETOOLONG
+ if the pathname size is > PATH_MAX characters.
+
+access05
+
+ Verify that access() succeeds to check the existence of a file if
+ search access is permitted on the pathname of the specified file.
+
+access06
+
+ EFAULT error testing for access(2).
+
+chroot01
+
+ Testcase to check whether chroot sets errno to EPERM.
+
+chroot02
+
+ Test functionality of chroot(2)
+
+chroot03
+
+ Testcase to test whether chroot(2) sets errno correctly.
+
+pipeio
+
+ This tool can be used to beat on system or named pipes.
+ See the help() function below for user information.
+
+pipe01
+
+ Testcase to check the basic functionality of the pipe(2) syscall:
+ Check that both ends of the pipe (both file descriptors) are
+ available to a process opening the pipe.
+
+pipe05
+
+ Check what happens when pipe is passed a bad file descriptor.
+
+pipe06
+
+ Check what happens when the system runs out of pipes.
+
+pipe08
+
+ Check that a SIGPIPE signal is generated when a write is
+ attempted on a pipe whose read end has been closed.
+
+pipe09
+
+ Check that two processes can use the same pipe at the same time.
+
+pipe10
+
+ Check that parent can open a pipe and have a child read from it
+
+pipe11
+
+ Check if many children can read what is written to a pipe by the
+ parent.
+
+
+sem01
+
+ Creates a semaphore and two processes. The processes
+ each go through a loop where they semdown, delay for a
+ random amount of time, and semup, so they will almost
+ always be fighting for control of the semaphore.
+
+sem02
+ The application creates several threads using pthread_create().
+ One thread performs a semop() with the SEM_UNDO flag set. The
+ change in semaphore value performed by that semop should be
+ "undone" only when the last pthread exits.
+
+
+semctl01
+
+ test the 10 possible semctl() commands
+
+semctl02
+
+ test for EACCES error
+
+semctl03
+
+ test for EINVAL and EFAULT errors
+
+semctl04
+
+ test for EPERM error
+
+
+semctl05
+
+ test for ERANGE error
+
+semget01
+
+ test that semget() correctly creates a semaphore set
+
+semget02
+
+ test for EACCES and EEXIST errors
+
+semget03
+
+ test for ENOENT error
+
+semget05
+
+ test for ENOSPC error
+
+semget06
+
+ test for EINVAL error
+
+semop01
+
+ test that semop() basic functionality is correct
+
+semop02
+
+ test for E2BIG, EACCES, EFAULT and EINVAL errors
+
+semop03
+
+ test for EFBIG error
+
+semop04
+
+ test for EAGAIN error
+
+semop05
+
+ test for EINTR and EIDRM errors
+
+
+
+
+shmat01
+ test that shmat() works correctly
+
+shmat02
+ check for EINVAL and EACCES errors
+
+
+shmat03
+ test for EACCES error
+
+
+shmctl01
+ test the IPC_STAT, IPC_SET and IPC_RMID commands as
+ they are used with shmctl()
+
+
+shmctl02
+ check for EACCES, EFAULT and EINVAL errors
+
+
+shmctl03
+ check for EACCES, and EPERM errors
+
+
+shmdt01
+ check that shared memory is detached correctly
+
+
+shmdt02
+ check for EINVAL error
+
+
+shmget01
+ test that shmget() correctly creates a shared memory segment
+
+
+shmget02
+ check for ENOENT, EEXIST and EINVAL errors
+
+
+shmget03
+ test for ENOSPC error
+
+
+shmget04
+ test for EACCES error
+
+
+shmget05
+ test for EACCES error
+
+openfile
+
+ Creates files and opens them simultaneously
+
+open01
+
+ Open a file with oflag = O_CREAT set; does it set the sticky bit off?
+
+ Open "/tmp" with O_DIRECTORY; does it set the S_IFDIR bit on?
+
+open02
+
+ Test if open without O_CREAT returns -1 if a file does not exist.
+
+open03
+
+ Basic test for open(2)
+
+open04
+
+ Testcase to check that open(2) sets EMFILE if a process opens more
+ files than its descriptor limit allows
+
+open05
+
+ Testcase to check open(2) sets errno to EACCES correctly.
+
+open06
+
+ Testcase to check open(2) sets errno to ENXIO correctly.
+
+open07
+
+ Test the open(2) system call to ensure that it sets ELOOP correctly.
+
+open08
+
+ Check for the following errors:
+ 1. EEXIST
+ 2. EISDIR
+ 3. ENOTDIR
+ 4. ENAMETOOLONG
+ 5. EFAULT
+ 6. ETXTBSY
+
+
+
+chdir01
+
+ Check proper operation of chdir(): tests whether the
+ system call can change the current working directory and find a
+ file there, and whether it fails on a non-directory entry.
+
+chdir02
+
+ Basic test for chdir(2).
+
+chdir03
+
+ Testcase to check that chdir(2) sets errno to EACCES
+
+chdir04
+
+ Testcase to test whether chdir(2) sets errno correctly.
+
+
+chmod01
+
+ Verify that, chmod(2) succeeds when used to change the mode permissions
+ of a file.
+
+chmod02
+
+ Basic test for chmod(2).
+
+chmod03
+
+ Verify that, chmod(2) will succeed to change the mode of a file
+ and set the sticky bit on it if invoked by non-root (uid != 0)
+ process with the following constraints,
+ - the process is the owner of the file.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the file.
+
+chmod04
+
+ Verify that, chmod(2) will succeed to change the mode of a directory
+ and set the sticky bit on it if invoked by non-root (uid != 0) process
+ with the following constraints,
+ - the process is the owner of the directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the directory.
+
+chmod05
+
+ Verify that, chmod(2) will succeed to change the mode of a directory
+ but fails to set the setgid bit on it if invoked by non-root (uid != 0)
+ process with the following constraints,
+ - the process is the owner of the directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is not equal to the group ID of the directory.
+
+chmod06
+
+ Verify that,
+ 1) chmod(2) returns -1 and sets errno to EPERM if the effective user id
+ of process does not match the owner of the file and the process is
+ not super user.
+ 2) chmod(2) returns -1 and sets errno to EACCES if search permission is
+ denied on a component of the path prefix.
+ 3) chmod(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 4) chmod(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component is too long.
+ 5) chmod(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+ 6) chmod(2) returns -1 and sets errno to ENOENT if the specified file
+ does not exist.
+
+chmod07
+
+ Verify that, chmod(2) will succeed to change the mode of a file/directory
+ and sets the sticky bit on it if invoked by root (uid = 0) process with
+ the following constraints,
+ - the process is not the owner of the file/directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the file/directory.
+
+
+chown01
+
+ Basic test for chown(2).
+
+chown02
+
+ Verify that, when chown(2) invoked by super-user to change the owner and
+ group of a file specified by path to any numeric owner(uid)/group(gid)
+ values,
+ - clears setuid and setgid bits set on an executable file.
+ - preserves setgid bit set on a non-group-executable file.
+
+chown03
+
+ Verify that, chown(2) succeeds to change the group of a file specified
+ by path when called by non-root user with the following constraints,
+ - euid of the process is equal to the owner of the file.
+ - the intended gid is either egid, or one of the supplementary gids
+ of the process.
+ Also, verify that chown() clears the setuid/setgid bits set on the file.
+
+chown04
+
+ Verify that,
+ 1) chown(2) returns -1 and sets errno to EPERM if the effective user id
+ of process does not match the owner of the file and the process is
+ not super user.
+ 2) chown(2) returns -1 and sets errno to EACCES if search permission is
+ denied on a component of the path prefix.
+ 3) chown(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 4) chown(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component is too long.
+ 5) chown(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+ 6) chown(2) returns -1 and sets errno to ENOENT if the specified file
+ does not exist.
+
+chown05
+
+ Verify that, chown(2) succeeds to change the owner and group of a file
+ specified by path to any numeric owner(uid)/group(gid) values when invoked
+ by super-user.
+
+
+close01
+
+ Test that closing a regular file and a pipe works correctly
+
+close02
+
+ Check that an invalid file descriptor returns EBADF
+
+close08
+
+ Basic test for close(2).
+
+
+fchdir01
+
+ create a directory and cd into it.
+
+fchdir02
+
+ try to cd into a bad directory (bad fd).
+
+
+fchmod01
+
+ Basic test for fchmod(2).
+
+fchmod02
+
+ Verify that, fchmod(2) will succeed to change the mode of a file/directory
+ and set the sticky bit on it if invoked by a root (uid = 0) process with
+ the following constraints,
+ - the process is not the owner of the file/directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the file/directory.
+
+fchmod03
+
+ Verify that, fchmod(2) will succeed to change the mode of a file
+ and set the sticky bit on it if invoked by non-root (uid != 0)
+ process with the following constraints,
+ - the process is the owner of the file.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the file.
+
+fchmod04
+
+ Verify that, fchmod(2) will succeed to change the mode of a directory
+ and set the sticky bit on it if invoked by non-root (uid != 0) process
+ with the following constraints,
+ - the process is the owner of the directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is equal to the group ID of the directory.
+
+fchmod05
+
+ Verify that, fchmod(2) will succeed to change the mode of a directory
+ but fails to set the setgid bit on it if invoked by non-root (uid != 0)
+ process with the following constraints,
+ - the process is the owner of the directory.
+ - the effective group ID or one of the supplementary group ID's of the
+ process is not equal to the group ID of the directory.
+
+fchmod06
+
+ Verify that,
+ 1) fchmod(2) returns -1 and sets errno to EPERM if the effective user id
+ of process does not match the owner of the file and the process is
+ not super user.
+ 2) fchmod(2) returns -1 and sets errno to EBADF if the file descriptor
+ of the specified file is not valid.
+
+fchmod07
+
+ Verify that, fchmod(2) succeeds when used to change the mode permissions
+ of a file specified by file descriptor.
+
+
+fchown01
+
+ Basic test for fchown(2).
+
+fchown02
+
+ Verify that, when fchown(2) invoked by super-user to change the owner and
+ group of a file specified by file descriptor to any numeric
+ owner(uid)/group(gid) values,
+ - clears setuid and setgid bits set on an executable file.
+ - preserves setgid bit set on a non-group-executable file.
+
+fchown03
+
+ Verify that, fchown(2) succeeds to change the group of a file specified
+ by path when called by non-root user with the following constraints,
+ - euid of the process is equal to the owner of the file.
+ - the intended gid is either egid, or one of the supplementary gids
+ of the process.
+ Also, verify that fchown() clears the setuid/setgid bits set on the file.
+
+fchown04
+
+ Verify that,
+ 1) fchown(2) returns -1 and sets errno to EPERM if the effective user id
+ of process does not match the owner of the file and the process is
+ not super user.
+ 2) fchown(2) returns -1 and sets errno to EBADF if the file descriptor
+ of the specified file is not valid.
+
+fchown05
+
+ Verify that, fchown(2) succeeds to change the owner and group of a file
+ specified by file descriptor to any numeric owner(uid)/group(gid) values
+ when invoked by super-user.
+
+lchown01
+
+ Verify that, lchown(2) succeeds to change the owner and group of a file
+ specified by path to any numeric owner(uid)/group(gid) values when invoked
+ by super-user.
+
+
+lchown02
+
+ Verify that,
+ 1) lchown(2) returns -1 and sets errno to EPERM if the effective user id
+ of process does not match the owner of the file and the process is
+ not super user.
+ 2) lchown(2) returns -1 and sets errno to EACCES if search permission is
+ denied on a component of the path prefix.
+ 3) lchown(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 4) lchown(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component is too long.
+ 5) lchown(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+ 6) lchown(2) returns -1 and sets errno to ENOENT if the specified file
+	   does not exist.
+
+
+creat01
+
+ Testcase to check the basic functionality of the creat(2) system call.
+
+creat03
+
+	Testcase to check whether the sticky bit is cleared.
+
+creat04
+
+ Testcase to check creat(2) fails with EACCES
+
+creat05
+
+ Testcase to check that creat(2) system call returns EMFILE.
+
+creat06
+
+ Testcase to check creat(2) sets the following errnos correctly:
+ 1. EISDIR
+ 2. ENAMETOOLONG
+ 3. ENOENT
+ 4. ENOTDIR
+ 5. EFAULT
+ 6. EACCES
+
+creat07
+
+ Testcase to check creat(2) sets the following errnos correctly:
+ 1. ETXTBSY
+
+creat09
+
+ Basic test for creat(2) using 0700 argument.
+
+truncate01
+
+ Verify that, truncate(2) succeeds to truncate a file to a specified
+ length.
+
+
+truncate02
+
+ Verify that, truncate(2) succeeds to truncate a file to a certain length,
+ but the attempt to read past the truncated length will fail.
+
+
+truncate03
+
+ Verify that,
+ 1) truncate(2) returns -1 and sets errno to EACCES if search/write
+ permission denied for the process on the component of the path prefix
+ or named file.
+ 2) truncate(2) returns -1 and sets errno to ENOTDIR if the component of
+ the path prefix is not a directory.
+ 3) truncate(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+	4) truncate(2) returns -1 and sets errno to ENAMETOOLONG if a component
+	   of the pathname exceeds 255 characters or the entire pathname exceeds
+	   1023 characters.
+ 5) truncate(2) returns -1 and sets errno to ENOENT if the named file
+ does not exist.
+
+ftruncate01
+
+	Verify that, ftruncate(2) succeeds to truncate a file to a specified
+	length if the file indicated by the file descriptor is opened for writing.
+
+ftruncate02
+
+ Verify that, ftruncate(2) succeeds to truncate a file to a certain length,
+ but the attempt to read past the truncated length will fail.
+
+ftruncate03
+
+ Verify that,
+ 1) ftruncate(2) returns -1 and sets errno to EINVAL if the specified
+ truncate length is less than 0.
+ 2) ftruncate(2) returns -1 and sets errno to EBADF if the file descriptor
+ of the specified file is not valid.
+
+vhangup01
+
+	Check the return value and errno of vhangup(2)
+ when a non-root user calls vhangup().
+
+vhangup02
+
+	To test the basic functionality of vhangup(2).
+
+growfiles
+
+ This program will grow a list of files.
+ Each file will grow by grow_incr before the same
+	file grows twice. Each file is opened and closed before the next file is opened.
+
+pipe01
+
+ Testcase to check the basic functionality of the pipe(2) syscall:
+ Check that both ends of the pipe (both file descriptors) are
+ available to a process opening the pipe.
+
+pipe05
+
+ Check what happens when pipe is passed a bad file descriptor.
+
+pipe06
+
+ Check what happens when the system runs out of pipes.
+
+pipe08
+
+	Check that a SIGPIPE signal is generated when a write is
+	attempted on a pipe whose read end has been closed.
+
+pipe09
+
+ Check that two processes can use the same pipe at the same time.
+
+pipe10
+
+	Check that the parent can open a pipe and have a child read from it.
+
+pipe11
+
+ Check if many children can read what is written to a pipe by the
+ parent.
+
+pipeio
+
+ This tool can be used to beat on system or named pipes.
+ See the help() function below for user information.
+
+ /ipc_stress/message_queue_test_01.c
+ /ipc_stress/pipe_test_01.c
+ /ipc_stress/semaphore_test_01.c
+ /ipc_stress/single_test_01.c
+
+proc01
+ Recursively reads all files within /proc filesystem.
+
+lftest
+ The purpose of this test is to verify the file size limitations of a filesystem.
+ It writes one buffer at a time and lseeks from the beginning of the file to the
+ end of the last write position. The intent is to test lseek64.
+
+
+llseek01
+
+ Verify that, llseek() call succeeds to set the file pointer position
+ to an offset larger than file size. Also, verify that any attempt
+ to write to this location fails.
+
+llseek02
+
+ Verify that,
+ 1. llseek() returns -1 and sets errno to EINVAL, if the 'Whence' argument
+ is not a proper value.
+ 2. llseek() returns -1 and sets errno to EBADF, if the file handle of
+ the specified file is not valid.
+
+lseek01
+
+ Basic test for lseek(2)
+
+lseek02
+
+ Negative test for lseek(2)
+
+lseek03
+
+ Negative test for lseek(2) whence
+
+lseek04
+
+ Negative test for lseek(2) of a fifo
+
+lseek05
+
+ Negative test for lseek(2) of a pipe
+
+lseek06
+
+ Verify that, lseek() call succeeds to set the file pointer position
+ to less than or equal to the file size, when a file is opened for
+ read or write.
+
+lseek07
+
+ Verify that, lseek() call succeeds to set the file pointer position
+ to more than the file size, when a file is opened for reading/writing.
+
+lseek08
+
+ Verify that, lseek() call succeeds to set the file pointer position
+ to the end of the file when 'whence' value set to SEEK_END and any
+ attempts to read from that position should fail.
+
+lseek09
+
+ Verify that, lseek() call succeeds to set the file pointer position
+ to the current specified location, when 'whence' value is set to
+ SEEK_CUR and the data read from the specified location should match
+ the expected data.
+
+lseek10
+
+ Verify that,
+ 1. lseek() returns -1 and sets errno to ESPIPE, if the file handle of
+ the specified file is associated with a pipe, socket, or FIFO.
+ 2. lseek() returns -1 and sets errno to EINVAL, if the 'Whence' argument
+ is not a proper value.
+ 3. lseek() returns -1 and sets errno to EBADF, if the file handle of
+ the specified file is not valid.
+
+rwtest
+
+ A wrapper for doio and iogen.
+
+doio
+ a general purpose io initiator with system call and
+ write logging. See doio.h for the structure which defines
+ what doio requests should look like.
+
+ Currently doio can handle read,write,reada,writea,ssread,
+ sswrite, and many varieties of listio requests.
+ For disk io, if the O_SSD flag is set doio will allocate
+ the appropriate amount of ssd and do the transfer - thus, doio
+ can handle all of the primitive types of file io.
+
+iogen
+ A tool for generating file/sds io for a doio process
+
+pread01
+
+ Verify the functionality of pread() by writing known data using pwrite()
+ to the file at various specified offsets and later read from the file from
+ various specified offsets, comparing the data read against the data
+ written.
+
+pread02
+
+ Verify that,
+ 1) pread() fails when attempted to read from an unnamed pipe.
+ 2) pread() fails if the specified offset position was invalid.
+
+
+pwrite01
+
+ Verify the functionality of pwrite() by writing known data using pwrite()
+ to the file at various specified offsets and later read from the file from
+ various specified offsets, comparing the data written against the data
+ read using read().
+
+pwrite02
+
+ Verify that,
+ 1) pwrite() fails when attempted to write to an unnamed pipe.
+ 2) pwrite() fails if the specified offset position was invalid.
+
+
+read01
+
+ Basic test for the read(2) system call
+
+read02
+
+ test 1: Does read return -1 if file descriptor is not valid, check for EBADF
+
+ test 2: Check if read sets EISDIR, if the fd refers to a directory
+
+ test 3: Check if read sets EFAULT, if buf is -1.
+
+read03
+
+ Testcase to check that read() sets errno to EAGAIN
+
+read04
+
+ Testcase to check if read returns the number of bytes read correctly.
+
+
+readv01
+
+ Testcase to check the basic functionality of the readv(2) system call.
+
+readv02
+
+ Testcase to check the error conditions of the readv(2) system call.
+
+write01
+
+ Basic test for write(2) system call.
+
+write02
+
+ Basic functionality test: does the return from write match the count
+ of the number of bytes written.
+
+
+write03
+
+ Testcase to check that write(2) doesn't corrupt a file when it fails
+
+write04
+
+ Testcase to check that write() sets errno to EAGAIN
+
+write05
+
+ Check the return value, and errnos of write(2)
+ - when the file descriptor is invalid - EBADF
+ - when the buf parameter is invalid - EFAULT
+ - on an attempt to write to a pipe that is not open for reading - EPIPE
+
+
+writev01
+
+ Testcase to check the basic functionality of writev(2) system call.
+
+
+writev02
+
+ In these testcases, writev() is called with partially valid data
+ to be written in a sparse file.
+
+
+writev03
+
+ The testcases are written calling writev() with partially valid data
+ to overwrite the contents, to write in the beginning and to write in
+ the end of the file.
+
+writev04
+
+ The testcases are written calling writev() with partially valid data
+ to overwrite the contents, to write in the beginning and to write in
+ the end of the file. This is same as writev03, but the length of
+ buffer used here is 8192 bytes.
+
+writev05
+
+ These testcases are written to test writev() on sparse files. This
+ is same as writev02. But the initial write() with valid data is
+ done at the beginning of the file.
+
+disktest
+
+ Does repeated accesses to a filespec and optionally writes to, reads from,
+ and verifies the data. By default, disktest makes assumptions about
+ the running environment which allows for a quick start of IO generation.
+ However, Disktest has a large number of command line options which can
+ be used to adapt the test for a variety of uses including data integrity,
+ medium integrity, performance, and simple application simulation.
+
+getdents01
+ get a directory entry
+
+getdents02
+ check that we get a failure with a bad file descriptor
+
+
+getdents03
+ check for an EINVAL error
+
+
+getdents04
+ check for an ENOTDIR error
+
+getdents05
+	check that we get a failure with a bad dirp address.
+
+process_stress
+ Spawn creates a tree
+ of processes with Dval depth and Bval breadth. Each parent will spawn
+ Bval children. Each child will store information about themselves
+ in shared memory. The leaf nodes will communicate the existence
+ of one another through message queues, once each leaf node has
+ received communication from all of her siblings she will reduce
+ the semaphore count and exit. Meanwhile all parents are waiting
+ to hear from their children through the use of semaphores. When
+ the semaphore count reaches zero then the parent knows all the
+	children have talked to one another. Locking of the counter semaphore
+ is provided by the use of another (binary) semaphore.
+
+sched_stress
+ Exports required environment variables and runs sched_driver
+sched_driver
+ This program uses system calls to change the
+ priorities of the throughput measurement testcases.
+ When real-time is in effect, priorities 50 through 64
+ are used. (MAX_PRI and MIN_PRI) When user-time
+ (normal) is in effect, 0-14 (corresponding to nice()
+ calls) is used. The driver only keeps track of
+ values from 50 to 64, and the testcases will scale
+ them down to 0 to 14 when needed, to change the
+ priority of a user-time process.
+
+time-schedule
+ This programme will determine the context switch
+ (scheduling) overhead on a system. It takes into
+ account SMP machines. True context switches are
+ measured.
+trace_sched
+ This utility spawns N tasks, each task sets its priority
+ by making a system call to the scheduler. The thread
+ function reads the priority that the scheduler sets for
+	this task and also reads from /proc the processor this
+	task last executed on. The information gathered by the
+	thread function may not be in real time; it is only an
+	approximation.
+
+sched_getscheduler01
+
+ Testcase to check sched_getscheduler() returns correct return value
+
+sched_getscheduler02
+
+ To check for the errno ESRCH
+
+
+sched_setscheduler01
+
+ Testcase to test whether sched_setscheduler(2) sets the errnos
+ correctly.
+
+sched_setscheduler02
+
+ Testcase to test whether sched_setscheduler(2) sets the errnos
+ correctly.
+
+
+sched_yield01
+
+ Testcase to check that sched_yield returns correct values.
+
+
+nice01
+
+ Verify that root can provide a negative value to nice()
+ and hence root can decrease the nice value of the process
+ using nice() system call
+
+nice02
+
+ Verify that any user can successfully increase the nice value of
+ the process by passing a higher increment value (> max. applicable limits)
+ to nice() system call.
+
+nice03
+
+ Verify that any user can successfully increase the nice value of
+ the process by passing an increment value (< max. applicable limits) to
+ nice() system call.
+
+nice04
+
+	Verify that, nice(2) fails when a non-root user attempts to increase
+ the priority of a process by specifying a negative increment value.
+
+nice05
+
+ Basic test for nice(2)
+
+
+poll01
+
+	Verify that a valid open file descriptor must be provided to poll()
+	for it to succeed.
+
+select01
+
+ Basic test for the select(2) system call to a fd of regular file with no I/O
+ and small timeout
+
+select02
+
+ Basic test for the select(2) system call to fd of system pipe with no I/O
+ and small timeout
+
+select03
+
+ Basic test for the select(2) system call to fd of a named-pipe (FIFO)
+
+select04
+
+ Verify that select(2) returns immediately (does not block) if the
+ timeout value is zero.
+
+select05
+
+ Verify that select(2) fails when one or more of the file descriptor sets
+ specify a file descriptor which is not valid.
+
+select06
+
+ Verify that select(2) fails when a signal is delivered before any of the
+ selected events occur and before the timeout interval expires.
+
+select07
+
+ Verify that select(2) fails when an invalid timeout interval is specified.
+
+select08
+
+ Verify the functionality of select(2) by passing non-null writefds
+	which point to regular files, pipes or FIFOs.
+
+select09
+
+ Verify the functionality of select(2) by passing non-null readfds
+	which point to regular files, pipes or FIFOs.
+
+select10
+
+ Verify that a successful call to select() shall return the desired
+ number of modified descriptors for which bits are set in the bit masks,
+	where the descriptors point to regular files, pipes or FIFOs.
+
+sem01
+
+ Creates a semaphore and two processes. The processes
+ each go through a loop where they semdown, delay for a
+ random amount of time, and semup, so they will almost
+ always be fighting for control of the semaphore.
+
+sem02
+ The application creates several threads using pthread_create().
+ One thread performs a semop() with the SEM_UNDO flag set. The
+ change in semaphore value performed by that semop should be
+ "undone" only when the last pthread exits.
+
+
+semctl01
+
+ test the 10 possible semctl() commands
+
+semctl02
+
+ test for EACCES error
+
+semctl03
+
+ test for EINVAL and EFAULT errors
+
+semctl04
+
+ test for EPERM error
+
+
+semctl05
+
+ test for ERANGE error
+
+semget01
+
+ test that semget() correctly creates a semaphore set
+
+semget02
+
+ test for EACCES and EEXIST errors
+
+semget03
+
+ test for ENOENT error
+
+semget05
+
+ test for ENOSPC error
+
+semget06
+
+ test for EINVAL error
+
+semop01
+
+ test that semop() basic functionality is correct
+
+semop02
+
+ test for E2BIG, EACCES, EFAULT and EINVAL errors
+
+semop03
+
+ test for EFBIG error
+
+semop04
+
+ test for EAGAIN error
+
+semop05
+
+ test for EINTR and EIDRM errors
+
+
+shmat01
+ test that shmat() works correctly
+
+shmat02
+ check for EINVAL and EACCES errors
+
+
+shmat03
+ test for EACCES error
+
+
+shmctl01
+ test the IPC_STAT, IPC_SET and IPC_RMID commands as
+ they are used with shmctl()
+
+
+shmctl02
+ check for EACCES, EFAULT and EINVAL errors
+
+
+shmctl03
+ check for EACCES, and EPERM errors
+
+
+shmdt01
+ check that shared memory is detached correctly
+
+
+shmdt02
+ check for EINVAL error
+
+
+shmget01
+ test that shmget() correctly creates a shared memory segment
+
+
+shmget02
+ check for ENOENT, EEXIST and EINVAL errors
+
+
+shmget03
+ test for ENOSPC error
+
+
+shmget04
+ test for EACCES error
+
+
+shmget05
+	test for EACCES error
+
+shmat1
+
+ Test the LINUX memory manager. The program is aimed at
+ stressing the memory manager by repeated shmat/write/read/
+	shmdt of file/memory of random size (maximum 1000 * 4096)
+ done by multiple processes.
+
+shm_test
+
+	This program is designed to stress the memory management
+	subsystem of Linux. This program will spawn multiple pairs of
+ reader and writer threads. One thread will create the shared
+	segment of random size and write to this memory, while the other
+	will read from this memory.
+
+sigaction01
+
+ Test some features of sigaction (see below for more details)
+
+
+sigaction02
+
+ Testcase to check the basic errnos set by the sigaction(2) syscall.
+
+
+sigaltstack01
+
+ Send a signal using the main stack. While executing the signal handler
+ compare a variable's address lying on the main stack with the stack
+ boundaries returned by sigaltstack().
+
+
+sigaltstack02
+
+ Verify that,
+ 1. sigaltstack() fails and sets errno to EINVAL when "ss_flags" field
+ pointed to by 'ss' contains invalid flags.
+ 2. sigaltstack() fails and sets errno to ENOMEM when the size of alternate
+ stack area is less than MINSIGSTKSZ.
+
+sighold02
+
+	Basic test for the sighold(2) system call.
+
+
+signal01
+ set the signal handler to our own function
+
+
+signal02
+ Test that we get an error using illegal signals
+
+signal03
+
+ Boundary value and other invalid value checking of signal setup and signal
+ sending.
+
+
+signal04
+ restore signals to default behavior
+
+
+signal05
+ set signals to be ignored
+
+
+sigprocmask01
+
+ Verify that sigprocmask() succeeds to examine and change the calling
+ process's signal mask.
+	Also, verify that sigpending() succeeds to store the set of signals that
+	are blocked from delivery and pending for the calling process.
+
+sigrelse01
+
+ Basic test for the sigrelse(2) system call.
+
+sigsuspend01
+
+ Verify that sigsuspend() succeeds to change process's current signal
+ mask with the specified signal mask and suspends the process execution
+ until the delivery of a signal.
+kill01
+
+ Test case to check the basic functionality of kill().
+
+kill02
+
+ Sending a signal to processes with the same process group ID
+
+kill03
+
+ Test case to check that kill fails when given an invalid signal.
+
+kill04
+
+ Test case to check that kill() fails when passed a non-existent pid.
+
+kill05
+
+ Test case to check that kill() fails when passed a pid owned by another
+ user.
+
+kill06
+
+ Test case to check the basic functionality of kill() when killing an
+ entire process group with a negative pid.
+
+kill07
+
+	Test case to check that SIGKILL cannot be caught.
+
+kill08
+
+ Test case to check the basic functionality of kill() when kill an
+ entire process group.
+
+kill09
+ Basic test for kill(2)
+
+kill10
+ Signal flooding test.
+
+
+mtest01
+	mallocs memory <chunksize> at a time until malloc fails.
+
+mallocstress
+
+	This program is designed to stress the VMM by doing repeated
+	mallocs and frees, without using the swap space. This is
+	achieved by spawning N threads which repeatedly malloc and free
+	memory of size M. The stress can be increased by increasing
+	the number of repetitions over the default number using the
+	-l [num] option.
+
+clisrv
+
+ Sender: Read contents of data file. Write each line to socket, then
+ read line back from socket and write to standard output.
+ Receiver: Read a stream socket one line at a time and write each line
+ back to the sender.
+ Usage: pthcli [port number]
+
+
+socket01
+
+ Verify that socket() returns the proper errno for various failure cases
+
+
+socketpair01
+
+ Verify that socketpair() returns the proper errno for various failure cases
+
+
+sockioctl01
+
+ Verify that ioctl() on sockets returns the proper errno for various
+ failure cases
+
+connect01
+
+ Verify that connect() returns the proper errno for various failure cases
+
+getpeername01
+
+ Verify that getpeername() returns the proper errno for various failure cases
+
+
+getsockname01
+
+ Verify that getsockname() returns the proper errno for various failure cases
+
+getsockopt01
+
+ Verify that getsockopt() returns the proper errno for various failure cases
+
+listen01
+
+ Verify that listen() returns the proper errno for various failure cases
+
+accept01
+
+ Verify that accept() returns the proper errno for various failure cases
+
+bind01
+
+ Verify that bind() returns the proper errno for various failure cases
+
+
+recv01
+
+ Verify that recv() returns the proper errno for various failure cases
+
+
+recvfrom01
+
+ Verify that recvfrom() returns the proper errno for various failure cases
+
+
+recvmsg01
+
+ Verify that recvmsg() returns the proper errno for various failure cases
+
+send01
+
+ Verify that send() returns the proper errno for various failure cases
+
+sendmsg01
+
+	Verify that sendmsg() returns the proper errno for various failure cases
+
+sendto01
+
+ Verify that sendto() returns the proper errno for various failure cases
+
+setsockopt01
+
+ Verify that setsockopt() returns the proper errno for various failure cases
+
+
+fstat01
+
+ Basic test for fstat(2)
+
+fstat02
+
+ Verify that, fstat(2) succeeds to get the status of a file and fills
+	the stat structure elements even though the file pointed to by the
+	file descriptor is not opened for reading.
+
+fstat03
+
+	Verify that, fstat(2) returns -1 and sets errno to EBADF if the
+	file descriptor is not valid.
+
+fstat04
+
+ Verify that, fstat(2) succeeds to get the status of a file pointed by
+ file descriptor and fills the stat structure elements.
+
+
+fstatfs01
+
+ Basic test for fstatfs(2)
+
+fstatfs02
+
+ Testcase to check fstatfs() sets errno correctly.
+
+lstat01
+
+ Verify that, lstat(2) succeeds to get the status of a file pointed to by
+ symlink and fills the stat structure elements.
+
+lstat02
+
+ Basic test for lstat(2)
+
+lstat03
+
+ Verify that,
+ 1) lstat(2) returns -1 and sets errno to EACCES if search permission is
+ denied on a component of the path prefix.
+ 2) lstat(2) returns -1 and sets errno to ENOENT if the specified file
+	   does not exist or the pathname is an empty string.
+ 3) lstat(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 4) lstat(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component is too long.
+ 5) lstat(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+
+stat01
+
+ Verify that, stat(2) succeeds to get the status of a file and fills the
+ stat structure elements.
+
+stat02
+
+ Verify that, stat(2) succeeds to get the status of a file and fills the
+ stat structure elements though process doesn't have read access to the
+ file.
+
+
+stat03
+
+ Verify that,
+ 1) stat(2) returns -1 and sets errno to EACCES if search permission is
+ denied on a component of the path prefix.
+ 2) stat(2) returns -1 and sets errno to ENOENT if the specified file
+	   does not exist or the pathname is an empty string.
+ 3) stat(2) returns -1 and sets errno to EFAULT if pathname points
+ outside user's accessible address space.
+ 4) stat(2) returns -1 and sets errno to ENAMETOOLONG if the pathname
+ component is too long.
+ 5) stat(2) returns -1 and sets errno to ENOTDIR if the directory
+ component in pathname is not a directory.
+
+stat05
+
+	Basic test for the stat(2) system call.
+
+statfs01
+
+ Basic test for the statfs(2) system call.
+
+statfs02
+
+ Testcase to check that statfs(2) sets errno correctly.
+
+
+umask01
+
+ Basic test for the umask(2) system call.
+
+umask02
+
+ Check that umask changes the mask, and that the previous
+ value of the mask is returned correctly for each value.
+
+umask03
+
+ Check that umask changes the mask, and that the previous
+ value of the mask is returned correctly for each value.
+
+
+
+getgroups01
+
+ Getgroups system call critical test
+
+getgroups02
+
+ Basic test for getgroups(2)
+
+getgroups03
+
+ Verify that, getgroups() system call gets the supplementary group IDs
+ of the calling process.
+
+getgroups04
+
+ Verify that,
+	getgroups() fails with -1 and sets errno to EINVAL if the size
+	argument value is negative.
+
+gethostname01
+
+ Basic test for gethostname(2)
+
+
+getpgid01
+
+ Testcase to check the basic functionality of getpgid().
+
+getpgid02
+
+ Testcase to check the basic functionality of getpgid().
+
+getpgrp01
+ Basic test for getpgrp(2)
+
+
+getpriority01
+
+	Verify that getpriority() succeeds to get the scheduling priority of
+ the current process, process group or user.
+
+
+getpriority02
+
+ Verify that,
+	1) getpriority() sets errno to ESRCH if no process was located
+	   for the 'which' and 'who' arguments.
+ 2) getpriority() sets errno to EINVAL if 'which' argument was
+ not one of PRIO_PROCESS, PRIO_PGRP, or PRIO_USER.
+
+getresgid01
+
+	Verify that getresgid() will be successful to get the real, effective
+	and saved group ids of the calling process.
+
+getresgid02
+
+	Verify that getresgid() will be successful to get the real, effective
+	and saved group ids after the calling process invokes setregid() to change
+	the effective/saved gids to those of the specified group.
+
+getresgid03
+
+	Verify that getresgid() will be successful to get the real, effective
+	and saved group ids after the calling process invokes setresgid() to change
+	the effective gid to that of the specified group.
+
+
+getresuid01
+
+ Verify that getresuid() will be successful to get the real, effective
+ and saved user id of the calling process.
+
+getresuid02
+
+ Verify that getresuid() will be successful to get the real, effective
+ and saved user ids after calling process invokes setreuid() to change
+ the effective/saved uids to that of specified user.
+
+getresuid03
+
+ Verify that getresuid() will be successful to get the real, effective
+ and saved user ids after calling process invokes setresuid() to change
+ the effective uid to that of specified user.
+
+
+getsid01
+
+ call getsid() and make sure it succeeds
+
+getsid02
+
+ call getsid() with an invalid PID to produce a failure
+
+
+setfsgid01
+
+ Testcase to check the basic functionality of setfsgid(2) system
+ call.
+
+setfsuid01
+
+ Testcase to test the basic functionality of the setfsuid(2) system
+ call.
+
+
+setgid01
+
+ Basic test for the setgid(2) system call.
+
+setgid02
+
+ Testcase to ensure that the setgid() system call sets errno to EPERM
+
+
+setgroups01
+
+ Basic test for the setgroups(2) system call.
+
+setgroups02
+
+ Verify that,
+ 1. setgroups() fails with -1 and sets errno to EINVAL if the size
+ argument value is > NGROUPS
+ 2. setgroups() fails with -1 and sets errno to EPERM if the
+ calling process is not super-user.
+
+setgroups03
+
+ Verify that, only root process can invoke setgroups() system call to
+ set the supplementary group IDs of the process.
+
+
+setpgid01
+
+ Basic test for setpgid(2) system call.
+
+setpgid02
+
+ Testcase to check that setpgid() sets errno correctly.
+
+setpgid03
+
+ Test to check the error and trivial conditions in setpgid system call
+
+setpriority01
+
+ set the priority for the test process lower.
+
+setpriority02
+
+ test for an expected failure by trying to raise
+ the priority for the test process while not having
+ permissions to do so.
+
+setpriority03
+
+ test for an expected failure by using an invalid
+ PRIO value
+
+setpriority04
+ test for an expected failure by using an invalid
+ process id
+
+
+setpriority05
+ test for an expected failure by trying to change
+ a process with an ID that is different from the
+ test process
+
+setregid01
+
+ Basic test for the setregid(2) system call.
+
+setregid02
+
+ Test that setregid() fails and sets the proper errno values when a
+ non-root user attempts to change the real or effective group id to a
+ value other than the current gid or the current effective gid.
+
+setregid03
+
+ Test setregid() when executed by a non-root user.
+
+setregid04
+
+ Test setregid() when executed by root.
+
+setresuid01
+
+ Test setresuid() when executed by root.
+
+setresuid02
+
+ Test that a non-root user can change the real, effective and saved
+ uid values through the setresuid system call.
+
+
+setresuid03
+
+ Test that the setresuid system call sets the proper errno
+ values when a non-root user attempts to change the real, effective or
+	saved uid to a value other than one of the current uid, the current
+	effective uid, or the current saved uid. Also verify that setresuid
+ fails if an invalid uid value is given.
+
+setreuid01
+
+ Basic test for the setreuid(2) system call.
+
+setreuid02
+
+ Test setreuid() when executed by root.
+
+setreuid03
+
+ Test setreuid() when executed by an unprivileged user.
+
+
+setreuid04
+
+ Test that root can change the real and effective uid to an
+ unprivileged user.
+
+setreuid05
+
+ Test the setreuid() feature, verifying the role of the saved-set-uid
+ and setreuid's effect on it.
+
+setreuid06
+
+ Test that EINVAL is set when setreuid is given an invalid user id.
+
+setrlimit01
+
+ Testcase to check the basic functionality of the setrlimit system call.
+
+
+setrlimit02
+
+ Testcase to test the different errnos set by setrlimit(2) system call.
+
+setrlimit03
+
+ Test for EPERM when the super-user tries to increase RLIMIT_NOFILE
+ beyond the system limit.
+
+setsid01
+
+ Test to check the error and trivial conditions in setsid system call
+
+setuid01
+
+ Basic test for the setuid(2) system call.
+
+setuid02
+
+ Basic test for the setuid(2) system call as root.
+
+setuid03
+
+ Test to check the error and trivial conditions in setuid
+
+fs_perms
+
+	Regression test for Linux filesystem permissions.
+
+uname01
+
+ Basic test for the uname(2) system call.
+
+uname02
+
+ Call uname() with an invalid address to produce a failure
+
+uname03
+
+	Call uname() and make sure it succeeds
+
+sysctl01
+
+ Testcase for testing the basic functionality of sysctl(2) system call.
+ This testcase attempts to read the kernel parameters using
+ sysctl({CTL_KERN, KERN_ }, ...) and compares it with the known
+ values.
+
+sysctl03
+
+ Testcase to check that sysctl(2) sets errno to EPERM correctly.
+
+
+sysctl04
+
+ Testcase to check that sysctl(2) sets errno to ENOTDIR
+
+
+sysctl05
+
+ Testcase to check that sysctl(2) sets errno to EFAULT
+
+time01
+
+ Basic test for the time(2) system call.
+
+
+time02
+
+ Verify that time(2) returns the value of time in seconds since
+ the Epoch and stores this value in the memory pointed to by the parameter.
+
+times01
+
+ Basic test for the times(2) system call.
+
+times02
+
+ Testcase to test that times() sets errno correctly
+
+times03
+
+ Testcase to check the basic functionality of the times() system call.
+
+utime01
+
+ Verify that the system call utime() successfully sets the modification
+ and access times of a file to the current time, if the times argument
+ is null, and the user ID of the process is "root".
+
+utime02
+
+ Verify that the system call utime() successfully sets the modification
+ and access times of a file to the current time, under the following
+ constraints:
+ - The times argument is null.
+ - The user ID of the process is not "root".
+ - The file is owned by the user ID of the process.
+
+utime03
+
+ Verify that the system call utime() successfully sets the modification
+ and access times of a file to the current time, under the following
+ constraints:
+ - The times argument is null.
+ - The user ID of the process is not "root".
+ - The file is not owned by the user ID of the process.
+ - The user ID of the process has write access to the file.
+
+
+utime04
+
+ Verify that the system call utime() successfully sets the modification
+ and access times of a file to the time specified by times argument, if
+ the times argument is not null, and the user ID of the process is "root".
+
+
+utime05
+
+ Verify that the system call utime() successfully sets the modification
+ and access times of a file to the value specified by the times argument
+ under the following constraints:
+ - The times argument is not null.
+ - The user ID of the process is not "root".
+ - The file is owned by the user ID of the process.
+
+
+utime06
+
+ 1. Verify that the system call utime() fails to set the modification
+ and access times of a file to the current time, under the following
+ constraints:
+ - The times argument is null.
+ - The user ID of the process is not "root".
+ - The file is not owned by the user ID of the process.
+ - The user ID of the process does not have write access to the
+ file.
+ 2. Verify that the system call utime() fails to set the modification
+ and access times of a file if the specified file doesn't exist.
+
+settimeofday01
+
+ Testcase to check the basic functionality of settimeofday().
+
+
+settimeofday02
+
+ Testcase to check that settimeofday() sets errnos correctly.
+
+stime01
+
+ Verify that the system call stime() successfully sets the system's idea
+ of date and time if invoked by the "root" user.
+
+stime02
+
+ Verify that the system call stime() fails to set the system's idea
+ of date and time if invoked by a "non-root" user.
+
+gettimeofday01
+
+ Testcase to check that gettimeofday(2) sets errno to EFAULT.
+
+
+
+alarm01
+
+ Basic test for alarm(2).
+
+alarm02
+
+ Boundary Value Test for alarm(2).
+
+alarm03
+
+ Check that a pending alarm(2) is cleared by a fork.
+
+alarm04
+
+ Check that when an alarm request is made, the signal SIGALRM is received
+ even after the process has done an exec().
+
+alarm05
+
+ Check the functionality of the alarm() system call when the time input
+ parameter is non-zero.
+
+alarm06
+
+ Check the functionality of the alarm() system call when the time input
+ parameter is zero.
+
+alarm07
+
+ Check the functionality of alarm() when the time input
+ parameter is non-zero and the process does a fork.
+
+getegid01
+
+ Basic test for getegid(2)
+
+
+geteuid01
+
+ Basic test for geteuid(2)
+
+
+getgid01
+
+ Basic test for getgid(2)
+
+getgid02
+
+ Testcase to check the basic functionality of getgid().
+
+getgid03
+
+ Testcase to check the basic functionality of getegid().
+
+
+getpid01
+
+ Basic test for getpid(2)
+
+
+getpid02
+
+ Verify that the getpid() system call gets the process ID of the
+ calling process.
+
+
+getppid01
+
+ Testcase to check the basic functionality of the getppid() syscall.
+
+
+getuid01
+
+ Basic test for getuid(2)
+
+getuid02
+
+ Testcase to check the basic functionality of the geteuid() system call.
+
+getuid03
+
+ Testcase to check the basic functionality of the getuid() system call.
+
+nanosleep01
+
+ Verify that nanosleep() successfully suspends the execution
+ of a process for a specified time.
+
+nanosleep02
+
+ Verify that nanosleep() successfully suspends the execution
+ of a process, returns after the receipt of a signal, and writes the
+ remaining sleep time into the structure.
+
+nanosleep03
+
+ Verify that nanosleep() fails to suspend the execution
+ of a process for a specified time if interrupted by a non-blocked signal.
+
+nanosleep04
+
+ Verify that nanosleep() fails to suspend the execution
+ of a process if the specified pause time is invalid.
+
diff --git a/chromium/third_party/lcov-1.9/example/Makefile b/chromium/third_party/lcov-1.9/example/Makefile
new file mode 100644
index 00000000000..5428237c239
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/Makefile
@@ -0,0 +1,98 @@
+#
+# Makefile for the LCOV example program.
+#
+# Make targets:
+# - example: compile the example program
+# - output: run test cases on example program and create HTML output
+# - clean: clean up directory
+#
+
+CC := gcc
+CFLAGS := -Wall -I. -fprofile-arcs -ftest-coverage
+
+LCOV := ../bin/lcov
+GENHTML := ../bin/genhtml
+GENDESC := ../bin/gendesc
+GENPNG := ../bin/genpng
+
+# Depending on the presence of the GD.pm perl module, we can use the
+# special option '--frames' for genhtml
+USE_GENPNG := $(shell $(GENPNG) --help >/dev/null 2>/dev/null; echo $$?)
+
+ifeq ($(USE_GENPNG),0)
+ FRAMES := --frames
+else
+ FRAMES :=
+endif
+
+.PHONY: clean output test_noargs test_2_to_2000 test_overflow
+
+all: output
+
+example: example.o iterate.o gauss.o
+ $(CC) example.o iterate.o gauss.o -o example -lgcov
+
+example.o: example.c iterate.h gauss.h
+ $(CC) $(CFLAGS) -c example.c -o example.o
+
+iterate.o: methods/iterate.c iterate.h
+ $(CC) $(CFLAGS) -c methods/iterate.c -o iterate.o
+
+gauss.o: methods/gauss.c gauss.h
+ $(CC) $(CFLAGS) -c methods/gauss.c -o gauss.o
+
+output: example descriptions test_noargs test_2_to_2000 test_overflow
+ @echo
+ @echo '*'
+ @echo '* Generating HTML output'
+ @echo '*'
+ @echo
+ $(GENHTML) trace_noargs.info trace_args.info trace_overflow.info \
+ --output-directory output --title "Basic example" \
+ --show-details --description-file descriptions $(FRAMES) \
+ --legend
+ @echo
+ @echo '*'
+ @echo '* See '`pwd`/output/index.html
+ @echo '*'
+ @echo
+
+descriptions: descriptions.txt
+ $(GENDESC) descriptions.txt -o descriptions
+
+all_tests: example test_noargs test_2_to_2000 test_overflow
+
+test_noargs:
+ @echo
+ @echo '*'
+ @echo '* Test case 1: running ./example without parameters'
+ @echo '*'
+ @echo
+ $(LCOV) --zerocounters --directory .
+ ./example
+ $(LCOV) --capture --directory . --output-file trace_noargs.info --test-name test_noargs
+
+test_2_to_2000:
+ @echo
+ @echo '*'
+ @echo '* Test case 2: running ./example 2 2000'
+ @echo '*'
+ @echo
+ $(LCOV) --zerocounters --directory .
+ ./example 2 2000
+ $(LCOV) --capture --directory . --output-file trace_args.info --test-name test_2_to_2000
+
+test_overflow:
+ @echo
+ @echo '*'
+ @echo '* Test case 3: running ./example 0 100000 (causes an overflow)'
+ @echo '*'
+ @echo
+ $(LCOV) --zerocounters --directory .
+ ./example 0 100000 || true
+ $(LCOV) --capture --directory . --output-file trace_overflow.info --test-name "test_overflow"
+
+clean:
+ rm -rf *.o *.bb *.bbg *.da *.gcno *.gcda *.info output example \
+ descriptions
+
diff --git a/chromium/third_party/lcov-1.9/example/README b/chromium/third_party/lcov-1.9/example/README
new file mode 100644
index 00000000000..cf6cf2e4c68
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/README
@@ -0,0 +1,6 @@
+
+To see what the LCOV-generated HTML output looks like,
+type 'make output' and point a web browser at the resulting file
+
+ output/index.html
+
diff --git a/chromium/third_party/lcov-1.9/example/descriptions.txt b/chromium/third_party/lcov-1.9/example/descriptions.txt
new file mode 100644
index 00000000000..47e6021310d
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/descriptions.txt
@@ -0,0 +1,10 @@
+test_noargs
+ Example program is called without arguments so that default range
+ [0..9] is used.
+
+test_2_to_2000
+ Example program is called with "2" and "2000" as arguments.
+
+test_overflow
+ Example program is called with "0" and "100000" as arguments. The
+ resulting sum is too large to be stored as an int variable.
diff --git a/chromium/third_party/lcov-1.9/example/example.c b/chromium/third_party/lcov-1.9/example/example.c
new file mode 100644
index 00000000000..f9049aa64ba
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/example.c
@@ -0,0 +1,60 @@
+/*
+ * example.c
+ *
+ * Calculate the sum of a given range of integer numbers. The range is
+ * specified by providing two integer numbers as command line argument.
+ * If no arguments are specified, assume the predefined range [0..9].
+ * Abort with an error message if the resulting number is too big to be
+ * stored as int variable.
+ *
+ * This program example is similar to the one found in the GCOV documentation.
+ * It is used to demonstrate the HTML output generated by LCOV.
+ *
+ * The program is split into 3 modules to better demonstrate the 'directory
+ * overview' function. There are also a lot of bloated comments inserted to
+ * artificially increase the source code size so that the 'source code
+ * overview' function makes at least a minimum of sense.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include "iterate.h"
+#include "gauss.h"
+
+static int start = 0;
+static int end = 9;
+
+
+int main (int argc, char* argv[])
+{
+ int total1, total2;
+
+ /* Accept a pair of numbers as command line arguments. */
+
+ if (argc == 3)
+ {
+ start = atoi(argv[1]);
+ end = atoi(argv[2]);
+ }
+
+
+ /* Use both methods to calculate the result. */
+
+ total1 = iterate_get_sum (start, end);
+ total2 = gauss_get_sum (start, end);
+
+
+ /* Make sure both results are the same. */
+
+ if (total1 != total2)
+ {
+ printf ("Failure (%d != %d)!\n", total1, total2);
+ }
+ else
+ {
+ printf ("Success, sum[%d..%d] = %d\n", start, end, total1);
+ }
+
+ return 0;
+}
diff --git a/chromium/third_party/lcov-1.9/example/gauss.h b/chromium/third_party/lcov-1.9/example/gauss.h
new file mode 100644
index 00000000000..302a4a98038
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/gauss.h
@@ -0,0 +1,6 @@
+#ifndef GAUSS_H
+#define GAUSS_H GAUSS_H
+
+extern int gauss_get_sum (int min, int max);
+
+#endif /* GAUSS_H */
diff --git a/chromium/third_party/lcov-1.9/example/iterate.h b/chromium/third_party/lcov-1.9/example/iterate.h
new file mode 100644
index 00000000000..471327951cf
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/iterate.h
@@ -0,0 +1,6 @@
+#ifndef ITERATE_H
+#define ITERATE_H ITERATE_H
+
+extern int iterate_get_sum (int min, int max);
+
+#endif /* ITERATE_H */
diff --git a/chromium/third_party/lcov-1.9/example/methods/gauss.c b/chromium/third_party/lcov-1.9/example/methods/gauss.c
new file mode 100644
index 00000000000..9da3ce50835
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/methods/gauss.c
@@ -0,0 +1,48 @@
+/*
+ * methods/gauss.c
+ *
+ * Calculate the sum of a given range of integer numbers.
+ *
+ * Somewhat of a more subtle way of calculation - and it even has a story
+ * behind it:
+ *
+ * Supposedly during math classes in elementary school, the teacher of
+ * young mathematician Gauss gave the class an assignment to calculate the
+ * sum of all natural numbers between 1 and 100, hoping that this task would
+ * keep the kids occupied for some time. The story goes that Gauss had the
+ * result ready after only a few minutes. What he had written on his black
+ * board was something like this:
+ *
+ * 1 + 100 = 101
+ * 2 + 99 = 101
+ * 3 + 98 = 101
+ * .
+ * .
+ * 100 + 1 = 101
+ *
+ * s = (1/2) * 100 * 101 = 5050
+ *
+ * A more general form of this formula would be
+ *
+ * s = (1/2) * (max + min) * (max - min + 1)
+ *
+ * which is used in the piece of code below to implement the requested
+ * function in constant time, i.e. without dependencies on the size of the
+ * input parameters.
+ *
+ */
+
+#include "gauss.h"
+
+
+int gauss_get_sum (int min, int max)
+{
+ /* This algorithm doesn't work well with invalid range specifications
+ so we're intercepting them here. */
+ if (max < min)
+ {
+ return 0;
+ }
+
+ return (int) ((max + min) * (double) (max - min + 1) / 2);
+}
diff --git a/chromium/third_party/lcov-1.9/example/methods/iterate.c b/chromium/third_party/lcov-1.9/example/methods/iterate.c
new file mode 100644
index 00000000000..023d1801c93
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/example/methods/iterate.c
@@ -0,0 +1,45 @@
+/*
+ * methods/iterate.c
+ *
+ * Calculate the sum of a given range of integer numbers.
+ *
+ * This particular method of implementation works by way of brute force,
+ * i.e. it iterates over the entire range while adding the numbers to finally
+ * get the total sum. As a positive side effect, we're able to easily detect
+ * overflows, i.e. situations in which the sum would exceed the capacity
+ * of an integer variable.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include "iterate.h"
+
+
+int iterate_get_sum (int min, int max)
+{
+ int i, total;
+
+ total = 0;
+
+ /* This is where we loop over each number in the range, including
+ both the minimum and the maximum number. */
+
+ for (i = min; i <= max; i++)
+ {
+ /* We can detect an overflow by checking whether the new
+ sum would become negative. */
+
+ if (total + i < total)
+ {
+ printf ("Error: sum too large!\n");
+ exit (1);
+ }
+
+ /* Everything seems to fit into an int, so continue adding. */
+
+ total += i;
+ }
+
+ return total;
+}
diff --git a/chromium/third_party/lcov-1.9/lcovrc b/chromium/third_party/lcov-1.9/lcovrc
new file mode 100644
index 00000000000..a53977f6622
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/lcovrc
@@ -0,0 +1,130 @@
+#
+# /etc/lcovrc - system-wide defaults for LCOV
+#
+# To change settings for a single user, place a customized copy of this file
+# at location ~/.lcovrc
+#
+
+# Specify an external style sheet file (same as --css-file option of genhtml)
+#genhtml_css_file = gcov.css
+
+# Specify coverage rate limits (in %) for classifying file entries
+# HI: hi_limit <= rate <= 100 graph color: green
+# MED: med_limit <= rate < hi_limit graph color: orange
+# LO: 0 <= rate < med_limit graph color: red
+genhtml_hi_limit = 90
+genhtml_med_limit = 75
+
+# Width of line coverage field in source code view
+genhtml_line_field_width = 12
+
+# Width of branch coverage field in source code view
+genhtml_branch_field_width = 16
+
+# Width of overview image (used by --frames option of genhtml)
+genhtml_overview_width = 80
+
+# Resolution of overview navigation: this number specifies the maximum
+# difference in lines between the position a user selected from the overview
+# and the position the source code window is scrolled to (used by --frames
+# option of genhtml)
+genhtml_nav_resolution = 4
+
+# Clicking a line in the overview image should show the source code view at
+# a position a bit further up so that the requested line is not the first
+# line in the window. This number specifies that offset in lines (used by
+# --frames option of genhtml)
+genhtml_nav_offset = 10
+
+# Do not remove unused test descriptions if non-zero (same as
+# --keep-descriptions option of genhtml)
+genhtml_keep_descriptions = 0
+
+# Do not remove prefix from directory names if non-zero (same as --no-prefix
+# option of genhtml)
+genhtml_no_prefix = 0
+
+# Do not create source code view if non-zero (same as --no-source option of
+# genhtml)
+genhtml_no_source = 0
+
+# Replace tabs with number of spaces in source view (same as --num-spaces
+# option of genhtml)
+genhtml_num_spaces = 8
+
+# Highlight lines with converted-only data if non-zero (same as --highlight
+# option of genhtml)
+genhtml_highlight = 0
+
+# Include color legend in HTML output if non-zero (same as --legend option of
+# genhtml)
+genhtml_legend = 0
+
+# Use FILE as HTML prolog for generated pages (same as --html-prolog option of
+# genhtml)
+#genhtml_html_prolog = FILE
+
+# Use FILE as HTML epilog for generated pages (same as --html-epilog option of
+# genhtml)
+#genhtml_html_epilog = FILE
+
+# Use custom filename extension for pages (same as --html-extension option of
+# genhtml)
+#genhtml_html_extension = html
+
+# Compress all generated html files with gzip.
+#genhtml_html_gzip = 1
+
+# Include sorted overview pages (can be disabled by the --no-sort option of
+# genhtml)
+genhtml_sort = 1
+
+# Include function coverage data display (can be disabled by the
+# --no-func-coverage option of genhtml)
+genhtml_function_coverage = 1
+
+# Include branch coverage data display (can be disabled by the
+# --no-branch-coverage option of genhtml)
+genhtml_branch_coverage = 1
+
+# Location of the gcov tool (same as --gcov-tool option of geninfo)
+#geninfo_gcov_tool = gcov
+
+# Adjust test names to include operating system information if non-zero
+#geninfo_adjust_testname = 0
+
+# Calculate checksum for each source code line if non-zero (same as --checksum
+# option of geninfo if non-zero, same as --no-checksum if zero)
+#geninfo_checksum = 1
+
+# Enable libtool compatibility mode if non-zero (same as --compat-libtool option
+# of geninfo if non-zero, same as --no-compat-libtool if zero)
+#geninfo_compat_libtool = 0
+
+# Directory containing gcov kernel files
+# lcov_gcov_dir = /proc/gcov
+
+# Location of the insmod tool
+lcov_insmod_tool = /sbin/insmod
+
+# Location of the modprobe tool
+lcov_modprobe_tool = /sbin/modprobe
+
+# Location of the rmmod tool
+lcov_rmmod_tool = /sbin/rmmod
+
+# Location for temporary directories
+lcov_tmp_dir = /tmp
+
+# Show full paths during list operation if non-zero (same as --list-full-path
+# option of lcov)
+lcov_list_full_path = 0
+
+# Specify the maximum width for list output. This value is ignored when
+# lcov_list_full_path is non-zero.
+lcov_list_width = 80
+
+# Specify the maximum percentage of file names which may be truncated when
+# choosing a directory prefix in list output. This value is ignored when
+# lcov_list_full_path is non-zero.
+lcov_list_truncate_max = 20
diff --git a/chromium/third_party/lcov-1.9/man/gendesc.1 b/chromium/third_party/lcov-1.9/man/gendesc.1
new file mode 100644
index 00000000000..64b805e1c72
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/gendesc.1
@@ -0,0 +1,78 @@
+.TH gendesc 1 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+.SH NAME
+gendesc \- Generate a test case description file
+.SH SYNOPSIS
+.B gendesc
+.RB [ \-h | \-\-help ]
+.RB [ \-v | \-\-version ]
+.RS 8
+.br
+.RB [ \-o | \-\-output\-filename
+.IR filename ]
+.br
+.I inputfile
+.SH DESCRIPTION
+Convert plain text test case descriptions into a format as understood by
+.BR genhtml .
+.I inputfile
+needs to observe the following format:
+
+For each test case:
+.IP " \-"
+one line containing the test case name beginning at the start of the line
+.RE
+.IP " \-"
+one or more lines containing the test case description indented with at
+least one whitespace character (tab or space)
+.RE
+
+.B Example input file:
+
+test01
+.RS
+An example test case description.
+.br
+Description continued
+.RE
+
+test42
+.RS
+Supposedly the answer to most of your questions
+.RE
+
+Note: valid test names can consist of letters, decimal digits and the
+underscore character ('_').
+.SH OPTIONS
+.B \-h
+.br
+.B \-\-help
+.RS
+Print a short help text, then exit.
+.RE
+
+.B \-v
+.br
+.B \-\-version
+.RS
+Print version number, then exit.
+.RE
+
+
+.BI "\-o " filename
+.br
+.BI "\-\-output\-filename " filename
+.RS
+Write description data to
+.IR filename .
+
+By default, output is written to STDOUT.
+.RE
+.SH AUTHOR
+Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+
+.SH SEE ALSO
+.BR lcov (1),
+.BR genhtml (1),
+.BR geninfo (1),
+.BR genpng (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/man/genhtml.1 b/chromium/third_party/lcov-1.9/man/genhtml.1
new file mode 100644
index 00000000000..fc5b9f0dda4
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/genhtml.1
@@ -0,0 +1,502 @@
+.TH genhtml 1 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+.SH NAME
+genhtml \- Generate HTML view from LCOV coverage data files
+.SH SYNOPSIS
+.B genhtml
+.RB [ \-h | \-\-help ]
+.RB [ \-v | \-\-version ]
+.RS 8
+.br
+.RB [ \-q | \-\-quiet ]
+.RB [ \-s | \-\-show\-details ]
+.RB [ \-f | \-\-frames ]
+.br
+.RB [ \-b | \-\-baseline\-file ]
+.IR baseline\-file
+.br
+.RB [ \-o | \-\-output\-directory
+.IR output\-directory ]
+.br
+.RB [ \-t | \-\-title
+.IR title ]
+.br
+.RB [ \-d | \-\-description\-file
+.IR description\-file ]
+.br
+.RB [ \-k | \-\-keep\-descriptions ]
+.RB [ \-c | \-\-css\-file
+.IR css\-file ]
+.br
+.RB [ \-p | \-\-prefix
+.IR prefix ]
+.RB [ \-\-no\-prefix ]
+.br
+.RB [ \-\-no\-source ]
+.RB [ \-\-num\-spaces
+.IR num ]
+.RB [ \-\-highlight ]
+.br
+.RB [ \-\-legend ]
+.RB [ \-\-html\-prolog
+.IR prolog\-file ]
+.br
+.RB [ \-\-html\-epilog
+.IR epilog\-file ]
+.RB [ \-\-html\-extension
+.IR extension ]
+.br
+.RB [ \-\-html\-gzip ]
+.RB [ \-\-sort ]
+.RB [ \-\-no\-sort ]
+.br
+.RB [ \-\-function\-coverage ]
+.RB [ \-\-no\-function\-coverage ]
+.br
+.RB [ \-\-branch\-coverage ]
+.RB [ \-\-no\-branch\-coverage ]
+.br
+.RB [ \-\-demangle\-cpp ]
+.br
+.IR tracefile(s)
+.RE
+.SH DESCRIPTION
+Create an HTML view of coverage data found in
+.IR tracefile .
+Note that
+.I tracefile
+may also be a list of filenames.
+
+HTML output files are created in the current working directory unless the
+\-\-output\-directory option is used. If
+.I tracefile
+ends with ".gz", it is assumed to be GZIP\-compressed and the gunzip tool
+will be used to decompress it transparently.
+
+Note that all source code files have to be present and readable at the
+exact file system location they were compiled.
+
+Use option
+.I \--css\-file
+to modify layout and colors of the generated HTML output. Files are
+marked in different colors depending on the associated coverage rate. By
+default, the coverage limits for low, medium and high coverage are set to
+0\-15%, 15\-50% and 50\-100%, respectively. To change these
+values, use configuration file options
+.IR genhtml_hi_limit " and " genhtml_med_limit .
+
+.SH OPTIONS
+.B \-h
+.br
+.B \-\-help
+.RS
+Print a short help text, then exit.
+
+.RE
+.B \-v
+.br
+.B \-\-version
+.RS
+Print version number, then exit.
+
+.RE
+.B \-q
+.br
+.B \-\-quiet
+.RS
+Do not print progress messages.
+
+Suppresses all informational progress output. When this switch is enabled,
+only error or warning messages are printed.
+
+.RE
+.B \-f
+.br
+.B \-\-frames
+.RS
+Use HTML frames for source code view.
+
+If enabled, a frameset is created for each source code file, providing
+an overview of the source code as a "clickable" image. Note that this
+option will slow down output creation noticeably because each source
+code character has to be inspected once. Note also that the GD.pm PERL
+module has to be installed for this option to work (it may be obtained
+from http://www.cpan.org).
+
+.RE
+.B \-s
+.br
+.B \-\-show\-details
+.RS
+Generate detailed directory view.
+
+When this option is enabled,
+.B genhtml
+generates two versions of each
+file view: one containing the standard information plus a link to a
+"detailed" version. The latter additionally contains information about
+which test case covered how many lines of each source file.
+
+.RE
+.BI "\-b " baseline\-file
+.br
+.BI "\-\-baseline\-file " baseline\-file
+.RS
+Use data in
+.I baseline\-file
+as coverage baseline.
+
+The tracefile specified by
+.I baseline\-file
+is read and all counts found in the original
+.I tracefile
+are decremented by the corresponding counts in
+.I baseline\-file
+before creating any output.
+
+Note that when a count for a particular line in
+.I baseline\-file
+is greater than the count in the
+.IR tracefile ,
+the result is zero.
+
+.RE
+.BI "\-o " output\-directory
+.br
+.BI "\-\-output\-directory " output\-directory
+.RS
+Create files in
+.I output\-directory.
+
+Use this option to tell
+.B genhtml
+to write the resulting files to a directory other than
+the current one. If
+.I output\-directory
+does not exist, it will be created.
+
+It is advisable to use this option since depending on the
+project size, a lot of files and subdirectories may be created.
+
+.RE
+.BI "\-t " title
+.br
+.BI "\-\-title " title
+.RS
+Display
+.I title
+in header of all pages.
+
+.I title
+is written to the header portion of each generated HTML page to
+identify the context in which a particular output
+was created. By default this is the name of the tracefile.
+
+.RE
+.BI "\-d " description\-file
+.br
+.BI "\-\-description\-file " description\-file
+.RS
+Read test case descriptions from
+.IR description\-file .
+
+All test case descriptions found in
+.I description\-file
+and referenced in the input data file are read and written to an extra page
+which is then incorporated into the HTML output.
+
+The file format of
+.IR "description\-file " is:
+
+for each test case:
+.RS
+TN:<testname>
+.br
+TD:<test description>
+
+.RE
+
+Valid test case names can consist of letters, numbers and the underscore
+character ('_').
+.RE
+.B \-k
+.br
+.B \-\-keep\-descriptions
+.RS
+Do not remove unused test descriptions.
+
+Keep descriptions found in the description file even if the coverage data
+indicates that the associated test case did not cover any lines of code.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_keep_descriptions .
+
+.RE
+.BI "\-c " css\-file
+.br
+.BI "\-\-css\-file " css\-file
+.RS
+Use external style sheet file
+.IR css\-file .
+
+Using this option, an extra .css file may be specified which will replace
+the default one. This may be helpful if the default colors make your eyes want
+to jump out of their sockets :)
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_css_file .
+
+.RE
+.BI "\-p " prefix
+.br
+.BI "\-\-prefix " prefix
+.RS
+Remove
+.I prefix
+from all directory names.
+
+Because lists containing long filenames are difficult to read, there is a
+mechanism implemented that will automatically try to shorten all directory
+names on the overview page beginning with a common prefix. By default,
+this is done using an algorithm that tries to find the prefix which, when
+applied, will minimize the resulting sum of characters of all directory
+names.
+
+Use this option to specify the prefix to be removed yourself.
+
+.RE
+.B \-\-no\-prefix
+.RS
+Do not remove prefix from directory names.
+
+This switch will completely disable the prefix mechanism described in the
+previous section.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_no_prefix .
+
+.RE
+.B \-\-no\-source
+.RS
+Do not create source code view.
+
+Use this switch if you don't want to get a source code view for each file.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_no_source .
+
+.RE
+.BI "\-\-num\-spaces " num
+.RS
+Replace tabs in source view with
+.I num
+spaces.
+
+Default value is 8.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_num_spaces .
+
+.RE
+.B \-\-highlight
+.RS
+Highlight lines with converted\-only coverage data.
+
+Use this option in conjunction with the \-\-diff option of
+.B lcov
+to highlight those lines which were only covered in data sets which were
+converted from previous source code versions.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_highlight .
+
+.RE
+.B \-\-legend
+.RS
+Include color legend in HTML output.
+
+Use this option to include a legend explaining the meaning of color coding
+in the resulting HTML output.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_legend .
+
+.RE
+.BI "\-\-html\-prolog " prolog\-file
+.RS
+Read customized HTML prolog from
+.IR prolog\-file .
+
+Use this option to replace the default HTML prolog (the initial part of the
+HTML source code leading up to and including the <body> tag) with the contents
+of
+.IR prolog\-file .
+Within the prolog text, the following words will be replaced when a page is generated:
+
+.B "@pagetitle@"
+.br
+The title of the page.
+
+.B "@basedir@"
+.br
+A relative path leading to the base directory (e.g. for locating css\-files).
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_html_prolog .
+
+.RE
+.BI "\-\-html\-epilog " epilog\-file
+.RS
+Read customized HTML epilog from
+.IR epilog\-file .
+
+Use this option to replace the default HTML epilog (the final part of the HTML
+source including </body>) with the contents of
+.IR epilog\-file .
+
+Within the epilog text, the following words will be replaced when a page is generated:
+
+.B "@basedir@"
+.br
+A relative path leading to the base directory (e.g. for locating css\-files).
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_html_epilog .
+
+.RE
+.BI "\-\-html\-extension " extension
+.RS
+Use customized filename extension for generated HTML pages.
+
+This option is useful in situations where different filename extensions
+are required to render the resulting pages correctly (e.g. php). Note that
+a '.' will be inserted between the filename and the extension specified by
+this option.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_html_extension .
+.RE
+
+.B \-\-html\-gzip
+.RS
+Compress all generated html files with gzip and add a .htaccess file specifying
+gzip\-encoding in the root output directory.
+
+Use this option if you want to save space on your webserver. Requires a
+webserver with .htaccess support and a browser with support for gzip
+compressed html.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_html_gzip .
+
+.RE
+.B \-\-sort
+.br
+.B \-\-no\-sort
+.RS
+Specify whether to include sorted views of file and directory overviews.
+
+Use \-\-sort to include sorted views or \-\-no\-sort to not include them.
+Sorted views are
+.B enabled
+by default.
+
+When sorted views are enabled, each overview page will contain links to
+views of that page sorted by coverage rate.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_sort .
+
+.RE
+.B \-\-function\-coverage
+.br
+.B \-\-no\-function\-coverage
+.RS
+Specify whether to display function coverage summaries in HTML output.
+
+Use \-\-function\-coverage to enable function coverage summaries or
+\-\-no\-function\-coverage to disable it. Function coverage summaries are
+.B enabled
+by default.
+
+When function coverage summaries are enabled, each overview page will contain
+the number of functions found and hit per file or directory, together with
+the resulting coverage rate. In addition, each source code view will contain
+a link to a page which lists all functions found in that file plus the
+respective call count for those functions.
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_function_coverage .
+
+.RE
+.B \-\-branch\-coverage
+.br
+.B \-\-no\-branch\-coverage
+.RS
+Specify whether to display branch coverage data in HTML output.
+
+Use \-\-branch\-coverage to enable branch coverage display or
+\-\-no\-branch\-coverage to disable it. Branch coverage data display is
+.B enabled
+by default.
+
+When branch coverage display is enabled, each overview page will contain
+the number of branches found and hit per file or directory, together with
+the resulting coverage rate. In addition, each source code view will contain
+an extra column which lists all branches of a line with indications of
+whether the branch was taken or not. Branches are shown in the following format:
+
+ ' + ': Branch was taken at least once
+.br
+ ' - ': Branch was not taken
+.br
+ ' # ': The basic block containing the branch was never executed
+.br
+
+This option can also be configured permanently using the configuration file
+option
+.IR genhtml_branch_coverage .
+
+.RE
+.B \-\-demangle\-cpp
+.RS
+Specify whether to demangle C++ function names.
+
+Use this option if you want to convert C++ internal function names to
+human readable format for display on the HTML function overview page.
+This option requires that the c++filt tool is installed (see
+.BR c++filt (1)).
+.RE
+
+.SH FILES
+
+.I /etc/lcovrc
+.RS
+The system\-wide configuration file.
+.RE
+
+.I ~/.lcovrc
+.RS
+The per\-user configuration file.
+.RE
+
+.SH AUTHOR
+Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+
+.SH SEE ALSO
+.BR lcov (1),
+.BR geninfo (1),
+.BR genpng (1),
+.BR gendesc (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/man/geninfo.1 b/chromium/third_party/lcov-1.9/man/geninfo.1
new file mode 100644
index 00000000000..488c1b422cb
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/geninfo.1
@@ -0,0 +1,366 @@
+.TH geninfo 1 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+.SH NAME
+geninfo \- Generate tracefiles from .da files
+.SH SYNOPSIS
+.B geninfo
+.RB [ \-h | \-\-help ]
+.RB [ \-v | \-\-version ]
+.RB [ \-q | \-\-quiet ]
+.br
+.RS 8
+.RB [ \-i | \-\-initial ]
+.RB [ \-t | \-\-test\-name
+.IR test\-name ]
+.br
+.RB [ \-o | \-\-output\-filename
+.IR filename ]
+.RB [ \-f | \-\-follow ]
+.br
+.RB [ \-b | \-\-base\-directory
+.IR directory ]
+.br
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.br
+.RB [ \-\-compat\-libtool ]
+.RB [ \-\-no\-compat\-libtool ]
+.br
+.RB [ \-\-gcov\-tool
+.IR tool ]
+.RB [ \-\-ignore\-errors
+.IR errors ]
+.br
+.RB [ \-\-no\-recursion ]
+.I directory
+.RE
+.SH DESCRIPTION
+.B geninfo
+converts all GCOV coverage data files found in
+.I directory
+into tracefiles, which the
+.B genhtml
+tool can convert to HTML output.
+
+Unless the \-\-output\-filename option is specified,
+.B geninfo
+writes its
+output to one file per .da file, the name of which is generated by simply
+appending ".info" to the respective .da file name.
+
+Note that the current user needs write access to both
+.I directory
+and the original source code location. This is necessary because
+some temporary files have to be created there during the conversion process.
+
+Note also that
+.B geninfo
+is called from within
+.BR lcov ,
+so that there is usually no need to call it directly.
+
+.B Exclusion markers
+
+To exclude specific lines of code from a tracefile, you can add exclusion
+markers to the source code. Exclusion markers are keywords which can for
+example be added in the form of a comment.
+
+The following markers are recognized by geninfo:
+
+LCOV_EXCL_LINE
+.RS
+Lines containing this marker will be excluded.
+.br
+.RE
+LCOV_EXCL_START
+.RS
+Marks the beginning of an excluded section. The current line is part of this
+section.
+.br
+.RE
+LCOV_EXCL_STOP
+.RS
+Marks the end of an excluded section. The current line is not part of this
+section.
+.RE
+.br
+
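The marker semantics described above can be sketched in Python (an illustrative approximation only; geninfo itself is a Perl script and its exact matching rules may differ):

```python
def excluded_lines(source_lines):
    """Return the set of 1-based line numbers excluded by LCOV markers.

    Sketch of the marker semantics described above: LCOV_EXCL_LINE excludes
    a single line, while LCOV_EXCL_START/LCOV_EXCL_STOP delimit a section
    where the START line is part of the section and the STOP line is not.
    """
    excluded = set()
    in_section = False
    for lineno, text in enumerate(source_lines, start=1):
        if "LCOV_EXCL_STOP" in text:
            in_section = False   # the STOP line itself is not excluded
        if "LCOV_EXCL_START" in text:
            in_section = True    # the START line itself is excluded
        if in_section or "LCOV_EXCL_LINE" in text:
            excluded.add(lineno)
    return excluded
```

Coverage data for the returned line numbers would simply be omitted from the resulting tracefile.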
+.SH OPTIONS
+
+.B \-b
+.I directory
+.br
+.B \-\-base\-directory
+.I directory
+.br
+.RS
+.RI "Use " directory
+as base directory for relative paths.
+
+Use this option to specify the base directory of a build\-environment
+when geninfo produces error messages like:
+
+.RS
+ERROR: could not read source file /home/user/project/subdir1/subdir2/subdir1/subdir2/file.c
+.RE
+
+In this example, use /home/user/project as base directory.
+
+This option is required when using geninfo on projects built with libtool or
+similar build environments that work with a base directory, i.e. environments
+where the current working directory when invoking the compiler is not the same
+directory in which the source code file is located.
+
+Note that this option will not work in environments where multiple base
+directories are used. In that case repeat the geninfo call for each base
+directory while using the \-\-ignore\-errors option to prevent geninfo from
+exiting when the first source code file could not be found. This way you can
+get partial coverage information for each base directory which can then be
+combined using the \-a option.
+.RE
+
+.B \-\-checksum
+.br
+.B \-\-no\-checksum
+.br
+.RS
+Specify whether to generate checksum data when writing tracefiles.
+
+Use \-\-checksum to enable checksum generation or \-\-no\-checksum to
+disable it. Checksum generation is
+.B disabled
+by default.
+
+When checksum generation is enabled, a checksum will be generated for each
+source code line and stored along with the coverage data. This checksum will
+be used to prevent attempts to combine coverage data from different source
+code versions.
+
+If you don't work with different source code versions, disable this option
+to speed up coverage data processing and to reduce the size of tracefiles.
+.RE
+
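The effect of checksum generation can be illustrated with a short Python sketch. The geninfo man page states that an MD5 hash is computed per source code line; hashing the raw bytes of the line, as done here, is an assumption made for illustration and may not match geninfo's exact input normalisation:

```python
import hashlib

def line_checksum(source_line):
    """Hypothetical per-line checksum in the spirit of --checksum.

    An MD5 hash per source code line is documented; hashing the raw UTF-8
    bytes of the line is an assumption of this sketch.
    """
    return hashlib.md5(source_line.encode("utf-8")).hexdigest()

# Combining data from different source code versions can be detected
# because the stored checksum for a given line number no longer matches.
```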
+.B \-\-compat\-libtool
+.br
+.B \-\-no\-compat\-libtool
+.br
+.RS
+Specify whether to enable libtool compatibility mode.
+
+Use \-\-compat\-libtool to enable libtool compatibility mode or \-\-no\-compat\-libtool
+to disable it. The libtool compatibility mode is
+.B enabled
+by default.
+
+When libtool compatibility mode is enabled, geninfo will assume that the source
+code relating to a .da file located in a directory named ".libs" can be
+found in its parent directory.
+
+If you have directories named ".libs" in your build environment but don't use
+libtool, disable this option to prevent problems when capturing coverage data.
+.RE
+
+.B \-f
+.br
+.B \-\-follow
+.RS
+Follow links when searching .da files.
+.RE
+
+.B \-\-gcov\-tool
+.I tool
+.br
+.RS
+Specify the location of the gcov tool.
+.RE
+
+.B \-h
+.br
+.B \-\-help
+.RS
+Print a short help text, then exit.
+.RE
+
+.B \-\-ignore\-errors
+.I errors
+.br
+.RS
+Specify a list of errors after which to continue processing.
+
+Use this option to specify a list of one or more classes of errors after which
+geninfo should continue processing instead of aborting.
+
+.I errors
+can be a comma\-separated list of the following keywords:
+
+.B gcov:
+the gcov tool returned with a non\-zero return code.
+
+.B source:
+the source code file for a data set could not be found.
+.RE
+
+.B \-i
+.br
+.B \-\-initial
+.RS
+Capture initial zero coverage data.
+
+Run geninfo with this option on the directories containing .bb, .bbg or .gcno
+files before running any test case. The result is a "baseline" coverage data
+file that contains zero coverage for every instrumented line and function.
+Combine this data file (using lcov \-a) with coverage data files captured
+after a test run to ensure that the percentage of total lines covered is
+correct even when not all object code files were loaded during the test.
+
+Note: currently, the \-\-initial option does not generate branch coverage
+information.
+.RE
+
+.B \-\-no\-recursion
+.br
+.RS
+Use this option if you want to get coverage data for the specified directory
+only, without processing subdirectories.
+.RE
+
+.BI "\-o " output\-filename
+.br
+.BI "\-\-output\-filename " output\-filename
+.RS
+Write all data to
+.IR output\-filename .
+
+If you want to have all data written to a single file (for easier
+handling), use this option to specify the respective filename. By default,
+one tracefile will be created for each processed .da file.
+.RE
+
+.B \-q
+.br
+.B \-\-quiet
+.RS
+Do not print progress messages.
+
+Suppresses all informational progress output. When this switch is enabled,
+only error or warning messages are printed.
+.RE
+
+.BI "\-t " testname
+.br
+.BI "\-\-test\-name " testname
+.RS
+Use test case name
+.I testname
+for resulting data. Valid test case names can consist of letters, decimal
+digits and the underscore character ('_').
+
+This proves useful when data from several test cases is merged (e.g. by
+simply concatenating the respective tracefiles) in which case a test
+name can be used to differentiate between data from each test case.
+.RE
+
+.B \-v
+.br
+.B \-\-version
+.RS
+Print version number, then exit.
+.RE
+
+
+.SH FILES
+
+.I /etc/lcovrc
+.RS
+The system\-wide configuration file.
+.RE
+
+.I ~/.lcovrc
+.RS
+The per\-user configuration file.
+.RE
+
+Following is a quick description of the tracefile format as used by
+.BR genhtml ", " geninfo " and " lcov .
+
+A tracefile is made up of several human\-readable lines of text,
+divided into sections. If available, a tracefile begins with the
+.I testname
+which is stored in the following format:
+
+ TN:<test name>
+
+For each source file referenced in the .da file, there is a section containing
+filename and coverage data:
+
+ SF:<absolute path to the source file>
+
+Following is a list of line numbers for each function name found in the
+source file:
+
+ FN:<line number of function start>,<function name>
+
+Next, there is a list of execution counts for each instrumented function:
+
+ FNDA:<execution count>,<function name>
+
+This list is followed by two lines containing the number of functions found
+and hit:
+
+ FNF:<number of functions found>
+ FNH:<number of functions hit>
+
+Branch coverage information is stored with one line per branch:
+
+ BRDA:<line number>,<block number>,<branch number>,<taken>
+
+Block number and branch number are gcc internal IDs for the branch. Taken is
+either '-' if the basic block containing the branch was never executed or
+a number indicating how often that branch was taken.
+
+Branch coverage summaries are stored in two lines:
+
+ BRF:<number of branches found>
+ BRH:<number of branches hit>
+
+Then there is a list of execution counts for each instrumented line
+(i.e. a line which resulted in executable code):
+
+ DA:<line number>,<execution count>[,<checksum>]
+
+Note that there may be an optional checksum present for each instrumented
+line. The current
+.B geninfo
+implementation uses an MD5 hash as the checksumming algorithm.
+
+At the end of a section, there is a summary of how many lines were hit
+and how many were instrumented in total:
+
+ LH:<number of lines with a non\-zero execution count>
+ LF:<number of instrumented lines>
+
+Each section ends with:
+
+ end_of_record
+
+In addition to the main source code file there are sections for all
+#included files which also contain executable code.
+
+Note that the absolute path of a source file is generated by interpreting
+the contents of the respective .bb file (see
+.BR "gcov " (1)
+for more information on this file type). Relative filenames are prefixed
+with the directory in which the .bb file is found.
+
+Note also that symbolic links to the .bb file will be resolved so that the
+actual file path is used instead of the path to a link. This approach is
+necessary for the mechanism to work with the /proc/gcov files.
+
+.SH AUTHOR
+Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+
+.SH SEE ALSO
+.BR lcov (1),
+.BR genhtml (1),
+.BR genpng (1),
+.BR gendesc (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/man/genpng.1 b/chromium/third_party/lcov-1.9/man/genpng.1
new file mode 100644
index 00000000000..b2a46ea00e7
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/genpng.1
@@ -0,0 +1,101 @@
+.TH genpng 1 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+.SH NAME
+genpng \- Generate an overview image from a source file
+.SH SYNOPSIS
+.B genpng
+.RB [ \-h | \-\-help ]
+.RB [ \-v | \-\-version ]
+.RS 7
+.br
+.RB [ \-t | \-\-tab\-size
+.IR tabsize ]
+.RB [ \-w | \-\-width
+.IR width ]
+.br
+.RB [ \-o | \-\-output\-filename
+.IR output\-filename ]
+.br
+.IR source\-file
+.SH DESCRIPTION
+.B genpng
+creates an overview image for a given source code file of either
+plain text or .gcov file format.
+
+Note that the
+.I GD.pm
+Perl module has to be installed for this script to work
+(it may be obtained from
+.IR http://www.cpan.org ).
+
+Note also that
+.B genpng
+is called from within
+.B genhtml
+so that there is usually no need to call it directly.
+
+.SH OPTIONS
+.B \-h
+.br
+.B \-\-help
+.RS
+Print a short help text, then exit.
+.RE
+
+.B \-v
+.br
+.B \-\-version
+.RS
+Print version number, then exit.
+.RE
+
+.BI "\-t " tab\-size
+.br
+.BI "\-\-tab\-size " tab\-size
+.RS
+Use
+.I tab\-size
+spaces in place of each tab.
+
+All occurrences of tab characters in the source code file will be replaced
+by the number of spaces defined by
+.I tab\-size
+(default is 4).
+.RE
+
+.BI "\-w " width
+.br
+.BI "\-\-width " width
+.RS
+Set width of output image to
+.I width
+pixels.
+
+The resulting image will be exactly
+.I width
+pixels wide (default is 80).
+
+Note that source code lines which are longer than
+.I width
+will be truncated.
+.RE
+
+
+.BI "\-o " filename
+.br
+.BI "\-\-output\-filename " filename
+.RS
+Write image to
+.IR filename .
+
+Specify a name for the resulting image file (default is
+.IR source\-file .png).
+.RE
+.SH AUTHOR
+Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+
+.SH SEE ALSO
+.BR lcov (1),
+.BR genhtml (1),
+.BR geninfo (1),
+.BR gendesc (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/man/lcov.1 b/chromium/third_party/lcov-1.9/man/lcov.1
new file mode 100644
index 00000000000..184c5b45213
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/lcov.1
@@ -0,0 +1,707 @@
+.TH lcov 1 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+.SH NAME
+lcov \- a graphical GCOV front\-end
+.SH SYNOPSIS
+.B lcov
+.BR \-c | \-\-capture
+.RS 5
+.br
+.RB [ \-d | \-\-directory
+.IR directory ]
+.RB [ \-k | \-\-kernel\-directory
+.IR directory ]
+.br
+.RB [ \-o | \-\-output\-file
+.IR tracefile ]
+.RB [ \-t | \-\-test\-name
+.IR testname ]
+.br
+.RB [ \-b | \-\-base\-directory
+.IR directory ]
+.RB [ \-i | \-\-initial ]
+.RB [ \-\-gcov\-tool
+.IR tool ]
+.br
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.RB [ \-\-no\-recursion ]
+.RB [ \-f | \-\-follow ]
+.br
+.RB [ \-\-compat\-libtool ]
+.RB [ \-\-no\-compat\-libtool ]
+.RB [ \-\-ignore\-errors
+.IR errors ]
+.br
+.RB [ \-\-to\-package
+.IR package ]
+.RB [ \-\-from\-package
+.IR package ]
+.RB [ \-q | \-\-quiet ]
+.br
+.RB [ \-\-no\-markers ]
+.br
+.RE
+
+.B lcov
+.BR \-z | \-\-zerocounters
+.RS 5
+.br
+.RB [ \-d | \-\-directory
+.IR directory ]
+.RB [ \-\-no\-recursion ]
+.RB [ \-f | \-\-follow ]
+.br
+.RB [ \-q | \-\-quiet ]
+.br
+.RE
+
+.B lcov
+.BR \-l | \-\-list
+.I tracefile
+.RS 5
+.br
+.RB [ \-q | \-\-quiet ]
+.RB [ \-\-list\-full\-path ]
+.RB [ \-\-no\-list\-full\-path ]
+.br
+.RE
+
+.B lcov
+.BR \-a | \-\-add\-tracefile
+.I tracefile
+.RS 5
+.br
+.RB [ \-o | \-\-output\-file
+.IR tracefile ]
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.br
+.RB [ \-q | \-\-quiet ]
+.br
+.RE
+
+.B lcov
+.BR \-e | \-\-extract
+.I tracefile pattern
+.RS 5
+.br
+.RB [ \-o | \-\-output\-file
+.IR tracefile ]
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.br
+.RB [ \-q | \-\-quiet ]
+.RE
+
+.B lcov
+.BR \-r | \-\-remove
+.I tracefile pattern
+.RS 5
+.br
+.RB [ \-o | \-\-output\-file
+.IR tracefile ]
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.br
+.RB [ \-q | \-\-quiet ]
+.br
+.RE
+
+.B lcov
+.BR \-\-diff
+.IR "tracefile diff"
+.RS 5
+.br
+.RB [ \-o | \-\-output\-file
+.IR tracefile ]
+.RB [ \-\-checksum ]
+.RB [ \-\-no\-checksum ]
+.br
+.RB [ \-\-convert\-filenames ]
+.RB [ \-\-strip
+.IR depth ]
+.RB [ \-\-path
+.IR path ]
+.RB [ \-q | \-\-quiet ]
+.br
+.RE
+
+.B lcov
+.RB [ \-h | \-\-help ]
+.RB [ \-v | \-\-version ]
+.RS 5
+.br
+.RE
+
+.SH DESCRIPTION
+.B lcov
+is a graphical front\-end for GCC's coverage testing tool gcov. It collects
+line, function and branch coverage data for multiple source files and creates
+HTML pages containing the source code annotated with coverage information.
+It also adds overview pages for easy navigation within the file structure.
+
+Use
+.B lcov
+to collect coverage data and
+.B genhtml
+to create HTML pages. Coverage data can either be collected from the
+currently running Linux kernel or from a user space application. To do this,
+you have to complete the following preparation steps:
+
+For Linux kernel coverage:
+.RS
+Follow the setup instructions for the gcov\-kernel infrastructure:
+.I http://ltp.sourceforge.net/coverage/gcov.php
+.br
+
+
+.RE
+For user space application coverage:
+.RS
+Compile the application with GCC using the options
+"\-fprofile\-arcs" and "\-ftest\-coverage".
+.RE
+
+Please note that this man page refers to the output format of
+.B lcov
+as ".info file" or "tracefile" and that the output of GCOV
+is called ".da file".
+.SH OPTIONS
+
+
+.B \-a
+.I tracefile
+.br
+.B \-\-add\-tracefile
+.I tracefile
+.br
+.RS
+Add contents of
+.IR tracefile .
+
+Specify several tracefiles using the \-a switch to combine the coverage data
+contained in these files by adding up execution counts for matching test and
+filename combinations.
+
+The result of the add operation will be written to stdout or the tracefile
+specified with \-o.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+
+.RE
+
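The add operation described above (summing execution counts for matching test and filename combinations) can be sketched for a single file's line data:

```python
def merge_line_counts(a, b):
    """Combine two {line number: execution count} mappings the way -a does:
    counts for matching lines are added up. Sketch for one file entry only;
    real tracefiles are merged per test name and file name combination."""
    merged = dict(a)
    for lineno, count in b.items():
        merged[lineno] = merged.get(lineno, 0) + count
    return merged
```

This is also why combining a zero-coverage baseline (see \-\-initial) with test data works: adding zero counts leaves hit lines unchanged while still recording every instrumented line.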
+.B \-b
+.I directory
+.br
+.B \-\-base\-directory
+.I directory
+.br
+.RS
+.RI "Use " directory
+as base directory for relative paths.
+
+Use this option to specify the base directory of a build\-environment
+when lcov produces error messages like:
+
+.RS
+ERROR: could not read source file /home/user/project/subdir1/subdir2/subdir1/subdir2/file.c
+.RE
+
+In this example, use /home/user/project as base directory.
+
+This option is required when using lcov on projects built with libtool or
+similar build environments that work with a base directory, i.e. environments
+where the current working directory when invoking the compiler is not the same
+directory in which the source code file is located.
+
+Note that this option will not work in environments where multiple base
+directories are used. In that case repeat the lcov call for each base directory
+while using the \-\-ignore\-errors option to prevent lcov from exiting when the
+first source code file could not be found. This way you can get partial coverage
+information for each base directory which can then be combined using the \-a
+option.
+.RE
+
+.B \-c
+.br
+.B \-\-capture
+.br
+.RS
+Capture coverage data.
+
+By default, lcov captures the current kernel execution counts and writes the
+resulting coverage data to the standard output. Use the \-\-directory
+option to capture counts for a user space program.
+
+The result of the capture operation will be written to stdout or the tracefile
+specified with \-o.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.B \-\-checksum
+.br
+.B \-\-no\-checksum
+.br
+.RS
+Specify whether to generate checksum data when writing tracefiles.
+
+Use \-\-checksum to enable checksum generation or \-\-no\-checksum to
+disable it. Checksum generation is
+.B disabled
+by default.
+
+When checksum generation is enabled, a checksum will be generated for each
+source code line and stored along with the coverage data. This checksum will
+be used to prevent attempts to combine coverage data from different source
+code versions.
+
+If you don't work with different source code versions, disable this option
+to speed up coverage data processing and to reduce the size of tracefiles.
+.RE
+
+.B \-\-compat\-libtool
+.br
+.B \-\-no\-compat\-libtool
+.br
+.RS
+Specify whether to enable libtool compatibility mode.
+
+Use \-\-compat\-libtool to enable libtool compatibility mode or \-\-no\-compat\-libtool
+to disable it. The libtool compatibility mode is
+.B enabled
+by default.
+
+When libtool compatibility mode is enabled, lcov will assume that the source
+code relating to a .da file located in a directory named ".libs" can be
+found in its parent directory.
+
+If you have directories named ".libs" in your build environment but don't use
+libtool, disable this option to prevent problems when capturing coverage data.
+.RE
+
+.B \-\-convert\-filenames
+.br
+.RS
+Convert filenames when applying diff.
+
+Use this option together with \-\-diff to rename the file names of processed
+data sets according to the data provided by the diff.
+.RE
+
+.B \-\-diff
+.I tracefile
+.I difffile
+.br
+.RS
+Convert coverage data in
+.I tracefile
+using source code diff file
+.IR difffile .
+
+Use this option if you want to merge coverage data from different source code
+levels of a program, e.g. when you have data taken from an older version
+and want to combine it with data from a more current version.
+.B lcov
+will try to map source code lines between those versions and adjust the coverage
+data respectively.
+.I difffile
+needs to be in unified format, i.e. it has to be created using the "\-u" option
+of the
+.B diff
+tool.
+
+Note that lines which are not present in the old version will not be counted
+as instrumented, therefore tracefiles resulting from this operation should
+not be interpreted individually but together with other tracefiles taken
+from the newer version. Also keep in mind that converted coverage data should
+only be used for overview purposes as the process itself introduces a loss
+of accuracy.
+
+The result of the diff operation will be written to stdout or the tracefile
+specified with \-o.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.B \-d
+.I directory
+.br
+.B \-\-directory
+.I directory
+.br
+.RS
+Use .da files in
+.I directory
+instead of kernel.
+
+If you want to work on coverage data for a user space program, use this
+option to specify the location where the program was compiled (that's
+where the counter files ending with .da will be stored).
+
+Note that you may specify this option more than once.
+.RE
+
+.B \-e
+.I tracefile
+.I pattern
+.br
+.B \-\-extract
+.I tracefile
+.I pattern
+.br
+.RS
+Extract data from
+.IR tracefile .
+
+Use this switch if you want to extract coverage data for only a particular
+set of files from a tracefile. Additional command line parameters will be
+interpreted as shell wildcard patterns (note that they may need to be
+escaped accordingly to prevent the shell from expanding them first).
+Every file entry in
+.I tracefile
+which matches at least one of those patterns will be extracted.
+
+The result of the extract operation will be written to stdout or the tracefile
+specified with \-o.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.B \-f
+.br
+.B \-\-follow
+.br
+.RS
+Follow links when searching for .da files.
+.RE
+
+.B \-\-from\-package
+.I package
+.br
+.RS
+Use .da files in
+.I package
+instead of kernel or directory.
+
+Use this option if you have separate machines for build and test and
+want to perform the .info file creation on the build machine. See
+\-\-to\-package for more information.
+.RE
+
+.B \-\-gcov\-tool
+.I tool
+.br
+.RS
+Specify the location of the gcov tool.
+.RE
+
+.B \-h
+.br
+.B \-\-help
+.br
+.RS
+Print a short help text, then exit.
+.RE
+
+.B \-\-ignore\-errors
+.I errors
+.br
+.RS
+Specify a list of errors after which to continue processing.
+
+Use this option to specify a list of one or more classes of errors after which
+lcov should continue processing instead of aborting.
+
+.I errors
+can be a comma\-separated list of the following keywords:
+
+.B gcov:
+the gcov tool returned with a non\-zero return code.
+
+.B source:
+the source code file for a data set could not be found.
+.RE
+
+.B \-i
+.br
+.B \-\-initial
+.RS
+Capture initial zero coverage data.
+
+Run lcov with \-c and this option on the directories containing .bb, .bbg
+or .gcno files before running any test case. The result is a "baseline"
+coverage data file that contains zero coverage for every instrumented line.
+Combine this data file (using lcov \-a) with coverage data files captured
+after a test run to ensure that the percentage of total lines covered is
+correct even when not all source code files were loaded during the test.
+
+Recommended procedure when capturing data for a test case:
+
+1. create baseline coverage data file
+.RS
+# lcov \-c \-i \-d appdir \-o app_base.info
+.br
+
+.RE
+2. perform test
+.RS
+# appdir/test
+.br
+
+.RE
+3. create test coverage data file
+.RS
+# lcov \-c \-d appdir \-o app_test.info
+.br
+
+.RE
+4. combine baseline and test coverage data
+.RS
+# lcov \-a app_base.info \-a app_test.info \-o app_total.info
+.br
+
+.RE
+.RE
+
+.B \-k
+.I subdirectory
+.br
+.B \-\-kernel\-directory
+.I subdirectory
+.br
+.RS
+Capture kernel coverage data only from
+.IR subdirectory .
+
+Use this option if you don't want to get coverage data for all of the
+kernel, but only for specific subdirectories. This option may be specified
+more than once.
+
+Note that you may need to specify the full path to the kernel subdirectory
+depending on the version of the kernel gcov support.
+.RE
+
+.B \-l
+.I tracefile
+.br
+.B \-\-list
+.I tracefile
+.br
+.RS
+List the contents of the
+.IR tracefile .
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.B \-\-list\-full\-path
+.br
+.B \-\-no\-list\-full\-path
+.br
+.RS
+Specify whether to show full paths during list operation.
+
+Use \-\-list\-full\-path to show full paths during list operation
+or \-\-no\-list\-full\-path to show shortened paths. Paths are
+.B shortened
+by default.
+.RE
+
+.B \-\-no\-markers
+.br
+.RS
+Use this option if you want to get coverage data without regard to exclusion
+markers in the source code file. See
+.BR "geninfo " (1)
+for details on exclusion markers.
+.RE
+
+.B \-\-no\-recursion
+.br
+.RS
+Use this option if you want to get coverage data for the specified directory
+only, without processing subdirectories.
+.RE
+
+.B \-o
+.I tracefile
+.br
+.B \-\-output\-file
+.I tracefile
+.br
+.RS
+Write data to
+.I tracefile
+instead of stdout.
+
+Specify "\-" as a filename to use the standard output.
+
+By convention, lcov\-generated coverage data files are called "tracefiles" and
+should have the filename extension ".info".
+.RE
+
+.B \-\-path
+.I path
+.br
+.RS
+Strip path from filenames when applying diff.
+
+Use this option together with \-\-diff to tell lcov to disregard the specified
+initial path component when matching between tracefile and diff filenames.
+.RE
+
+.B \-q
+.br
+.B \-\-quiet
+.br
+.RS
+Do not print progress messages.
+
+This option is implied when no output filename is specified, to prevent
+progress messages from interfering with the coverage data that is also
+printed to the standard output.
+.RE
+
+.B \-r
+.I tracefile
+.I pattern
+.br
+.B \-\-remove
+.I tracefile
+.I pattern
+.br
+.RS
+Remove data from
+.IR tracefile .
+
+Use this switch if you want to remove coverage data for a particular
+set of files from a tracefile. Additional command line parameters will be
+interpreted as shell wildcard patterns (note that they may need to be
+escaped accordingly to prevent the shell from expanding them first).
+Every file entry in
+.I tracefile
+which matches at least one of those patterns will be removed.
+
+The result of the remove operation will be written to stdout or the tracefile
+specified with \-o.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.B \-\-strip
+.I depth
+.br
+.RS
+Strip path components when applying diff.
+
+Use this option together with \-\-diff to tell lcov to disregard the specified
+number of initial directories when matching tracefile and diff filenames.
+.RE
+
+.B \-t
+.I testname
+.br
+.B \-\-test\-name
+.I testname
+.br
+.RS
+Specify test name to be stored in the tracefile.
+
+This name identifies a coverage data set when more than one data set is merged
+into a combined tracefile (see option \-a).
+
+Valid test names can consist of letters, decimal digits and the underscore
+character ("_").
+.RE
+
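The naming rule above (letters, decimal digits and underscores) amounts to a simple character-class check; a sketch, where rejecting the empty string is an assumption not stated in the rule:

```python
import re

# Letters, decimal digits and the underscore character only, as stated above.
_VALID_TEST_NAME = re.compile(r"^[A-Za-z0-9_]+$")

def is_valid_test_name(name):
    # Rejecting the empty string is an assumption of this sketch; lcov's
    # own validation may differ in such edge cases.
    return bool(_VALID_TEST_NAME.match(name))
```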
+.B \-\-to\-package
+.I package
+.br
+.RS
+Store .da files for later processing.
+
+Use this option if you have separate machines for build and test and
+want to perform the .info file creation on the build machine. To do this,
+follow these steps:
+
+On the test machine:
+.RS
+.br
+\- run the test
+.br
+\- run lcov \-c [\-d directory] \-\-to\-package
+.I file
+.br
+\- copy
+.I file
+to the build machine
+.RE
+.br
+
+On the build machine:
+.RS
+.br
+\- run lcov \-c \-\-from\-package
+.I file
+[\-o and other options]
+.RE
+.br
+
+This works for both kernel and user space coverage data. Note that you might
+have to specify the path to the build directory using \-b with
+either \-\-to\-package or \-\-from\-package. Note also that the package data
+must be converted to a .info file before recompiling the program or it will
+become invalid.
+.RE
+
+.B \-v
+.br
+.B \-\-version
+.br
+.RS
+Print version number, then exit.
+.RE
+
+.B \-z
+.br
+.B \-\-zerocounters
+.br
+.RS
+Reset all execution counts to zero.
+
+By default, lcov tries to reset kernel execution counts. Use the \-\-directory
+option to reset all counters of a user space program.
+
+Only one of \-z, \-c, \-a, \-e, \-r, \-l and \-\-diff may be specified
+at a time.
+.RE
+
+.SH FILES
+
+.I /etc/lcovrc
+.RS
+The system\-wide configuration file.
+.RE
+
+.I ~/.lcovrc
+.RS
+The per\-user configuration file.
+.RE
+
+.SH AUTHOR
+Peter Oberparleiter <Peter.Oberparleiter@de.ibm.com>
+
+.SH SEE ALSO
+.BR lcovrc (5),
+.BR genhtml (1),
+.BR geninfo (1),
+.BR genpng (1),
+.BR gendesc (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/man/lcovrc.5 b/chromium/third_party/lcov-1.9/man/lcovrc.5
new file mode 100644
index 00000000000..3258ba04da0
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/man/lcovrc.5
@@ -0,0 +1,608 @@
+.TH lcovrc 5 "LCOV 1.9" 2010\-08\-06 "User Manuals"
+
+.SH NAME
+lcovrc \- lcov configuration file
+
+.SH DESCRIPTION
+The
+.I lcovrc
+file contains configuration information for the
+.B lcov
+code coverage tool (see
+.BR lcov (1)).
+.br
+
+The system\-wide configuration file is located at
+.IR /etc/lcovrc .
+To change settings for a single user, place a customized copy of this file at
+location
+.IR ~/.lcovrc .
+Where available, command\-line options override configuration file settings.
+
+Lines in a configuration file can either be:
+.IP " *"
+empty lines or lines consisting only of white space characters. These lines are
+ignored.
+.IP " *"
+comment lines which start with a hash sign ('#'). These are treated like empty
+lines and will be ignored.
+.IP " *"
+statements in the form
+.RI ' key " = " value '.
+A list of valid statements and their description can be found in
+section 'OPTIONS' below.
+.PP
+
+.B Example configuration:
+.IP
+#
+.br
+# Example LCOV configuration file
+.br
+#
+.br
+
+# External style sheet file
+.br
+#genhtml_css_file = gcov.css
+.br
+
+# Coverage rate limits
+.br
+genhtml_hi_limit = 90
+.br
+genhtml_med_limit = 75
+.br
+
+# Width of line coverage field in source code view
+.br
+genhtml_line_field_width = 12
+.br
+
+# Width of branch coverage field in source code view
+.br
+genhtml_branch_field_width = 16
+.br
+
+# Width of overview image
+.br
+genhtml_overview_width = 80
+.br
+
+# Resolution of overview navigation
+.br
+genhtml_nav_resolution = 4
+.br
+
+# Offset for source code navigation
+.br
+genhtml_nav_offset = 10
+.br
+
+# Do not remove unused test descriptions if non\-zero
+.br
+genhtml_keep_descriptions = 0
+.br
+
+# Do not remove prefix from directory names if non\-zero
+.br
+genhtml_no_prefix = 0
+.br
+
+# Do not create source code view if non\-zero
+.br
+genhtml_no_source = 0
+.br
+
+# Specify size of tabs
+.br
+genhtml_num_spaces = 8
+.br
+
+# Highlight lines with converted\-only data if non\-zero
+.br
+genhtml_highlight = 0
+.br
+
+# Include color legend in HTML output if non\-zero
+.br
+genhtml_legend = 0
+.br
+
+# Include HTML file at start of HTML output
+.br
+#genhtml_html_prolog = prolog.html
+.br
+
+# Include HTML file at end of HTML output
+.br
+#genhtml_html_epilog = epilog.html
+.br
+
+# Use custom HTML file extension
+.br
+#genhtml_html_extension = html
+.br
+
+# Compress all generated html files with gzip.
+.br
+#genhtml_html_gzip = 1
+.br
+
+# Include sorted overview pages
+.br
+genhtml_sort = 1
+.br
+
+# Include function coverage data display
+.br
+genhtml_function_coverage = 1
+.br
+
+# Include branch coverage data display
+.br
+genhtml_branch_coverage = 1
+.br
+
+# Location of the gcov tool
+.br
+#geninfo_gcov_tool = gcov
+.br
+
+# Adjust test names if non\-zero
+.br
+#geninfo_adjust_testname = 0
+.br
+
+# Calculate a checksum for each line if non\-zero
+.br
+geninfo_checksum = 0
+.br
+
+# Enable libtool compatibility mode if non\-zero
+.br
+geninfo_compat_libtool = 0
+.br
+
+# Directory containing gcov kernel files
+.br
+#lcov_gcov_dir = /proc/gcov
+.br
+
+# Location for temporary directories
+.br
+lcov_tmp_dir = /tmp
+.br
+
+# Show full paths during list operation if non\-zero
+.br
+lcov_list_full_path = 0
+.br
+
+# Specify the maximum width for list output. This value is
+.br
+# ignored when lcov_list_full_path is non\-zero.
+.br
+lcov_list_width = 80
+.br
+
+# Specify the maximum percentage of file names which may be
+.br
+# truncated when choosing a directory prefix in list output.
+.br
+# This value is ignored when lcov_list_full_path is non\-zero.
+.br
+lcov_list_truncate_max = 20
+.PP
+
+.SH OPTIONS
+
+.BR genhtml_css_file " ="
+.I filename
+.IP
+Specify an external style sheet file. Use this option to modify the appearance of the HTML output as generated by
+.BR genhtml .
+During output generation, a copy of this file will be placed in the output
+directory.
+.br
+
+This option corresponds to the \-\-css\-file command line option of
+.BR genhtml .
+.br
+
+By default, a standard CSS file is generated.
+.PP
+
+.BR genhtml_hi_limit " ="
+.I hi_limit
+.br
+.BR genhtml_med_limit " ="
+.I med_limit
+.br
+.IP
+Specify coverage rate limits for classifying file entries. Use this option to
+modify the coverage rates (in percent) for line, function and branch coverage at
+which a result is classified as high, medium or low coverage. This
+classification affects the color of the corresponding entries on the overview
+pages of the HTML output:
+.br
+
+High: hi_limit <= rate <= 100 default color: green
+.br
+Medium: med_limit <= rate < hi_limit default color: orange
+.br
+Low: 0 <= rate < med_limit default color: red
+.br
+
+Defaults are 90 and 75 percent.
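+.br
+
+For example, a stricter classification (hypothetical values) could be
+configured as:
+.br
+
+genhtml_hi_limit = 95
+.br
+genhtml_med_limit = 85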
+.PP
+
+.BR genhtml_line_field_width " ="
+.I number_of_characters
+.IP
+Specify the width (in characters) of the source code view column containing
+line coverage information.
+.br
+
+Default is 12.
+.PP
+
+.BR genhtml_branch_field_width " ="
+.I number_of_characters
+.IP
+Specify the width (in characters) of the source code view column containing
+branch coverage information.
+.br
+
+Default is 16.
+.PP
+
+.BR genhtml_overview_width " ="
+.I pixel_size
+.IP
+Specify the width (in pixels) of the overview image created when generating HTML
+output using the \-\-frames option of
+.BR genhtml .
+.br
+
+Default is 80.
+.PP
+
+.BR genhtml_nav_resolution " ="
+.I lines
+.IP
+Specify the resolution of overview navigation when generating HTML output using
+the \-\-frames option of
+.BR genhtml .
+This number specifies the maximum difference in lines between the position a
+user selected from the overview and the position the source code window is
+scrolled to.
+.br
+
+Default is 4.
+.PP
+
+
+.BR genhtml_nav_offset " ="
+.I lines
+.IP
+Specify the overview navigation line offset as applied when generating HTML
+output using the \-\-frames option of
+.BR genhtml .
+.br
+
+Clicking a line in the overview image should show the source code view at
+a position a bit further up, so that the requested line is not the first
+line in the window. This number specifies that offset.
+.br
+
+Default is 10.
+.PP
+
+
+.BR genhtml_keep_descriptions " ="
+.IR 0 | 1
+.IP
+If non\-zero, keep unused test descriptions when generating HTML output using
+.BR genhtml .
+.br
+
+This option corresponds to the \-\-keep\-descriptions option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_no_prefix " ="
+.IR 0 | 1
+.IP
+If non\-zero, do not try to find and remove a common prefix from directory names.
+.br
+
+This option corresponds to the \-\-no\-prefix option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_no_source " ="
+.IR 0 | 1
+.IP
+If non\-zero, do not create a source code view when generating HTML output using
+.BR genhtml .
+.br
+
+This option corresponds to the \-\-no\-source option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_num_spaces " ="
+.I num
+.IP
+Specify the number of spaces to use as replacement for tab characters in the
+HTML source code view as generated by
+.BR genhtml .
+.br
+
+This option corresponds to the \-\-num\-spaces option of
+.BR genhtml .
+.br
+
+Default is 8.
+.PP
+
+.BR genhtml_highlight " ="
+.IR 0 | 1
+.IP
+If non\-zero, highlight lines with converted\-only data in
+HTML output as generated by
+.BR genhtml .
+.br
+
+This option corresponds to the \-\-highlight option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_legend " ="
+.IR 0 | 1
+.IP
+If non\-zero, include a legend explaining the meaning of color coding in the HTML
+output as generated by
+.BR genhtml .
+.br
+
+This option corresponds to the \-\-legend option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_html_prolog " ="
+.I filename
+.IP
+If set, include the contents of the specified file at the beginning of HTML
+output.
+
+This option corresponds to the \-\-html\-prolog option of
+.BR genhtml .
+.br
+
+Default is to use no extra prolog.
+.PP
+
+.BR genhtml_html_epilog " ="
+.I filename
+.IP
+If set, include the contents of the specified file at the end of HTML output.
+
+This option corresponds to the \-\-html\-epilog option of
+.BR genhtml .
+.br
+
+Default is to use no extra epilog.
+.PP
+
+.BR genhtml_html_extension " ="
+.I extension
+.IP
+If set, use the specified string as filename extension for generated HTML files.
+
+This option corresponds to the \-\-html\-extension option of
+.BR genhtml .
+.br
+
+Default extension is "html".
+.PP
+
+.BR genhtml_html_gzip " ="
+.IR 0 | 1
+.IP
+If non\-zero, compress all generated HTML files using gzip.
+
+This option corresponds to the \-\-html\-gzip option of
+.BR genhtml .
+.br
+
+Default is 0.
+.PP
+
+.BR genhtml_sort " ="
+.IR 0 | 1
+.IP
+If non\-zero, create overview pages sorted by coverage rates when generating
+HTML output using
+.BR genhtml .
+.br
+
+This option can be set to 0 by using the \-\-no\-sort option of
+.BR genhtml .
+.br
+
+Default is 1.
+.PP
+
+.BR genhtml_function_coverage " ="
+.IR 0 | 1
+.IP
+If non\-zero, include function coverage data when generating HTML output using
+.BR genhtml .
+.br
+
+This option can be set to 0 by using the \-\-no\-function\-coverage option of
+.BR genhtml .
+.br
+
+Default is 1.
+.PP
+
+.BR genhtml_branch_coverage " ="
+.IR 0 | 1
+.IP
+If non\-zero, include branch coverage data when generating HTML output using
+.BR genhtml .
+.br
+
+This option can be set to 0 by using the \-\-no\-branch\-coverage option of
+.BR genhtml .
+.br
+
+Default is 1.
+.PP
+
+.BR geninfo_gcov_tool " ="
+.I path_to_gcov
+.IP
+Specify the location of the gcov tool (see
+.BR gcov (1))
+which is used to generate coverage information from data files.
+.br
+
+Default is 'gcov'.
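+.br
+
+For example, when coverage data was produced by a cross\-compilation
+toolchain, the matching gcov binary can be specified (the toolchain name
+below is hypothetical):
+.br
+
+geninfo_gcov_tool = arm\-linux\-gnueabihf\-gcov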
+.PP
+
+.BR geninfo_adjust_testname " ="
+.IR 0 | 1
+.IP
+If non\-zero, adjust test names to include operating system information
+when capturing coverage data.
+.br
+
+Default is 0.
+.PP
+
+.BR geninfo_checksum " ="
+.IR 0 | 1
+.IP
+If non\-zero, generate source code checksums when capturing coverage data.
+Checksums are useful to prevent merging coverage data from incompatible
+source code versions but checksum generation increases the size of coverage
+files and the time used to generate those files.
+.br
+
+This option corresponds to the \-\-checksum and \-\-no\-checksum command line
+options of
+.BR geninfo .
+.br
+
+Default is 0.
+.PP
+
+.BR geninfo_compat_libtool " ="
+.IR 0 | 1
+.IP
+If non\-zero, enable libtool compatibility mode. When libtool compatibility
+mode is enabled, lcov will assume that the source code relating to a .da file
+located in a directory named ".libs" can be found in its parent directory.
+.br
+
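+For example, coverage data found in a hypothetical file
+'project/.libs/util.da' would be associated with source files located in
+'project/'.
+.br
+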
+This option corresponds to the \-\-compat\-libtool and \-\-no\-compat\-libtool
+command line options of
+.BR geninfo .
+.br
+
+Default is 1.
+.PP
+
+.BR lcov_gcov_dir " ="
+.I path_to_kernel_coverage_data
+.IP
+Specify the path to the directory where kernel coverage data can be found
+or leave undefined for auto\-detection.
+.br
+
+Default is auto\-detection.
+.PP
+
+.BR lcov_tmp_dir " ="
+.I temp
+.IP
+Specify the location of a directory used for temporary files.
+.br
+
+Default is '/tmp'.
+.PP
+
+.BR lcov_list_full_path " ="
+.IR 0 | 1
+.IP
+If non\-zero, print the full path to source code files during a list operation.
+.br
+
+This option corresponds to the \-\-list\-full\-path option of
+.BR lcov .
+.br
+
+Default is 0.
+.PP
+
+.BR lcov_list_width " ="
+.I width
+.IP
+Specify the maximum width for list output. This value is ignored when
+lcov_list_full_path is non\-zero.
+.br
+
+Default is 80.
+.PP
+
+.BR lcov_list_truncate_max " ="
+.I percentage
+.IP
+Specify the maximum percentage of file names which may be truncated when
+choosing a directory prefix in list output. This value is ignored when
+lcov_list_full_path is non\-zero.
+.br
+
+Default is 20.
+.PP
+
+.SH FILES
+
+.TP
+.I /etc/lcovrc
+The system\-wide
+.B lcov
+configuration file.
+
+.TP
+.I ~/.lcovrc
+The individual per\-user configuration file.
+.PP
+
+.SH SEE ALSO
+.BR lcov (1),
+.BR genhtml (1),
+.BR geninfo (1),
+.BR gcov (1)
diff --git a/chromium/third_party/lcov-1.9/rpm/lcov.spec b/chromium/third_party/lcov-1.9/rpm/lcov.spec
new file mode 100644
index 00000000000..ae9b8bf1580
--- /dev/null
+++ b/chromium/third_party/lcov-1.9/rpm/lcov.spec
@@ -0,0 +1,48 @@
+Summary: A graphical GCOV front-end
+Name: lcov
+Version: 1.9
+Release: 1
+License: GPL
+Group: Development/Tools
+URL: http://ltp.sourceforge.net/coverage/lcov.php
+Source0: http://downloads.sourceforge.net/ltp/lcov-%{version}.tar.gz
+BuildRoot: /var/tmp/%{name}-%{version}-root
+BuildArch: noarch
+
+%description
+LCOV is a graphical front-end for GCC's coverage testing tool gcov. It collects
+gcov data for multiple source files and creates HTML pages containing the
+source code annotated with coverage information. It also adds overview pages
+for easy navigation within the file structure.
+
+%prep
+%setup -q -n lcov-%{version}
+
+%build
+exit 0
+
+%install
+rm -rf $RPM_BUILD_ROOT
+make install PREFIX=$RPM_BUILD_ROOT
+
+%clean
+rm -rf $RPM_BUILD_ROOT
+
+%files
+%defattr(-,root,root)
+/usr/bin
+/usr/share
+/etc
+
+%changelog
+* Wed Aug 13 2008 Peter Oberparleiter (Peter.Oberparleiter@de.ibm.com)
+- changed description + summary text
+* Mon Aug 20 2007 Peter Oberparleiter (Peter.Oberparleiter@de.ibm.com)
+- fixed "Copyright" tag
+* Mon Jul 14 2003 Peter Oberparleiter (Peter.Oberparleiter@de.ibm.com)
+- removed variables for version/release to support source rpm building
+- added initial rm command in install section
+* Mon Apr 7 2003 Peter Oberparleiter (Peter.Oberparleiter@de.ibm.com)
+- implemented variables for version/release
+* Fri Oct 8 2002 Peter Oberparleiter (Peter.Oberparleiter@de.ibm.com)
+- created initial spec file