
@gist https://gist.github.com/tronhammer/df651e353be29f2ff221

Performance Audit Suite

This suite is designed to audit data flow, load times and many other facets across the realms of the Ontraport landscape. Once enough metrics have been gathered from a sufficient number of audits, custom reports can be generated and queried by time range with selective verbosity, then rendered into graphs and tables.


Setup

Prior to using the Performance Audit Suite, the database and user must be created. Depending on the types of audits desired, some additional dependencies may be required as well.

Basic

To perform a standard setup, you should only need to run make install CONF="performance-audit.conf". The config file has already been set up for the standard Ontraport environment.

Currently, the default install will set up the following:

  • The performance_audit MySQL database and tables.
  • The op_perfaudit MySQL user.
  • The xhprof PHP extension.
  • The xdebug PHP extension.
  • Modifications to the .htaccess file.

Setup is done through a bash script which loads a config file containing the database credentials, then attempts to connect to the database to see if it exists yet. If it doesn't, the script creates the user, database and tables.
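As a rough illustration of that existence check, the logic amounts to something like the following (a minimal PHP sketch; the variable names are hypothetical and this is not the actual setup code):

<?php
// Hypothetical sketch of the setup existence check -- not the actual script.
$link = @mysqli_connect("localhost", "op_perfaudit", $password);

if ($link && mysqli_select_db($link, "performance_audit")) {
    echo "performance_audit already exists; skipping creation.\n";
} else {
    // Connect with admin credentials and build everything from the SQL file.
    $admin = mysqli_connect("localhost", "root", $adminPassword);
    mysqli_query($admin, "CREATE DATABASE IF NOT EXISTS `performance_audit`");
    mysqli_query($admin, "CREATE USER 'op_perfaudit'@'localhost' IDENTIFIED BY '$password'");
    mysqli_query($admin, "GRANT ALL ON `performance_audit`.* TO 'op_perfaudit'@'localhost'");
    // Table definitions live in /PerformanceAudit/data/performance_audit.database.sql.
}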

Once the database is created, dependency checks for Boomerang and XHProf are executed, along with any installations they require. Depending on the audits desired, the same dependency resolution may also happen for the YSlow and PageSpeed headless audits.

Finally, the .htaccess file is updated so that the Performance Audit Suite header and footer files are included on page load.

Setup Files

Server Setup Script: /PerformanceAudit/bin/server_update.sh

PHP Database Query Script: /PerformanceAudit/bin/check_database.php (being deprecated)

Data Files

Database SQL: /PerformanceAudit/data/performance_audit.database.sql

Database Credentials: /PerformanceAudit/data/db.conf (sketched below)

xhprof PHP extension ini file: /PerformanceAudit/data/php.extension.ini
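For reference, the db.conf credentials file loaded by the setup script might look like the following (a sketch only; the key names are assumptions, not taken from the actual file, though the database and user names match those created above):

# /PerformanceAudit/data/db.conf -- hypothetical key names
DB_HOST="localhost"
DB_NAME="performance_audit"
DB_USER="op_perfaudit"
DB_PASS="changeme"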

Running

To begin a server update, simply run the following commands and answer the prompts.

cd /PerformanceAudit
make install

Architecture

There are two primary actors in the performance audit landscape. They can loosely be considered controllers.

Controllers

The Performance Audit Manager [PerformanceAuditManager]

The Performance Audit Manager acts as the primary interface for auditing. It is responsible for initializing, starting, stopping, saving and retrieving reports from all enabled Auditors. It also has a fairly comprehensive logging and debugging system integrated, in order to keep itself as encapsulated as possible from the Ontraport codebase (this matters because certain functions can be cherry-picked and/or ignored during some audits).

Ideally, all Auditors are interacted with solely through the Manager, but the architecture was designed to be extendable while still attempting to remain decoupled (not always possible without bloat and performance degradation) and maintain security.

Auditors [PerformanceAuditor]

Auditors act as the interfaces between the Performance Audit Manager and profiling resources. They are responsible for performing the basic requirements of an audit, which are:

  • Ensuring that their auditing resource and its dependencies are available, initialized and configured correctly.
  • Modifying the environment and output in order to inject profiling hooks.
  • Starting and stopping their respective auditing resource.
  • Identifying themselves and their metrics in normalized comprehensive reports.
  • Passing reports along to the Performance Audit Manager to be saved to a centralized database.

New Auditors are highly encouraged, but not required, to extend the PerformanceAudit abstract class to gain common functionality, as well as implement the iPerformanceAudit interface to ensure integrity is enforced. A skeleton is sketched below.
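A minimal skeleton of a custom Auditor might look like the following (a sketch under assumptions: the method names are illustrative, inferred from the responsibilities above, and are not confirmed signatures from the codebase):

<?php
// Hypothetical Auditor skeleton -- method names are illustrative only.
class MyProfilerAuditor extends PerformanceAudit implements iPerformanceAudit
{
    // Verify the auditing resource and its dependencies are available.
    public function init()
    {
        if (!extension_loaded("xhprof")) {
            return false; // Dependency missing; this Auditor stays disabled.
        }
        return true;
    }

    // Begin profiling via the underlying resource.
    public function start()
    {
        xhprof_enable();
    }

    // Stop profiling and hold a report for the Manager to normalize and save.
    public function stop()
    {
        $this->report = xhprof_disable();
    }
}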

Models

Performance Metrics [PerformanceMetric]

A Performance Metric is a Model for reports generated by Auditors that need to be normalized and saved to the database.

Performance Metrics are stored in the performance_audit.metrics table, which looks like the following:

CREATE TABLE IF NOT EXISTS `performance_audit`.`metrics` (
    `id` INT(11) NOT NULL AUTO_INCREMENT, 
    `aid` INT(20) NOT NULL, 
    `uri` VARCHAR(255) NOT NULL,
    `name` VARCHAR(255) NOT NULL, 
    `type` TINYINT(2) NOT NULL DEFAULT 0,
    `realm` ENUM("backend", "frontend", "network", "system", "remote") NOT NULL,
    `source` ENUM("xhprof", "xdebug", "boomerang", "yslow", "pagespeed", "browsercache", "network") NOT NULL,
    `created` TIMESTAMP NOT NULL DEFAULT NOW(),
    `start` INT(10) NOT NULL,
    `end` INT(10) NOT NULL,
    `data` TEXT, 
    `counter` INT(3) NOT NULL DEFAULT 0,
    PRIMARY KEY (`id`) 
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1;
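To make the schema concrete, persisting one normalized metric row could look roughly like this (a hedged sketch using mysqli prepared statements; the Manager's actual persistence code may differ):

<?php
// Hypothetical example of saving one Performance Metric row.
$db  = new mysqli("localhost", "op_perfaudit", $password, "performance_audit");
$uri = $_SERVER["REQUEST_URI"];

$stmt = $db->prepare(
    "INSERT INTO metrics (aid, uri, name, type, realm, source, start, end, data)
     VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"
);
$stmt->bind_param(
    "ississiis",
    $accountId,          // aid: the account being sampled
    $uri,
    $metricName,         // e.g. "wall_time"
    $metricType,
    $realm,              // one of the realm ENUM values, e.g. "backend"
    $source,             // one of the source ENUM values, e.g. "xhprof"
    $startTimestamp,
    $endTimestamp,
    $jsonEncodedReport   // raw report payload stored in `data`
);
$stmt->execute();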

Auditing

Auditing begins when a new PerformanceAuditManager instance is initialized and its init() method is called (alternatively, if it has already been set up, you can call start() directly). During initialization, each Auditor is initialized and completes its respective dependency checks. Auditing continues until stop() is called.

The general rule of thumb is to complete as much setup and configuration as possible during the initialization period, so Auditors don't cannibalize their own profiling. A typical lifecycle is sketched below.
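For illustration, the lifecycle described above would look something like this (a minimal sketch; only init(), start() and stop() are named in this document, so everything else here is an assumption):

<?php
// Hypothetical usage of the Manager lifecycle.
$manager = new PerformanceAuditManager();

$manager->init();   // Each enabled Auditor runs its dependency checks here.
$manager->start();  // Profiling begins across all Auditors that passed init.

// ... the code under audit executes ...

$manager->stop();   // Auditors stop and hand their reports to the Manager,
                    // which normalizes and saves them as Performance Metrics.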

Realms

Auditors' Performance Metrics are partly identified by the realms they audit. This allows report criteria to be more granular during filtering.

Back End [performance_audit.metrics.realm[backend]]

These are Performance Metrics that profile the code executing on the server. This includes data such as:

  • CPU
  • Memory usage
  • Time spent processing

Front End [performance_audit.metrics.realm[frontend]]

These are Performance Metrics that profile the code executing on the clients' browser. This includes data such as:

  • Page load time
  • Performance optimization and compression recommendations
  • Bandwidth
  • Browser cache state

Others...

A few other realms haven't been fully fleshed out yet:

  • Network [performance_audit.metrics.realm[network]]
  • System [performance_audit.metrics.realm[system]]
  • Remote [performance_audit.metrics.realm[remote]]

Sources

Auditors' Performance Metrics are also identified by the source that derived the data. A source is also known as an Auditor's "resource": an initialized object representation of the source.

Some of the pre-packaged sources are those listed in the source ENUM above: xhprof, xdebug, boomerang, yslow, pagespeed, browsercache and network.

* Currently the performance_audit.metrics table's source and realm columns are of type ENUM for SQL lookup optimization, but they might change to plain VARCHAR in the future to allow for expansion.


Conditional Audits

Auditing only happens by chance, to allow for randomized sampling while also not kicking our clients in the teeth with a slew of performance audits. A discrete random variable is drawn, and the audit proceeds if it comes up 1.

Currently the probability for each realm is:

Back End Audit Probability: 1/100 chance

Front End Audit Probability: 1/25 chance

If auditing does occur, there is an additional throttle on how many audits can happen per account. This also allows for a more diverse sampling group.

Current Per Account Audit Threshold: 15
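Put together, the chance conditional and the per-account throttle amount to something like the following (a hedged sketch; the function and constant names are hypothetical, and only the probabilities and the threshold of 15 come from this document):

<?php
// Hypothetical sketch of the conditional audit gate.
define("BACKEND_AUDIT_CHANCE", 100);   // 1/100 chance
define("FRONTEND_AUDIT_CHANCE", 25);   // 1/25 chance
define("AUDITS_PER_ACCOUNT_THRESHOLD", 15);

function shouldAudit($chance, $auditsForAccount)
{
    // Discrete uniform draw: proceed only when it comes up 1.
    if (mt_rand(1, $chance) !== 1) {
        return false;
    }
    // Per-account throttle keeps the sampling group diverse.
    return $auditsForAccount < AUDITS_PER_ACCOUNT_THRESHOLD;
}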

* Future plans are to create a means for individual Auditors to define what percentage of the total audits they would like to execute.


Audit Workflow - Entry Point

The Performance Audit Suite injects itself into the environment through the PHP auto_prepend_file and auto_append_file directives that were added to the .htaccess file during setup. These allow a PHP header and footer script to wrap the index.php file and start profiling as close to the entry point as possible. This also lets us open an output buffer and capture all output prior to render, so that individual Auditors can manipulate it and inject additional front end and network auditing hooks.
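The relevant .htaccess additions would look roughly like this (a sketch assuming mod_php, where the php_value directive is available; the header and footer file names are assumptions):

# Hypothetical .htaccess additions made during setup.
php_value auto_prepend_file /PerformanceAudit/header.php
php_value auto_append_file  /PerformanceAudit/footer.php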

The conditional initialization happens as close to the entry point as possible. If the chance conditional passes, or the __oppa flag is found in the request headers, then the PERFORMANCE_AUDIT_MODE constant is set and auditing begins.

A new PerformanceAuditManager instance is initialized and all subsequent Auditors are initialized. Auditors may also utilize the __oppa request flag to identify when ingress requests are targeted at them. Typically this check happens during an Auditor's initialization; the Auditor can then save the data as a Performance Metric and exit the script quickly to avoid overhead.

Once all Auditors are initialized, normal bootstrapping occurs. We unfortunately can't begin auditing until we have an AccountHandle to determine whether the audits-per-account threshold has been met. Luckily, most of the Back End Auditors are extensions compiled directly into PHP and can thus retrieve load-time stats retroactively, after the fact. Because of this, the init() method of the PerformanceAuditManager is called in the footer script.
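Sketched end to end, the header and footer scripts behave roughly as follows (hypothetical file contents; only __oppa, PERFORMANCE_AUDIT_MODE and the deferred init() come from this document, and the $_SERVER key for the flag is a guess):

<?php
// header.php -- hypothetical sketch of the prepended entry-point script.
$flagged = isset($_SERVER["HTTP___OPPA"]); // __oppa found in request headers

if ($flagged || mt_rand(1, 100) === 1) {
    define("PERFORMANCE_AUDIT_MODE", true);
    ob_start(); // Capture all output so Auditors can inject hooks pre-render.
}

// footer.php -- hypothetical sketch of the appended script.
if (defined("PERFORMANCE_AUDIT_MODE")) {
    $manager = new PerformanceAuditManager();
    $manager->init(); // Deferred here so an AccountHandle is available.
    $manager->stop(); // Back End Auditors pull load-time stats retroactively.
    echo ob_get_clean();
}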


Reporting

Reporting is done via a browser GUI located at /PerformanceAudit/reports/index.php.

On the initial load, you are presented with several options for selecting criteria, such as a time range and the realms and sources for which audits are available.

Metrics

Some stats currently available are:

  • Counts of available Auditors and the reports they accumulated over the specified time range.
  • Averages of load times derived from Performance Metrics.
  • Usage and distribution graphs that detail where load is heaviest.
  • Browser cache states across the time range.
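As an example of the kind of query behind these stats, average duration per source over a time range, filtered by realm, might be computed like this (a hedged sketch against the metrics schema shown earlier; the reporting GUI's actual queries are not documented here):

<?php
// Hypothetical reporting query: average duration per source over a range.
$db = new mysqli("localhost", "op_perfaudit", $password, "performance_audit");

$stmt = $db->prepare(
    "SELECT source, AVG(`end` - `start`) AS avg_duration, COUNT(*) AS reports
     FROM metrics
     WHERE realm = ? AND created BETWEEN ? AND ?
     GROUP BY source"
);
$stmt->bind_param("sss", $realm, $rangeStart, $rangeEnd);
$stmt->execute();
$result = $stmt->get_result();

while ($row = $result->fetch_assoc()) {
    printf("%s: %.2fs average over %d reports\n",
        $row["source"], $row["avg_duration"], $row["reports"]);
}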

Components

Stats are displayed through various Dashboards which contain Widgets.

Dashboards

These components are generally modules that render stats in a specific form, such as list stats, graph stats, recommendation stats, etc. A Dashboard is generally responsible for managing Widgets and the data they use to formulate stats.

Widgets

These components are responsible for the actual number crunching for individual stats and for rendering them to the page. Each graph, list stat, etc. is its own Widget, managed by a Dashboard.
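The relationship might be sketched like this (hypothetical class shapes; the real component APIs aren't documented here):

<?php
// Hypothetical sketch of the Dashboard/Widget relationship.
abstract class Widget
{
    // Each Widget crunches the numbers for one stat and renders itself.
    abstract public function render(array $metrics);
}

class Dashboard
{
    private $widgets = array();

    public function addWidget(Widget $widget)
    {
        $this->widgets[] = $widget;
    }

    // The Dashboard manages its Widgets and the data they consume.
    public function render(array $metrics)
    {
        foreach ($this->widgets as $widget) {
            $widget->render($metrics);
        }
    }
}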

Rendering

Results are rendered with a few of the following Dashboards and Widgets:


Future Todos...
