How OCAP Affects AS9100 Companies

The revised SAE AS9104/1A requires AS9100 CBs (Certification Bodies, i.e., Registrars) to use the “Organization Certification Analysis Process (OCAP)” to determine an overall “risk rating” for each registered company. This “risk rating” is then used during audit planning to increase or decrease certification and surveillance audit durations (i.e., the number of audit days required) by ±10% (per AS9104/1A, “Table 9 - Audit Duration Risk Adjustments”) from the baseline audit durations found in SAE AS9104/1A, “Table 8 - Audit Duration Per Site”.
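As a rough sketch of the arithmetic involved (the baseline comes from Table 8, which is not reproduced here, and the Low = -10% / Medium = no change / High = +10% mapping is an assumption for illustration, not a quotation of Table 9):

```python
def adjusted_audit_duration(baseline_days, risk_rating):
    """Apply the +/-10% adjustment (AS9104/1A Table 9) to a Table 8
    baseline duration. The Low = -10% / Medium = 0 / High = +10%
    mapping is assumed here for illustration."""
    adjustment = {"low": -0.10, "medium": 0.0, "high": 0.10}
    return baseline_days * (1 + adjustment[risk_rating.lower()])

# Hypothetical 10-day baseline duration
print(adjusted_audit_duration(10, "High"))  # 11.0 audit days
print(adjusted_audit_duration(10, "Low"))   # 9.0 audit days
```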

What is OCAP

The OCAP is defined in SAE AS9104/1A as:

3.8 Organization Certification Analysis Process (OCAP)
An interactive process between the organization and CB to determine the organization’s AQMS scope and associated certification audit program, and conduct a risk assessment for certification within the ICOP scheme.

The OCAP does NOT impact ISO 9001 certified companies. However, ISO 9001 audit time is subject to numerous considerations (addressed in IAF MD 5, "Determination of Audit Time of Quality, Environmental, and Occupational Health & Safety Management Systems", sec. 8, “Factors for Adjustments of Audit Time of Management Systems (QMS, EMS, and OH&SMS)”).

The OASIS database is scheduled to begin supporting the new SAE AS9104/1A criteria by March 31, 2023. And the new OCAP process is required to be fully implemented by the CBs no later than March 31, 2024.

Information concerning the timeline for OCAP implementation by the registrars is online at: https://oasishelp.iaqg.org/91042022-series-and-91012022-transition/

What is "Audit Duration" vs. "Audit Time"

The OCAP focuses on “Audit Duration” rather than “Audit Time”. What's the difference?

IAF MD 5:2019, "Determination of Audit Time of Quality, Environmental, and Occupational Health & Safety Management Systems" (Issue 4, Version 2), Sec. 1, “Definitions”, states:

1.7. Duration of Management System Certification Audits
Part of audit time (1.6) spent conducting audit activities from the opening meeting to the closing meeting, inclusive.
Note: Audit activities normally include:
• conducting the opening meeting
• performing document review while conducting the audit
• communicating during the audit
• assigning roles and responsibilities of guides and observers
• collecting and verifying information
• generating audit findings
• preparing audit conclusions
• conducting the closing meeting

And for “Audit Time”, IAF MD 5:2019 simply duplicates the definition provided in ISO/IEC 17021-1, “Conformity assessment − Requirements for bodies providing audit and certification of management systems − Part 1: Requirements”, stating:

1.6. Audit Time
Time needed to plan and accomplish a complete and effective audit of the client organization’s management system (ISO/IEC 17021-1).

At first glance, “Audit Duration” may appear to be the same as “Audit Time”. However, IAF MD 5:2019, sec. 2, “Application”, clarifies the difference. In short:

80% on-site (“Audit Duration”) + 20% off-site = 100% of “Audit Time”

The standard states:

2.1. Audit Time
2.1.1. The audit time for all types of audits includes the total time on-site at a client's location (physical or virtual) (1.7) and time spent off-site carrying out planning, document review, interacting with client personnel and report writing.

In other words… IF the total “Audit Time” allocated is 2.5 days (20 hours)… and the CB allocates 16 hours (80%) to audit activities taking place on-site (or virtual)… and the remaining 4 hours (20%) to off-site audit activities (e.g., planning, document review, interacting with client personnel, and report writing), this equals 100% of the “Audit Time”.
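A trivial sketch of that split, assuming 8-hour audit days as in the example above (the function and its names are illustrative only):

```python
def split_audit_time(total_audit_days, on_site_fraction=0.80, hours_per_day=8):
    """Split total 'Audit Time' into on-site 'Audit Duration' and
    off-site activities (planning, document review, report writing)."""
    total_hours = total_audit_days * hours_per_day
    on_site_hours = total_hours * on_site_fraction   # the 'Audit Duration'
    off_site_hours = total_hours - on_site_hours
    return on_site_hours, off_site_hours

print(split_audit_time(2.5))  # (16.0, 4.0) hours, matching the example above
```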

OCAP Requirements

The new OCAP process requires each Certified Organization (i.e., AS9100-series registered company) to submit data to its Certification Body (CB) no more than 90 days prior to the start of the audit. This data will likely be gathered using a questionnaire (which some CBs have already begun distributing, so you should begin planning to address this information ASAP), and the answers provided will be used by the CB to determine an overall “risk rating” (High, Medium, Low) for the certified company. The overall “risk rating” is calculated using AS9104/1A, Table 7 - “Organizational Risk Determination”, shown below.

AS9104/1A, Table 7 - Organizational Risk Determination

Risk Factor | Data Source | LOW (1) | MED (3) | HIGH (6) | Risk Score
Complexity | Figure 2 | Low | Med | High | A
Internal Audit | Table 5 | Low | Med | High | B
On-Time Delivery | Organization | Exceeds | Meets | Below | C
Conformity of Delivered Product or Service (e.g., item escape rate) | Organization | Exceeds | Meets | Below | D
Customer Complaints / Feedback | Organization | Exceeds | Meets | Below | E
AQMS Process Effectiveness from Previous Audit Report | PEARs (lowest value) | 5 | 3-4 | 1-2 | F

Total Risk Score: R = A + B + C + D + E + F
When R = 25 to 36, the risk is HIGH; 12 to 24, the risk is MED; 6 to 11, the risk is LOW.

Example: A = High (6), B = Low (1), C = Low (1), D = Med (3), E = Med (3), and F = Low (1). Therefore R = 6 + 1 + 1 + 3 + 3 + 1 = 15, and the Organizational Risk = Medium.
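For illustration, here is a minimal sketch of the Table 7 calculation. The scores and thresholds come from the table above; the function and its argument names are just illustrative:

```python
# Scores per AS9104/1A, Table 7: Low = 1, Med = 3, High = 6
SCORES = {"low": 1, "med": 3, "high": 6}

def organizational_risk(complexity, internal_audit, otd, conformity,
                        complaints, pear):
    """Sum the six risk-factor scores (A through F) and map the total R
    to the overall risk rating per Table 7."""
    factors = [complexity, internal_audit, otd, conformity, complaints, pear]
    r = sum(SCORES[f.lower()] for f in factors)
    if r >= 25:
        rating = "HIGH"
    elif r >= 12:
        rating = "MEDIUM"
    else:
        rating = "LOW"
    return r, rating

# The example from Table 7: A=High, B=Low, C=Low, D=Med, E=Med, F=Low
print(organizational_risk("High", "Low", "Low", "Med", "Med", "Low"))
# -> (15, 'MEDIUM')
```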

Complexity

While there is really nothing a company can do to alter its complexity (short of major changes in how it operates), the remaining “Risk Factors” are completely within the control of the business. If your organization does change, however, the figure below (essentially the same as the one shown in AS9104/1A, “Figure 2 - Organization complexity risk level”) can be used to re-evaluate your complexity; the risk level for your organization should coincide with the criteria contained in the quadrant that best describes your organization.

[Figure: AS9104/1A, "Figure 2 - Organization complexity risk level"]

Internal Audit

The “Risk Factor” criteria relating to internal audit performance are:

AS9104/1A, Table 5 - Internal audit program risk analysis

High Performing Audit Program (Risk: Low)
• Properly resourced audit program
• Multi-event audit program, audit full QMS annually
• Audit program driven by risk and data
• Effective corrective action program

Average Audit Program (Risk: Medium)
• Limited resources for audit program
• Internal audit is an annual event
• Full QMS is covered annually
• Conforming corrective action program

Low Performing Audit Program (Risk: High)
• Audit program is not properly resourced
• Primarily desktop audits
• Audit program does not prevent major nonconformities from third-party audits
• Full QMS not covered annually
• Ineffective corrective action program

Strategy

  1. To ensure a “Properly resourced audit program”, have more than one qualified internal auditor. If your company has limited resources, then consider outsourcing either a portion or the entirety of the internal audit program to a company that specializes in quality auditing… and can support AS9100 internal auditing activities (e.g., Quality Auditing, LLC).
  2. The criterion relating to a “Multi-event audit program, audit full QMS annually” means that, in order to be classified as low risk, the company must “spread out” its internal audits over the course of a year (e.g., quarterly, monthly). For small companies, this could be difficult, but with only two or three core processes, these could be broken out into two or three separate audits (with separate audit reports).
  3. The criterion relating to an “Audit program driven by risk and data” is not defined in AS9104/1A. For further guidance, see Risk-Based Internal Audits, and use my free “Risk-Based Audit Planning Criteria” form (in MS Word).
  4. The best way to ensure that you have an “effective corrective action program”, is to avoid engaging in Corrective Action "Whac-A-Mole".

On-Time Delivery & Conformity of Delivered Product or Service

AS9100, sec. 5.1.2, “Customer Focus”, already requires (1) On-Time Delivery and (2) Conformity of Delivered Product or Service to have “Quality Objectives” associated with them (as KPIs - Key Performance Indicators).

AS9100, sec. 5.1.2 “Customer Focus”
Top management shall demonstrate leadership and commitment with respect to customer focus by ensuring that:
d. product and service conformity and on-time delivery performance are measured and appropriate action is taken if planned results are not, or will not be, achieved.

These metrics are also required to be monitored and reviewed per AS9100, sections 9.1.2 and 9.3.2.

The problem that I see in many AS9100 companies is management establishing “Quality Objectives” based on the “average” of the KPI data. This approach all but guarantees that they will fail to achieve their “Quality Objectives” roughly half of the time. Referring to AS9104/1A, Table 7 - “Organizational Risk Determination”, shown above, this approach would categorize them as a “High” risk company.

Other Aerospace companies take a conservative approach by establishing “Quality Objectives” that are easily achieved. While this approach guarantees that they will consistently achieve their “Quality Objectives” (and be classified as a “Low” risk company), it calls into question whether any “real” quality improvements are taking place (Ref. AS9100, sec. 10.3, “Continual Improvement”).

Strategy
In order to meet the requirements of AS9100 and realize “value” from these KPIs, management should input this data into a “Process Capability Analysis” to determine what their actual process is capable of achieving. These charts are normally unilateral (one-sided), with an automatically calculated control limit and trend line. This allows management to make better-informed decisions and even identify problems earlier by tracking trends (which may be seasonal).

Use the control limit(s) established by the “Process Capability Charts” to determine what your “quality objectives” should be (for purposes of AS9100). Improvements should be driven by data-based management - rather than emotion-based management (e.g., “pie-in-the-sky” objectives that ignore the reality reflected in the process capability charts).
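As a minimal sketch of that idea, assuming monthly on-time-delivery percentages and a one-sided individuals (XmR) chart (the data and function names below are hypothetical):

```python
from statistics import mean

def otd_control_limit(monthly_otd_pct, sigma_limit=3):
    """Estimate a one-sided lower control limit for on-time delivery (%)
    using an individuals (X) chart and the average moving range."""
    x_bar = mean(monthly_otd_pct)
    # Average moving range between consecutive months
    moving_ranges = [abs(a - b) for a, b in zip(monthly_otd_pct, monthly_otd_pct[1:])]
    mr_bar = mean(moving_ranges)
    # Individuals-chart estimate of sigma: MR-bar / d2, where d2 = 1.128 for n = 2
    sigma_hat = mr_bar / 1.128
    lcl = x_bar - sigma_limit * sigma_hat
    return x_bar, lcl

# Twelve months of hypothetical OTD data
otd = [96.2, 94.8, 95.5, 97.1, 93.9, 95.0, 96.4, 94.2, 95.8, 96.0, 94.5, 95.3]
average, lower_limit = otd_control_limit(otd)
print(f"Process average OTD: {average:.1f}%")
print(f"Lower control limit (candidate quality objective): {lower_limit:.1f}%")
```

The lower control limit, not a hoped-for number, is what the process can realistically be expected to deliver month to month.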

Customer Complaints / Feedback

In addition to the above KPIs, the OCAP includes “Customer Complaints / Feedback” as a risk factor, which effectively requires a “Quality Objective” for it as well.

This data is typically highly subjective (e.g., customer “perceptions”) and is much more difficult to analyze due to numerous factors. For example, some customer complaints are directly related to customer-imposed restrictions on the level of control the company has over its processes (e.g., customer-mandated suppliers and/or service providers, customer-specified material/parts, fixed/frozen or customer approved processes).

In many instances, customers will complain to the company about nonconforming work provided by the customer-mandated suppliers/service providers. And when provided the details, the customer refuses to allow the company to utilize better-performing suppliers/service providers. And IF the company is re-working nonconforming products from customer-mandated suppliers/service providers, which results in late deliveries, the customer still blames the company rather than their own mandated suppliers/service providers! After years of notifying the customer that the nonconformities and late deliveries are due to the poor performance of one or more mandated external providers - the customer remains unwilling to allow a change. Yet, the business continues to be awarded work under these conditions… apparently accepted by the customer.

A similar situation exists where the customer has mandated that the company utilize “fixed/frozen” or customer-approved processes. While the customer complains about nonconformities and/or late deliveries, they will not allow the process to be improved by the company.

In other cases, after reasonable efforts have been made to understand the customer’s requirements, the customer continues to communicate incorrectly/poorly stated requirements – primarily due to the customer’s continued inability to adequately define and/or articulate their requirements.

Many companies attempt to gather information and monitor “customer perceptions” through “Customer-Satisfaction Surveys”. I discourage the use of these due to their subjectivity. However, if a company insists upon utilizing these types of surveys, then I suggest that the surveys and resulting data be handled through the Marketing function rather than the Quality function (because Quality Professionals have little or no training in the development and use of these types of surveys).

With all of this being understood, AS9100, sec. 9.1.2, “Customer Satisfaction” requires:

Information to be monitored and used for the evaluation of customer satisfaction shall include, but is not limited to, product and service conformity, on-time delivery performance, customer complaints, and corrective action requests. The organization shall develop and implement plans for customer satisfaction improvement that address deficiencies identified by these evaluations, and assess the effectiveness of the results.

Based on the concerns described above, the company should have a process for analyzing “customer complaints” to determine their validity prior to being input to the KPI.

The other required “Customer Satisfaction” input is “corrective action requests” received from customers. Again, this is complicated because so many customers, and the AS9100 standard itself, arbitrarily assume that all nonconformities (particularly those delivered to customers, aka “escapes”) are due to “Assignable Cause” (aka “Special Cause”) variations in a process. However, because AS9100, clause 8.5.1.c.2 allows for sampling, it is recognized that there will be an acceptable number of defects delivered (through an assigned AQL or AOQL). Consequently, the company should have a process for analyzing all “corrective action requests” received from customers to determine whether the associated nonconformity was the result of an “Assignable Cause” variation (which can be addressed through corrective action) or the result of a “Common Cause” variation (with NO assignable cause) in the process. Where the nonconformity was due to a “Common Cause” variation, the only way to address it is by changing the process (if possible) or introducing risk controls to mitigate the probability or impact of recurring nonconformities.
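One hedged way to operationalize that screening is a simple p-chart test on escape rates: periods whose rate exceeds the upper control limit are candidates for assignable-cause treatment (i.e., corrective action), while the rest are treated as common-cause variation to be addressed through process change or risk controls. The data and function below are illustrative only:

```python
from math import sqrt

def classify_escapes(escapes, shipped):
    """Flag periods whose escape rate exceeds the p-chart upper control
    limit (assignable-cause candidates); everything else is treated as
    common-cause variation inherent to the process."""
    p_bar = sum(escapes) / sum(shipped)
    verdicts = []
    for d, n in zip(escapes, shipped):
        ucl = p_bar + 3 * sqrt(p_bar * (1 - p_bar) / n)
        verdicts.append("assignable-cause candidate" if d / n > ucl else "common cause")
    return p_bar, verdicts

# Hypothetical monthly escapes vs. units shipped
escapes = [2, 1, 3, 0, 9, 2]
shipped = [400, 380, 420, 390, 410, 405]
p_bar, verdicts = classify_escapes(escapes, shipped)
print(f"Average escape rate: {p_bar:.3%}")
print(verdicts)
```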

AQMS Process Effectiveness from Previous Audit Report

AS9100 auditors are required (by SAE AS9101, "Quality Management Systems - Audit Requirements for Aviation, Space, and Defense Organizations") to complete a “Process Effectiveness Assessment Report” (PEAR) for EACH “key” process identified by the customer. The PEARs (AS9101 (Rev. F) Form 3) include a matrix where the auditor selects a number (1-5) as an indication of the effectiveness of the process, based upon whether (summarized below, with a scoring sketch after the list):

  • No nonconformities AND the “Quality Objective(s)” were met (scoring a “5”)
  • One or more nonconformities were identified BUT the “Quality Objective(s)” were met (scoring ≤4)
  • No nonconformities were identified BUT one or more “Quality Objective(s)” were NOT met (scoring ≤4)
  • One or more nonconformities were identified AND one or more “Quality Objective(s)” were NOT met (scoring ≤3)
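As a rough sketch of how those four cases cap the achievable PEAR level (the actual level assigned within that cap is the auditor’s judgment against the AS9101 Form 3 matrix, which is not reproduced here; the function below is illustrative only):

```python
def max_pear_score(nonconformities_found, objectives_met):
    """Highest PEAR effectiveness level achievable under the four cases
    summarized above; the auditor selects the actual level (1-5) within
    this cap using the AS9101 Form 3 matrix."""
    if not nonconformities_found and objectives_met:
        return 5
    if nonconformities_found and not objectives_met:
        return 3
    return 4  # nonconformities but objectives met, or no nonconformities but objectives missed

print(max_pear_score(nonconformities_found=False, objectives_met=True))   # 5
print(max_pear_score(nonconformities_found=True,  objectives_met=False))  # 3
```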

If the terminology in the PEAR is confusing (i.e., “Planned Activities” vs. “Planned Results”), this is explained in AS9101 (Rev. F) sections:

3.7 Planned Activities
The means, methods, and internal requirements by which the organization intends to achieve planned results of a given process to meet customer requirements. Planned activities include conformity to process requirements and maintained documented information.

3.8 Planned Results
The intended performance of a process as determined and measured by the organization. Planned results include product and service conformity and On-time Delivery (OTD) to meet customer requirements, and may include other elements related to the process, as defined by the organization.

As per AS9104/1A, “Table 7 - Organizational Risk Determination”, the PEAR with the lowest score is used to determine this “Risk Factor” (F in the table above).

Strategy
The strategy for earning a low “Risk Factor” in this category is to ensure that ALL “Quality Objectives” (associated with PEARs) are (1) realistic and (2) consistently met. In addition, you should focus your internal audits on the processes associated with PEARs to mitigate the possibility of any nonconformities arising in those areas.