510(k) Premarket Notification

How do I justify substantial equivalence using multiple predicate devices?

When utilizing a multiple predicate (or "split predicate") strategy in a 510(k) submission, where a new device combines key features from two or more different legally marketed devices, what is a comprehensive framework for building a robust substantial equivalence argument? Specifically, how can sponsors effectively de-risk this approach and proactively address common FDA concerns? This involves several key considerations:

1. **Predicate Selection Rationale:** How should a sponsor formally document the justification for using multiple predicates? What level of detail is required to demonstrate that no single predicate device could serve as a suitable basis for comparison, and how can this rationale be presented to clearly establish the context for the entire submission?
2. **Structuring the Comparison:** What are the best practices for organizing the substantial equivalence comparison tables? Is it more effective to compare the subject device to each predicate individually in side-by-side sections, or to create a single, integrated table that attributes specific features to "Predicate A" and "Predicate B"? How can this documentation transparently map the features of the new device to their origins while clearly identifying any new or modified elements resulting from their integration?
3. **Risk-Based Testing Strategy:** How does a multiple predicate strategy influence the risk analysis and subsequent performance testing plan? For instance, with a device combining a handle from one predicate and an operative tip from another, what testing is needed to:
   * Confirm that the "borrowed" features perform equivalently to their original predicate (e.g., grip strength of the handle, sharpness of the tip).
   * Evaluate the safety and effectiveness of the integrated system, focusing on the mechanical integrity, functional performance, and any potential user-related risks at the interface of the combined components.
   * Address any new failure modes or hazards that could arise from the novel combination and that would not exist in either predicate alone.
4. **Addressing "New Questions of Safety or Effectiveness":** The core challenge of a multiple predicate submission is demonstrating that the combination of existing features does not create a new type of device with fundamentally different safety or performance questions. What types of technological differences or changes in the principle of operation are most likely to trigger this concern from FDA reviewers? How can sponsors use bench performance data, and if necessary, usability, animal, or clinical data, to proactively demonstrate that the integrated device remains as safe and effective as the predicate devices?
5. **Pre-Submission (Q-Sub) Strategy:** Given the increased complexity, when is it most advantageous to engage with the FDA through a Pre-Submission (Q-Sub)? What specific questions about the predicate rationale, testing strategy, and overall justification for equivalence are most productive to include in a Q-Sub to gain agency feedback and mitigate the risk of an Additional Information (AI) request or a Not Substantially Equivalent (NSE) decision?

---

*This Q&A was AI-assisted and reviewed for accuracy by Lo H. Khamis.*
Asked by Lo H. Khamis

Answers

✓ Accepted Answer
## Justifying Substantial Equivalence: A Framework for Using Multiple Predicate Devices in a 510(k)

When a new medical device innovates by combining features from two or more existing, legally marketed devices, a single predicate may not be sufficient for a 510(k) submission. In these cases, sponsors can use a multiple predicate strategy to demonstrate substantial equivalence (SE), arguing that the new device is as safe and effective as the predicates whose features it combines. Note that FDA distinguishes this from a "split predicate" approach (relying on one predicate for the intended use and a different predicate for technological characteristics), which the 2014 guidance *The 510(k) Program: Evaluating Substantial Equivalence in Premarket Notifications [510(k)]* describes as inconsistent with the 510(k) regulatory standard.

While powerful, a multiple predicate strategy introduces complexity. The central challenge is proving that integrating existing components does not raise new questions of safety or effectiveness. A successful submission requires a robust, transparent rationale, a clear comparative framework, and a comprehensive testing plan that addresses the integrated system as a whole. This article provides a detailed framework for building a strong multiple predicate 510(k) submission and proactively addressing common FDA concerns.

### Key Points

* **Justification is Paramount:** Start with a clear, documented rationale explaining why no single predicate device is adequate for comparison and why the chosen predicates are appropriate.
* **Integrated Comparison is Best:** A single, integrated comparison table that maps every feature of the new device to its corresponding predicate is more effective than separate, side-by-side comparisons.
* **Focus on the "Newness":** The testing strategy must go beyond verifying individual components. It must rigorously evaluate the safety and performance of the integrated system, focusing on the interfaces and any new risks created by the combination.
* **No New Questions of SE:** The ultimate goal is to demonstrate that the combination of cleared features does not create a new intended use or a different fundamental scientific technology that would raise new questions of safety or effectiveness.
* **Early FDA Engagement is Critical:** Because of the inherent complexity, using the Q-Submission program to obtain FDA feedback on the predicate rationale and testing strategy is a crucial de-risking step.

### 1. Building and Documenting the Predicate Selection Rationale

The foundation of a multiple predicate submission is the justification for using this approach. FDA expects a clear explanation of why a single predicate device was not sufficient. This rationale should not be an afterthought; it should be a formally documented analysis that sets the stage for the entire submission.

A robust rationale should include the following elements:

1. **Summary of the Subject Device:** Briefly describe the new device, its intended use, and its key technological features.
2. **Predicate Search Methodology:** Document the process used to search for a primary predicate, including the databases searched (e.g., FDA's 510(k) database) and the search terms used. This demonstrates due diligence.
3. **Analysis of the "Best" Single Predicate:** Identify the closest single predicate found and explain precisely why it is inadequate on its own. For example: "While Predicate X has the same intended use and energy source, it utilizes a different material for the patient-contacting tip, which has different performance characteristics."
4. **Introduction of Multiple Predicates:** Clearly introduce the selected predicates (e.g., Predicate A and Predicate B).
   For each, specify which key features, intended use aspects, or technological characteristics it contributes to the SE argument.
5. **Final Justification Statement:** Conclude with a clear statement summarizing why the combination of Predicate A and Predicate B provides a more appropriate basis for comparison than any single device.

### 2. Structuring the Substantial Equivalence Comparison Table

Clear documentation is essential for helping the FDA reviewer understand your argument. While sponsors can compare the subject device to each predicate individually, a more effective method is a single, integrated table. This format transparently maps all features of the new device to their origins and highlights any differences.

**Best Practice: The Integrated Comparison Table**

An effective integrated table should include the following columns:

| Feature/Characteristic | Subject Device | Predicate A | Predicate B | Discussion of Differences & Impact on SE |
| :--- | :--- | :--- | :--- | :--- |
| **Intended Use** | [Description] | [Description] | [Description] | The subject device's intended use is identical to Predicate A and does not raise new questions of SE. |
| **Indications for Use** | [Description] | [Description] | [Description] | The indications are a subset of those for Predicate B and do not affect safety or effectiveness. |
| **Technological Characteristics** | | | | |
| *Handle Ergonomics* | [Description] | Same as Subject | N/A | Feature derived from Predicate A. No differences. |
| *Operative Tip Material* | [Description] | [Different Material] | Same as Subject | Feature derived from Predicate B. Performance data (Section X) confirms equivalence. |
| *Control Mechanism* | [Description] | Same as Subject | [Different Mechanism] | Feature derived from Predicate A. No differences. |
| *Energy Source* | [Description] | Same as Subject | Same as Subject | Identical to both predicates. |
| **Performance Data** | | | | |
| *Biocompatibility* | [Test Results] | [Test Results] | [Test Results] | All patient-contacting materials are identical to Predicate B and supported by biocompatibility testing per FDA guidance. |
| *Mechanical Strength* | [Test Results] | [Test Results] | [Test Results] | Integrated testing (Section Y) shows the combined device meets the performance specifications of both predicates. |

This structure forces a clear, feature-by-feature analysis and ensures that any differences, especially those arising from the integration of components, are identified and addressed with supporting data.
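For teams that redraft this table many times during development, one lightweight approach is to keep the rows as structured data and render the table from that single source, so the submission document, the risk file, and the test plan stay aligned. The following is a minimal sketch assuming a Python-based documentation workflow; the feature names, predicate labels, and discussion text are hypothetical placeholders, not content for any real device or an FDA requirement.

```python
# Illustrative sketch only: keep the integrated SE comparison as structured
# data and render it to Markdown, so all drafts use one consistent source.
# All feature names, predicate labels, and discussion strings are hypothetical.
from dataclasses import dataclass


@dataclass
class ComparisonRow:
    feature: str
    subject: str
    predicate_a: str
    predicate_b: str
    discussion: str  # impact on substantial equivalence


rows = [
    ComparisonRow("Intended Use", "[Description]", "[Description]", "[Description]",
                  "Identical to Predicate A; no new questions of SE."),
    ComparisonRow("Handle Ergonomics", "[Description]", "Same as Subject", "N/A",
                  "Feature derived from Predicate A; no differences."),
    ComparisonRow("Operative Tip Material", "[Description]", "[Different Material]", "Same as Subject",
                  "Feature derived from Predicate B; performance data in Section X."),
]


def to_markdown(table_rows):
    """Render the comparison as a Markdown table for the submission draft."""
    header = "| Feature | Subject Device | Predicate A | Predicate B | Discussion |"
    divider = "| --- | --- | --- | --- | --- |"
    body = [
        f"| {r.feature} | {r.subject} | {r.predicate_a} | {r.predicate_b} | {r.discussion} |"
        for r in table_rows
    ]
    return "\n".join([header, divider] + body)


print(to_markdown(rows))
```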
### 3. Developing a Risk-Based Testing Strategy

A multiple predicate strategy necessitates a multi-faceted testing plan that addresses risks at both the component and system levels. The risk analysis must be updated to specifically consider failure modes that could exist only because of the novel combination. The testing plan should be designed to answer three fundamental questions.

#### A. Do the "Borrowed" Features Perform as Intended?

The submission must include performance data demonstrating that the features taken from each predicate continue to perform equivalently in the new device.

* **Example:** If a surgical instrument uses a handle from Predicate A and a tip from Predicate B, testing should confirm that the handle's grip strength and actuation force are equivalent to Predicate A's, and that the tip's sharpness and material composition are equivalent to Predicate B's. This is often done through side-by-side bench testing.

#### B. Does the Integrated System Perform Safely and Effectively?

This is the most critical part of the testing strategy. Data must be generated to evaluate the safety and effectiveness of the complete, integrated system, with a special focus on the interface between the combined components.

* **Example:** For the surgical instrument, testing must evaluate the mechanical integrity of the handle-tip connection. This could include fatigue testing, torque strength testing, and simulated-use testing to ensure the connection does not fail under clinically relevant stress.

#### C. Have New Risks from the Combination Been Mitigated?

The combination of two known components can create entirely new hazards. The risk analysis must identify these potential new failure modes, and the testing plan must address them directly.

* **Example:** If the handle from Predicate A requires a specific cleaning protocol and the tip from Predicate B is made of a material sensitive to those cleaning agents, a new risk of material degradation has been introduced. The testing plan must include validation of the cleaning and sterilization instructions for the combined device to ensure compatibility and prevent device failure.
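To keep these three questions visible as the plan evolves, some teams maintain a simple traceability list that ties each borrowed feature or new interface to its source predicate, the test that covers it, and whether coverage is at the component or system level. The sketch below is illustrative only, assuming the same Python-based workflow as above; every feature, test, and acceptance criterion shown is a hypothetical placeholder rather than a recommendation for any specific device.

```python
# Illustrative sketch only: a traceability structure linking each borrowed
# feature or new interface to its source predicate, covering test, and level,
# so component-level and system-level coverage can be checked at a glance.
# All features, tests, and acceptance criteria below are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class TestItem:
    feature: str
    source: Optional[str]  # originating predicate, or None for new interfaces
    level: str             # "component" or "system"
    test: str
    acceptance_criterion: str


plan = [
    TestItem("Handle grip/actuation force", "Predicate A", "component",
             "Side-by-side bench test vs. Predicate A", "Within +/- 10% of predicate mean"),
    TestItem("Operative tip sharpness", "Predicate B", "component",
             "Side-by-side bench test vs. Predicate B", "Meets Predicate B specification"),
    TestItem("Handle-tip connection", None, "system",
             "Fatigue and torque testing under simulated use", "No failure at worst-case load"),
    TestItem("Cleaning-agent compatibility of tip material", None, "system",
             "Reprocessing validation of the combined device", "No material degradation after N cycles"),
]

# Flag gaps: every new-interface item (no source predicate) should be covered
# at the system level, since neither predicate alone addresses it.
gaps = [t.feature for t in plan if t.source is None and t.level != "system"]
print("System-level items:", [t.feature for t in plan if t.level == "system"])
print("Gaps:", gaps or "none")
```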
### 4. Strategic Considerations and the Role of Q-Submission

Given the increased scrutiny applied to multiple predicate submissions, engaging with the FDA via the Q-Submission program is highly recommended. A pre-submission meeting allows sponsors to get early, non-binding feedback on their strategy, potentially preventing significant delays or a Not Substantially Equivalent (NSE) determination.

Key topics to address in a Q-Sub for a multiple predicate device include:

* **Predicate Rationale:** Present the justification for using multiple predicates and ask, "Does the Agency concur with the sponsor's rationale for using Predicates A and B as the basis for determining substantial equivalence for the subject device?"
* **Comparison Table:** Provide a draft of the integrated comparison table and ask for feedback on its clarity and completeness.
* **Testing Strategy:** Outline the proposed performance testing plan, highlighting the tests designed to assess the integrated system and any new risks. Ask, "Does the Agency agree that the proposed testing plan is adequate to address the potential risks associated with combining features from the selected predicates?"
* **Potential for New Questions of SE:** Proactively ask whether the FDA believes the combination of features raises any new questions of safety or effectiveness that have not been addressed by the proposed plan.

Early alignment with the FDA on these core issues can significantly de-risk the submission process and lead to a more predictable review timeline.

### Key FDA References

- *The 510(k) Program: Evaluating Substantial Equivalence in Premarket Notifications [510(k)]* – FDA guidance on evaluating substantial equivalence, including the use of multiple predicates.
- *Requests for Feedback and Meetings for Medical Device Submissions: The Q-Submission Program* – FDA guidance on requesting feedback and meetings for medical device submissions.
- 21 CFR Part 807, Subpart E – Premarket Notification Procedures (overall framework for 510(k) submissions).

## How tools like Cruxi can help

Navigating a complex 510(k) submission, such as one involving a multiple predicate strategy, requires meticulous organization and documentation. Regulatory intelligence platforms can help teams manage predicate device files, structure comparison tables, and track the development of the rationale and testing evidence. By centralizing all submission-related information, these tools can streamline the creation of a well-organized and defensible 510(k) package.

***

*This article is for general educational purposes only and is not legal, medical, or regulatory advice. For device-specific questions, sponsors should consult qualified experts and consider engaging FDA via the Q-Submission program.*

---

*This answer was AI-assisted and reviewed for accuracy by Lo H. Khamis.*