Revision as of 14:57, 19 February 2014

Interorganizational SEND Project

Welcome to the Wiki for our project to identify challenges and propose solutions that help Sponsors and CROs work together efficiently in a world where nonclinical data, possibly derived from different organizations, must be aggregated in SEND format for submission to the FDA.


The responsibility for creating the SEND files for a study is often shared across organizations. Clarity is needed on how these responsibilities can be effectively managed.


In 2012:
  • We developed a framework to classify and prioritize scenarios in which data from multiple organizations need to be aggregated.
  • We selected three scenarios and, for each, developed "on-paper" workflows to create and submit SEND datasets.
  • While considering these scenarios, we identified several questions that needed to be answered and collaboratively developed answers. These have been published on the PhUSE SEND User's Group wiki: Handling of SEND in Study Documentation.
  • We started preparing a Scenarios White Paper.
  • We published a poster at the 2013 FDA/PhUSE Computational Science Symposium, which is available as slides: File:I-SEND Poster 2013 as slides.pdf
In 2013:
  • Summaries of actual experience with scenarios


At the 2013 PhUSE/FDA conference we made the following plans:

  • Test each of the workflows presented at the 2013 conference by November, with as close to real processes as possible and with 3 of the CROs participating in this working group.
  • Prepare a white paper/poster for the 2014 FDA/PhUSE conference to share what we learned.

This table is being populated with the names of sponsor companies and the quarter of 2013 in which each will test the scenario with each CRO.

Workflow | Covance | CRL | MPI Research
Scenario 1: CRO Study with SEND Dataset Assembled by CRO | Sanofi Q2-4 | J&J Q4 | Lilly Q2-3 (done)
Scenario 2: A CRO generating SEND with PK/TK data from outside | Sanofi, GSK; Pfizer (Safety Pharm) | BMS 2014 Q1 | Lilly Q2-3 (done)

Scenario 3 will be tested by INDS in Q2 and Roche in Q3.
The common Regulatory Data Flow will be tested by GSK and possibly others.

Questionnaire for Testing Organizations

Question | Sanofi/Covance | CRL | MPI Research (w/ Lilly et al.)
Does your experience match the scenario flow-chart?
  >If not, where does it differ? Which way is better?
  >If x happened instead of y, what was the impact? Is there a recommendation from these differences?
Scenario 1: The SEND datasets were prepared after the report was finalized/approved, as a data-exchange exercise, so the workflow differed greatly. The datasets were created by the CRO; the Sponsor reviewed them and interacted with the CRO to address questions and finalize the datasets.
For the most part.
Scenario 1: The "CRO generates final datasets" step is not generally performed if everything was fine in the previous round.
Scenario 2: An additional path is necessary to split out the case where the Sponsor provides the PK files in SEND format. The handling varies depending on whether the Sponsor adheres to the CRO's conventions on such things as USUBJID formation, representing either a plug-and-play situation or one requiring additional manipulation to incorporate the files into the main package.
Do you have any experience with re-work initiated by a request from the FDA? How long did the process take, and what can you share about the experience?
This was not done, as the datasets were not part of a current submission. Use of the OpenCDISC Validator, with resolution of any issues it uncovered, removed the need for a feedback loop with the FDA.
During the pilot (and before a publicly available validator), there were cases where the sponsor was actively working with the FDA and required re-generation of datasets per some validator findings. In those cases the process went smoothly, wrapping up in a few days. However, that was not "real" production.
What were the challenges and solutions for the scenarios? How do we address study-numbering and animal-numbering conventions, as well as conventions for arms and sets, that have been defined by the Sponsor?
The biggest challenge is mapping trial design and exposure for non-boilerplate cases. This is something that previously did not have to be done, and it has to be learned and then defined by the preparer using an interface.
  >What would you want to do differently in the future? Work through “template” cases for some of the more common designs.
  >What would you need to work out in advance to ensure a smooth process? Expectations that were defined ahead of time would reduce rework and questions about datasets. With expectations set ahead of time (decoupled from the study timeline), everything goes smoothly.
  >Were there any areas that you were unable to resolve? No. No.
  >How long did it take?
It took Covance approximately one week to prepare the datasets; this also depends on the complexity and number of endpoints. Packaging of a submission takes between a few days and a couple of weeks, depending on the complexity of the contents and the number of endpoints.
  >What were the activities that determined the length of the project (critical path)? The complexity of the study design, and whether PK data were part of the datasets, could increase the timeline:

PK data
Complexity of study design
Complexity of lot regimen
Manually collected data (e.g., paper or Excel)
  >Do you have any guides on estimating the effort to do the work? No, not at this time. Starting with a bare-bones tox study (in-life, pathology, clinical pathology), estimate the total work for that. Then think through any add-on sets of endpoints or circumstances that equate to a chunk of work, such as adding PK or an unusual study design. This can give you a rule of thumb for estimating the effort behind individual studies.
  >How many times have you done this? Is this the first experience? Once, with 1 CRO. 20-25 studies for about 10 Sponsors.
  >If you have done this several times, can you describe the learning curve? N/A It takes a few studies to hit a stride. Special cases that pop up (such as a special study design that hasn’t been done in SEND yet) can represent an additional bump as they come along.
  >How long (calendar time, person-hours) did each phase take? (determining what needs to be done, doing the work, confirming/closing the project) A few hours to several days. Estimation: 0-1 hour.
Doing the work/closing: anywhere from a few hours to several days for a reports person, and 0-2 hours for an IT resource to assist.
What tools (software) did you use? CRO tool. Custom-developed add-on to the reporting solution.
Were any domains not provided? Why? Yes; some domains are not collected, while others currently cannot be provided by the CRO. No, but there could be cases where they are not, to cut down on costs for non-GLP studies where the Sponsor just wants key data to subsume into their own system for discovery/mining purposes.
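The Scenario 2 answer above notes that handling depends on whether the Sponsor's PK files follow the CRO's USUBJID conventions. A minimal sketch of that check and normalization is below; the "STUDYID-SUBJID" convention, the function names, and the sample records are illustrative assumptions, not part of the SEND standard or any specific tool.

```python
# Hypothetical sketch: check whether Sponsor-provided PK records follow
# an assumed CRO USUBJID convention ("STUDYID-subject number"), and
# normalize any that do not before merging into the main SEND package.

def follows_cro_convention(usubjid: str, studyid: str) -> bool:
    """True if the USUBJID already uses the assumed 'STUDYID-SUBJID' form."""
    return usubjid.startswith(studyid + "-")

def normalize_usubjid(usubjid: str, studyid: str) -> str:
    """Prefix the study ID when the sponsor supplied a bare subject number."""
    if follows_cro_convention(usubjid, studyid):
        return usubjid  # plug-and-play: no manipulation needed
    return f"{studyid}-{usubjid}"

# Illustrative sponsor PK rows with mixed conventions
pk_rows = [
    {"STUDYID": "ABC123", "USUBJID": "ABC123-0001"},  # already conforms
    {"STUDYID": "ABC123", "USUBJID": "0002"},         # bare subject number
]
for row in pk_rows:
    row["USUBJID"] = normalize_usubjid(row["USUBJID"], row["STUDYID"])
```

Rows that already conform pass through untouched (the plug-and-play case); rows using a sponsor-specific convention are the ones that require the additional manipulation described above.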

Participation Needs

As we continue this work in 2013, we will need participation from individuals with the following areas of expertise, with an interest in, and ideally some experience of, exchanging data or reports with other organizations:

  • Scientists
  • IT Specialists
  • SEND implementers
  • GLP QA auditors

The total group will be limited to 12-20 people, with representation from all of the above areas of expertise.

What is the commitment?

  • Time - will vary widely (minimum of 1 hour every two weeks for team meetings, up to 4-8 hours / month)
  • Expected to contribute, not just be a spectator

If you would like to participate, please contact the co-leads for this group:

  • Debra Oetzman (debra.oetzman@covance.com)
  • William Houser (william.houser@bms.com)


We hold one teleconference every 4 weeks at the same time. One such meeting is 2 pm Eastern, January 6, 2014.

Project Team

  • Pranav Agnihotri, PointCross
  • Kenjie Amemiya, Genentech
  • Kathryn Brown, Sanofi
  • Susan DeHaven, Sanofi
  • Steven Denham, MPI Research
  • Jennifer Feldmann, Instem
  • Jeff Foy, Celgene
  • Liz Graham, MPI Research
  • Geoff Ganem, Genentech
  • Erika Gigante, Amgen
  • Nancy Gongliewski, Novartis
  • William Houser, Bristol-Myers Squibb
  • Laura Kaufman, Preclinical Data Systems
  • Jyotsna Kasturi, Johnson & Johnson
  • Lou Ann Kramer, Eli Lilly
  • Wayne Kung, Genentech
  • Carolyn McGary, Bristol-Myers Squibb
  • Maureen Rossi, Roche
  • Connie Marvel, Bristol-Myers Squibb
  • Shree Nath, PointCross
  • Louis Norton, Covance
  • Debra Oetzman, Covance
  • Kathleene Powers, Pfizer
  • Gerard Randoph, Roche
  • Lynda Sands, GlaxoSmithKline
  • Paul Sidney, Charles River Laboratory
  • Jared Slain, MPI Research
  • Troy Smyrnios, Zoetis
  • Eric Sun, Sanofi
  • Nathan VanSweden, MPI Research
  • Audrey Walker, Charles River Laboratory
  • Heather Williamson, PointCross
  • Peggy Zorn, INDS

Conference Calls and Minutes

Interorganizational SEND Minutes

Last revision by William.houser, 02/19/2014