Talk:WG6 Nonclinical - Standardization Roadmap
Please list items that belong on the roadmap within this Discussion Thread. Prioritization will take place after we document the relevant pieces of the puzzle.
- Safety Pharmacology
- Reproductive Toxicity
- Genetic toxicity
- Animal Rule/Medical Counter Measures
- Special Toxicology studies (in vitro, ex vivo, in vivo - e.g. direct exposure of embryos; please list more...)
- Receptor Screens
- Device combination
- Formulation information (Lot/Study linkage to impurities and excipients)
- Tags (i.e. route of admin, pharm class)
- Biomarker incorporation
- Elements of interconnectivity (See other subgroup)
- Image data (e.g. x-ray of NHP fetal development, of bone or skeletal development)
- Nonclinical efficacy/pharmacology studies - relationship to clinical therapeutic areas prioritized for standardization (?)
- Mitogenicity studies
- Local tolerance studies (e.g. Draize test)
- Data storage
- Which data are stored, and how? Which data are derived by tools? The need for the data guides the rules that govern it. (needs expansion)
- Data context
- Elements of interconnectivity among data, appropriate documentation of trial design, and efficient tools to gather data.
Roadmap process -- Gfrausing 01:50, 24 April 2012 (CDT)
Here's an idea for how to create the roadmap:
- Create a list of elements (nonclinical topics)
- Perform an initial prioritization
- First draft of roadmap
- Look into standards initiatives in the different nonclinical areas
- Potentially reprioritize
- Final draft of roadmap
GLP adherence and electronic data -- Gfrausing 14:26, 25 April 2012 (CDT)
In this section we will place the issues and challenges with standardized electronic data and GLP adherence. The GLP rules state that a study will be finalized with one (and ONLY one) final report. There is nothing in the GLP rules about other deliverables such as submission documentation (e.g. eCTD tables, written summaries, datasets, define file). There are several challenges in the process of creating and finalising the datasets and the define file. One of these is ownership. The GLP rules state that the study director has full responsibility for the study being conducted according to the protocol (study plan) and eventually signs the final report with this statement. An electronic data package (SEND or otherwise) is signed off by the sender, thereby handing over the responsibility for what happens to the data afterwards.
Re: GLP adherence and electronic data -- Gfrausing 14:58, 25 April 2012 (CDT)
There are several challenges to the process of creating and finalising the datasets and the define file. One of these is ownership. The GLP rules state that the study director has the full responsibility for the study being conducted according to the protocol (study plan) and eventually signs the final report with this statement.
- Define file: Should reflect the standard (e.g. SEND), the protocol specifics, and the rules that apply to data collection (such as controlled terminology). Ideally it should be prepared before any data are collected in the study (as it describes the plan); it may, however, be updated throughout the study when the protocol is amended.
- Should there be versioning of the various define files that follow each amendment?
- Should it be part of the protocol, or should the protocol state that a define file and datasets are being created?
- How do we ensure that it adheres to the protocol? How much QC is warranted?
- When can the define file be considered final, and who signs it?
- Datasets: Should be created based on the study specific define file.
- When is the appropriate time to create the submission datasets? (e.g. to avoid having parallel data flows that could require a lot of QC to ensure consistency with the final report)
- Who signs/owns the final submission datasets? (particularly given that both a sponsor and a CRO may contribute data to the same dataset in a study)
- What are the requirements for traceability back to the raw data?
- What are the requirements for archiving of the datasets and the define file?
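The adherence and QC questions above lend themselves to automated checks. As a minimal sketch only (the structures here are deliberate simplifications: a real define file is Define-XML and real SEND datasets are SAS transport files), a script could compare the variables actually present in a dataset against those declared for that domain in the define file:

```python
# Hypothetical, simplified check of dataset contents against the define file.
# Variable names below are illustrative, not taken from any SEND IG domain spec.

def check_against_define(define_spec, dataset_columns):
    """Return (missing, undeclared): variables declared in the define file
    but absent from the dataset, and variables present but undeclared."""
    declared = set(define_spec)
    present = set(dataset_columns)
    missing = sorted(declared - present)
    undeclared = sorted(present - declared)
    return missing, undeclared

# Invented fragment of a body-weight-style domain specification
define_bw = ["STUDYID", "DOMAIN", "USUBJID", "BWTESTCD", "BWORRES", "BWORRESU"]
dataset_bw = ["STUDYID", "DOMAIN", "USUBJID", "BWTESTCD", "BWORRES", "BWNOTE"]

missing, undeclared = check_against_define(define_bw, dataset_bw)
print(missing)      # variables declared but absent from the dataset
print(undeclared)   # variables in the dataset but not declared
```

A check like this does not answer the ownership or sign-off questions, but it suggests how much of the "how much QC is warranted?" question could be handled mechanically rather than by manual review.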
Re: GLP adherence and electronic data -- Gfrausing 07:44, 15 June 2012 (CDT)
Maybe we cannot make any GLP claim on electronic data other than what is considered raw data. With that said, there still must be a sign-off on electronic data relating to the quality and trustworthiness of the data. GLP rules are intended to tell you something about the overall quality of the study conduct and thereby the data generated as part of the study. When it comes to electronic data (and specifically a submission standard), the quality will depend on the methods used to generate them (any standards agreed upon for the data collection, eData captured during the study, any terminology conversions done, manual conversion of data or mapping tools employed, etc.). This quality statement should be part of the signature sign-off for eData.
An electronic data package (SEND or otherwise) is signed off by the sender, thereby handing over the responsibility for what happens to the data afterwards.
This implies that the CRO signs off on the SEND package going from the CRO to the sponsor. One could argue that this should be the study director, depending on when the datasets are created. If the datasets are created as part of the conduct of the study (as dictated by the study plan), then the study director must sign off on the datasets, as he is overall responsible for the study conduct (including the data generated in this period). This means that he signs off on all the information in the datasets: that they are a true reflection of the data generated according to the study plan (GLP rules). Whenever there are in-house activities on CRO studies, the sponsor effectively works as a test site, where the CRO is expected to trust and/or audit that we perform all activities according to GLP, since the study director has to sign off on our data as well. Now, the CRO may not necessarily generate the datasets from our activities, but the study director will have to sign off on the data that is generated (what is reflected in the report).
The sponsor's responsible person during a submission (in our company, the nonclinical project manager (PM)) signs off on the SEND package going from the sponsor to the FDA. This is more of an administrative signature (covering the standard version used and the system used for generating the datasets), but it also reflects the authenticity of the data: that the data are a true reflection of the data generated in the study. However, this is not GLP-related; it only tells the FDA that they can trust the data. Anything to do with GLP (and auditing of procedures) will relate to the active study phase (study director sign-off).
Along the same lines, we would expect a consultant responsible for migrating legacy data to sign off on the datasets as well, simply to reflect that the methods used for generating the data are trustworthy and to ensure that the datasets are a true reflection (to the extent possible) of the data generated in the study.
So far we have worked with something called a 'Data Transfer Report' (alongside the define file and the datasets). This is where we had our signature. It reflected the systems, their versions, Part 11 compliance, and any methods used to create the datasets. I expect that we will combine this with the proposed 'Data Guide' that will accompany submission datasets.
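The content described for such a Data Transfer Report could be captured as a structured record rather than free text. A minimal sketch, where the fields are assumptions derived from the description above (systems and versions, Part 11 compliance, creation methods, signature) and not any established format:

```python
from dataclasses import dataclass, field

@dataclass
class DataTransferReport:
    """Hypothetical record of the report contents described above.
    Field names and example values are illustrative, not a defined standard."""
    study_id: str
    standard: str                 # e.g. "SEND 3.0"
    systems: dict                 # system name -> version used
    part11_compliant: bool        # 21 CFR Part 11 compliance statement
    creation_methods: list        # e.g. mapping tools, manual conversions
    signed_by: str = ""           # left empty until sign-off

report = DataTransferReport(
    study_id="STUDY-001",
    standard="SEND 3.0",
    systems={"DataCaptureSystem": "4.2", "SENDConverter": "1.1"},
    part11_compliant=True,
    creation_methods=["automated mapping", "manual CT conversion"],
)
print(report.standard)
```

Structuring the report this way would make it easier to merge into a future 'Data Guide', since each element is addressable rather than buried in narrative.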
Concept and Context of Data
For Discussion and edits:
Data comes in different forms: raw data, captured data (GLP), operational data, or data in a repository for analysis. Data submitted for regulatory purposes are yet another form. Regulatory data appear to be a final form from some perspectives, but are seen as elemental building blocks by a reviewer. It is evident that "data" can refer to many things, and our discussions require a more nuanced view of its various forms. Defining the steps that data travel in their life cycle (from acquisition through complete regulatory review) will help us in our discussions. We can then consider how the Roadmap effort fits with data in its various forms.
- Explore how the need for each type of data defines the rules that govern it.
- This is preliminary. Please share your ideas here.
Intro for prioritization exercise
So much data; so little time! Imagine the possibilities if only we could harness the power of the data that already exists and continue to add to it in near real time!
Data standardization has been proven to help us move toward these possibilities. To continue moving forward, we are developing a strategy for how to prioritize, maximize, and transform the standardization effort; we are creating a roadmap to show which data types will be modeled next. Resources, study and data complexity, timelines, new approaches, and technological advancements in collection and/or reporting systems are all valid considerations when developing this roadmap, which is why your input is critical to ensuring the ability to move strategy to action.
The vision is to ensure the availability of useful electronic meta- and study data to enable more effective and efficient review of nonclinical data at both the operational and regulatory level. Data must be accessible to further investigate class effects and address regulatory science questions, looking across, within, and between...moving the idea of predictive toxicology forward. Currently, the SEND standard allows for the submission of general tox and carcinogenicity studies in an electronic standardized format; development of certain safety pharmacology and reproductive toxicology standards is well under way, with pilots due to start soon.
Data gaps have been identified for many assessments, including genetic tox, hERG, animal rule/MCM, receptor screens, and device combinations. Additionally, drug metadata could be better utilized so that lot and impurity information could be linked to study data, class, and structure; study metadata surrounding protocol-related information, deviations, regulatory information, and scientific/regulatory interpretation could be structured and linked for better accessibility across or within studies.
Where would you like to see the standardization effort go next? Are there data types that are of particular importance? Is data utility and accessibility more critical to you? Are there advancements in collection systems or science that are on the horizon and should be taken into consideration? Consider the entire expanse of drug development, focus on nonclinical, and tell us...what should come next?
Attached is a listing of stages of drug development, study types, methodologies, endpoints, and collection systems. Feedback from across the industry will be consolidated to develop the strategy and roadmap for moving forward. What is most important to you?
The scale is 1=high priority, 2=medium priority, 3=low priority, and blank is no particular preference; this goes in column G. Rationale for the preference can be provided in column H if desired. Columns B and C also need to be filled in; be as specific as you wish in your response. Remember to keep data utility and accessibility in mind. Finally, if there are nonclinical areas currently not captured, please add them in the space provided.
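Consolidating the survey responses could be as simple as tallying the priority codes per study type. A sketch under the stated scale (1=high, 2=medium, 3=low, blank=no particular preference); the response data below are invented purely for illustration:

```python
from collections import Counter

# Invented example responses: (study type, priority code from column G);
# "" represents a blank, i.e. no particular preference.
responses = [
    ("Genetic toxicity", "1"),
    ("Genetic toxicity", "1"),
    ("Genetic toxicity", "2"),
    ("Safety pharmacology", "1"),
    ("Safety pharmacology", ""),
    ("Device combination", "3"),
]

def consolidate(responses):
    """Tally priority codes per study type, ignoring blank responses."""
    tallies = {}
    for study_type, code in responses:
        if code:  # skip blanks (no preference expressed)
            tallies.setdefault(study_type, Counter())[code] += 1
    return tallies

tallies = consolidate(responses)
print(tallies["Genetic toxicity"]["1"])  # count of 'high priority' votes
```

Simple tallying like this would still leave the column H rationales to be read qualitatively, but it gives a first-pass ranking to seed the prioritization discussion.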
e-Paper Content -- Bob 07:48, 26 September 2012 (CDT)
Questions to Consider
1. What does your audience know about SEND? Perhaps they know/understand CDISC/SDTM clinical side and just have to extrapolate to nonclinical; perhaps data standards are totally new.
Short description of data standards and how SDTM, CDISC, SEND interrelate.
2. What does your audience know about the SEND Consortium and the time/process involved in getting to 3.0, the time/process for developing a new domain?
Short description of SEND history and process for developing new domains
3. What does your audience know about the Janus Data Warehouse and the FDA's initiatives surrounding it?
Provide info about progress of this initiative
4. What does your audience know about PDUFA V and the draft guidance that would allow the FDA to make SEND submission a regulatory requirement?
Provide info about this...time lines and expected outcomes
5. What are your company's plans for submissions using SEND in the next year? 3 years? 5 years?
6. Is your audience siloed (focusing on their area of expertise) or big picture (responsible for nonclinical data and/or the submission as a whole)? Do you have the right audience?
7. Does your company have its own data warehouse?
8. Are there study types on this list which are not in your organization's capability list?
9. Are there other data/IT initiatives related to a given study type which would cause someone to either raise or lower the ranking in the hopes of less interference/more information?
Other Thoughts
We all know and understand the ideas of Garbage In, Garbage Out and One Person's Trash is Another's Treasure. By taking the many and varied processes, systems, lexicons, and formats used in the collection and reporting of drug development data across our industry and collaborating to create a common standard for visualizing it, we remove the idea of garbage; all data can be transformed into treasure. As long as our disparate systems can allow the visualization to be electronic and standardized, we can move forward. By basing both clinical and nonclinical data on the same data tabulation model, the science of predictive toxicology can be furthered, proving hypotheses, moving theory to reality. This is the power of data standardization.
Vast improvements have been realized over the years. We have moved from paper, to static PDFs, to electronic data. The FDA's Janus data warehouse was built to house these bits and bytes; organizations have built warehouses of their own to do the same. Data standards for both clinical and nonclinical data, both based on SDTM, have provided a mechanism to fill those warehouses in a meaningful way. Automation. Structure. Queries that return answers. We are moving toward the fulfillment of "What if?"
Keywords for future development
Considerations for Implementing Standards
Links for Stakeholders
Roadmap: The group's concept of a Roadmap for Nonclinical Data focused on data accessibility and standards. Supportive material will include the results of the Prioritization Survey.