Chapter 10: CDISC Data Review and Analysis

Safety Evaluations with JMP Clinical

Getting Started with JMP Clinical

Safety Analyses

Patient Profiles

Event Narratives

One PROC Away with ADaM Data Sets

Transposing the Basic Data Structure for Data Review

Progression Free Survival with Investigator and Central Assessment

Chapter Summary

As mentioned in the first chapter of this book, those who stand to gain the most from adapting to CDISC standards are the true users of clinical trial data—those who are most involved with evaluating the safety and efficacy of medical products and who are responsible for assessing the results of a trial. Until now, this book has primarily focused on the process of getting clinical trial data into a CDISC format. In this chapter, however, we will focus on the benefits that you can reap when evaluating data that are already in a CDISC format.

There are three primary benefits that the users of CDISC data stand to gain:

1.   Instant familiarity with the structure and format of the data with which they are working

2.   Analysis data that is both “one PROC away” and traceable to the source

3.   Ability to develop and use software and tools built around the standards

The first benefit is especially true for regulatory reviewers who, until recently, have had to learn entirely new data formats from each new NDA or BLA submission to which they were assigned. The second benefit is specific to ADaM data. Sections of this chapter cover some examples of how ADaM data can be used to quickly produce analysis results.

The third benefit to users of CDISC data was, initially, a hypothetical benefit. From the early stages of development, the greatest promise of data standards was the ability to develop software that could automatically perform many of the routine tasks associated with clinical trial data analysis and review. JMP Clinical is one such tool that has been developed to fulfill this promise. The initial sections of this chapter provide a tutorial to JMP Clinical 5.0 (using JMP 11) and show many examples of the tasks that JMP Clinical can perform.

Safety Evaluations with JMP Clinical

JMP has long been used as a data exploration tool either alongside SAS or completely independent of SAS. It is menu-driven, which makes it easy to learn and use for those who are not statistical programmers. And it has a scripting language, which enables advanced users to write code or scripts that can be reused for repetitive tasks. It has seamless integration with SAS, which obviates the need to convert SAS data sets to a different format, and it enables you to incorporate SAS procedures into a JMP analysis. Lastly, JMP has nice interactive graphics that can be both menu-driven (and, therefore, easy to generate) and customizable.

With these traits, JMP is well suited for leveraging the advantages of CDISC data for the purposes of data exploration. As such, JMP Clinical has been developed as an add-on package to JMP. With JMP Clinical installed, a new menu item, appropriately titled Clinical, is added to the JMP menu bar. From this menu item, users have at their disposal a wealth of built-in processes for evaluating patient demographics; producing patient profiles and patient narratives; running routine summaries of adverse events; and using cutting-edge techniques and graphical tools for conducting explorations into a medical product’s safety. Most of all, these tasks can be run automatically because they rely on the presence of SDTM data (and ADaM’s ADSL) and can therefore harness the knowledge of how the data are organized and structured simply by virtue of the fact that JMP Clinical works only on CDISC data.

Getting Started with JMP Clinical

By default, opening JMP Clinical brings you to the Clinical Starter window. The following screenshot displays this window, along with the options available in the Clinical menu item. While the other menu items are standard to JMP, the Clinical menu item is unique to JMP Clinical. A common first step when using JMP Clinical is to register a new study. This can be done by clicking the Add Study from Local Folders button in the Clinical Starter window, and then entering the study name and information about the location of both the SDTM and ADaM data sets. After this information is entered, JMP will know where to find all of the necessary data sets relating to a study. A logical next step would be to select Check Required Variables, shown under Clinical Studies. This is not a CDISC compliance check. Rather, it is primarily a check for the existence of certain variables that JMP Clinical requires for performing automated tasks.

image

JMP Clinical comes pre-installed with CDISC-compliant study data. These data are from a large clinical trial conducted in the 1980s for a cardiovascular drug called nicardipine, which was subsequently approved to treat hypertension. This pivotal trial randomized more than 900 subjects with subarachnoid hemorrhages, approximately half of whom were randomized to receive nicardipine, while the other half received a placebo. In this chapter, this study is used to demonstrate the tools that JMP Clinical can provide, including the required variable check.

Selecting Check Required Variables from the Clinical Starter window creates a new window shown in the following screenshots. Under the General tab, you can choose the study for which you wish to run the report and choose the format of the resulting report file (RTF or PDF).

image

On the Output tab, you can specify the names of the resulting data sets and JMP script files and the folder name for all related outputs.

Running the required variable check creates the specified report file (in RTF or PDF format) and a results window in JMP.

image

The results window is a dynamic report. By selecting, for example, the AE Narrative report (discussed later) and then the No button under Variable Exists in the Data Filter panel, you can view the variables that the report would use if they were available but that are missing from the study data. In this case, the HO domain is missing altogether, so the variables HOTERM and HODECOD will not be available for the narrative reporting. Clicking around this report can provide additional insights regarding the adequacy of your CDISC data for the JMP Clinical built-in reports. Depending on the results of your required variable check, you might need to make some modifications to your SDTM data in order to take full advantage of JMP Clinical. In certain situations, however, you might want to modify the requirements of JMP Clinical. In later sections, we show you how to do this, but not before demonstrating some of the features of JMP Clinical.

Safety Analyses

JMP Clinical comes prepackaged with tools for running both standard and cutting-edge safety analyses, some of which are demonstrated here. Many of the graphical displays of safety data touted by Amit, Heiberger, and Lane (2007) can easily be provided by JMP Clinical.

One common concern when the safety of new drugs and biologics is being assessed is liver toxicity. The severe consequences of drug-induced liver injury (DILI) were first described in print by Dr. Hy Zimmerman in 1978. His observations of the signs of DILI have since been informally referred to as Hy’s Law (Temple, 2001; Reuben, 2004). They are the motivation behind an FDA guidance document titled “Drug-Induced Liver Injury: Premarketing Clinical Evaluation.” Bilirubin (BILI) values >2 times the upper limit of normal (ULN) seen concurrently with severe elevations (for example, >3 times the ULN) of a transaminase such as alanine aminotransferase (ALT) or aspartate transaminase (AST) are now the typical criteria for identifying cases that meet Hy’s Law and signal the potential for severe DILI. Because these criteria operate on standard laboratory data, including the standard liver function tests, a Hy’s Law evaluation can be carried out in the same way from one study to the next.
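
Because the criteria rely only on standard lab variables, a first-pass screening can be expressed in a few lines of code. The following is a minimal sketch, not the logic used by JMP Clinical: it assumes an ADaM lab data set named ADLB with PARAMCD values ALT, AST, and BILI, the upper limit of normal stored in ANRHI, and a SAFFL safety population flag, and it ignores the concurrency (time lag) requirement that a full evaluation would apply.

*----------------------------------------------------------------*;
* Sketch only: flag potential Hys Law subjects from an ADaM lab
* data set. ADLB, the PARAMCD values, and ANRHI as the upper limit
* of normal are assumptions, and the concurrency requirement is
* ignored here for simplicity.
*----------------------------------------------------------------*;
PROC SQL;
  CREATE TABLE PEAK_RATIOS AS
    SELECT USUBJID,
           MAX(CASE WHEN PARAMCD = 'ALT'  THEN AVAL / ANRHI END) AS ALT_XULN,
           MAX(CASE WHEN PARAMCD = 'AST'  THEN AVAL / ANRHI END) AS AST_XULN,
           MAX(CASE WHEN PARAMCD = 'BILI' THEN AVAL / ANRHI END) AS BILI_XULN
      FROM ADAM.ADLB
      WHERE SAFFL = 'Y' AND ANRHI > 0
      GROUP BY USUBJID;

  CREATE TABLE HYS_LAW AS
    SELECT *
      FROM PEAK_RATIOS
      WHERE BILI_XULN > 2 AND (ALT_XULN > 3 OR AST_XULN > 3);
QUIT;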

JMP Clinical has a tool for Hy’s Law screening under the Clinical ▶ Findings menu. In this dialog box, under the General tab, is an option to adjust the time lag for identifying Hy’s Law cases. By default, the qualifying elevations must occur on the same day in order for a subject to be classified as meeting the criteria. This is reasonable if visits are spread over long intervals. However, if visits are more frequent, such as daily or weekly, you might improve detection sensitivity by adjusting the sliding scale to allow for a larger time lag between the elevated liver function tests (LFTs).

image

Under the Filters tab, more customizations are possible.  Although any occurrence of Hy’s Law would, in most study populations, be a reason for concern, you can perform the analysis on a chosen set of BY variables or on a particular subset of subjects that can be filtered using the optional WHERE statement. For example, if the ADSL data set contained a flag for subjects with pre-existing liver disease, then this flag could be used to filter these subjects out of the analysis. By default, the analysis is run on the safety population, identified either by the SAFFL flag in ADSL or, if not present, the SAFETY flag in SUPPDM (using the SDTM reserved name).

image

For our example, we ran the Hy’s Law Screening routine on the entire nicardipine study population without any BY groups using the default zero time lag. The results from this test are as follows:

image

In the scatterplot matrix, the peak value for each relevant lab parameter (AST, ALT, BILI, and alkaline phosphatase, or ALP) is divided by the parameter’s upper limit of normal (ULN). These values are then plotted on a log scale (log base 2) for each lab pair. Different shapes are used for each treatment group (in this case, triangles for placebo subjects and circles for nicardipine subjects). Symbols are blue if the subject’s data meet the Hy’s Law criteria and are red otherwise. The specific criteria are annotated at the top of the scatterplot matrix.

The scatterplot containing BILI on the y-axis and ALT on the x-axis has potential Hy’s Law cases in the upper right quadrant. The horizontal reference line for this quadrant is at 2 for BILI (for two times the ULN), and the vertical reference line is at 3 for ALT (for three times the ULN). Note that these reference lines can be changed on the Output tab of the Hy’s Law Screening dialog box shown previously. Dots that fall within this quadrant are not necessarily classified as patients who meet the criteria because what is being plotted are peak values for each subject, and the peaks for the two lab tests might not have occurred within the allowed time lag of one another. In this way, each subject is represented only once within each scatterplot. Placing your mouse pointer over each point reveals the subject IDs and the lab values divided by the ULN. Two blue dots, signaling nicardipine subjects who meet the criteria, are outliers in each of the plots. These two subjects are 141058 and 282031. Placing your pointer over the dot for subject 282031 reveals that this subject’s peak ALT and BILI values are 104 times and 9 times the ULN, respectively. A contingency table in the right pane displays the number of days that subjects spent meeting the criteria.
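
A single panel of this type of display can also be approximated outside of JMP Clinical with PROC SGPLOT. The sketch below reuses the PEAK_RATIOS table from the earlier Hy’s Law sketch; it is an illustration only and is not the code behind the JMP Clinical display.

*----------------------------------------------------------------*;
* Sketch only: one BILI-versus-ALT panel of the peak-ratio display
* using the PEAK_RATIOS table from the earlier sketch. Log base 2
* axes and reference lines at 3x and 2x ULN mirror the plot
* described above.
*----------------------------------------------------------------*;
PROC SGPLOT DATA = PEAK_RATIOS;
  SCATTER X = ALT_XULN Y = BILI_XULN / DATALABEL = USUBJID;
  REFLINE 3 / AXIS = X;
  REFLINE 2 / AXIS = Y;
  XAXIS TYPE = LOG LOGBASE = 2 LABEL = "Peak ALT (multiple of ULN)";
  YAXIS TYPE = LOG LOGBASE = 2 LABEL = "Peak BILI (multiple of ULN)";
RUN;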

Next to the Scatterplots tab in the center panel is a tab labeled 3D Scatterplot, which displays results for three of the lab parameters at once: ALT, AST, and BILI. Subjects 141058 and 282031 again appear as outliers, this time with respect to each of the three LFTs. Selecting one of the dots enables you to take advantage of the options under Drill Downs in the left pane, which are Profile Subjects, Show Subjects, Cluster Subjects, Create Subject Filter, Graph Time Profiles, and Graph Trellis Plot. Patient profiles are addressed in the next section. Graphing time profiles enables you to view the course of the LFTs over time and to help rule out cases, if any, where the elevations occurred at different times.

image

The nicardipine data are especially rich for identifying DILI cases. For other data sets, however, you might want to explore more liberal criteria, either by changing the allowable time lag, or by reducing the ULN multipliers used in the scatterplots. Going back to the Hy’s Law Screening dialog box, under the Options tab, you can modify the defaults used in the scatterplots, such as the type of log transformation or the reference lines used.

The Hy’s Law Screening routine is just one of many tools available in JMP Clinical for safety evaluations. A number of other tools are available for evaluating and summarizing adverse events, laboratory data, and demographics. Like the Hy’s Law Screening, many of the others are full of customizations that can make your safety evaluation more relevant to your drug or clinical trial under study. In the next section, we focus on the use of patient profiles for the purpose of examining data across domains for subjects of interest.

Patient Profiles

One of the key features of JMP Clinical is its ability to create patient profiles for safety reviews of individual patients. With your study’s CDISC data already registered and the required variable check showing no major problems, you should be able to move ahead with either running profiles for individual patients or, if you want, for the entire study population.

To get started, you must have a subject or set of subjects selected. As shown in the previous section, subjects can be selected from a JMP Clinical graphical display, such as the Hy’s Law Screening plots. Another simple way to select a group of subjects is by having a data table open and at least one subject highlighted in that table. When at least one subject has been selected, select Clinical ▶ Subject Utilities ▶ Profile Subjects from the Clinical Starter window to generate a report for each subject. The following screenshot shows a sample patient profile for one (subject 282031) of the two nicardipine subjects identified as outliers in the previous section.

image

From one profile, clinicians can review all laboratory values, vital signs, ECG results, adverse events, and concomitant medications that a subject experienced during the course of a single study. Baseline information such as demographics and medical history are displayed on the right side. Many times, the automatically generated profiles do not need any alterations in order to get their intended point across. However, for certain studies, or for certain subjects, alterations and customizations might be needed in order to present the salient information in a more readable fashion. Fortunately, the graphical interface provided by JMP gives users the ability to customize the default view in a fairly straightforward point-and-click or click-and-drag manner, both at the domain and term level. These modifications can be saved as a template to be recalled later or provided as the default patient profile view. Users are encouraged to explore these various options as a way to become familiar with the many features that JMP provides.

Whether profiles are being reviewed by a medical monitor during the course of a clinical trial or by a medical reviewer during the course of an NDA review, the clinician might want to add notes regarding his or her findings to each profile. The Review Status area in the upper right panel of each profile provides an area for notes to be added and a mechanism for marking each profile as reviewed.

An added feature for those who want to read or access the profiles without access to JMP (or even a digital device, for that matter) is the ability to export the profiles to a separate file that can be shared electronically or printed as a hard copy. To do this, click the Create Report button on the left panel. Provide a name for the file; select the type (PDF or RTF); adjust the sizes; select a subject or group of subjects for whom you would like profiles created; and indicate whether you would like to add the medical history, review comments, the legend, and any data tables that have been added to the patient profile document. (Note that the data tables will be sent to a separate file.) Getting the profile to fit properly on a page might take some trial and error, but a set of settings that works for one subject can usually be applied to all subjects for whom you want paper reports.

Event Narratives

One of the newer features in JMP Clinical, and possibly one of the most useful, is its ability to automatically create safety event narratives. Safety event narratives are typically labor-intensive, manually prepared write-ups that describe the clinical circumstances surrounding a particular safety event, such as a serious adverse event (SAE). They describe the demographics and medical history of the subject who experienced the event; the subject’s exposure to the clinical trial study drug; the circumstances surrounding the event, such as other AEs, lab values, and vital signs; and, of course, the event itself. Historically, patient profiles or separate data listings have been useful for providing these data to the person responsible for writing the narrative, but they have also required additional man-hours. With standardized data and some clever, artificial-intelligence-like algorithms, these man-hours can be replaced with computer automation. (Be aware, however, that some man-hours are still recommended for quality control and for any customizations that might be necessary beyond what can be performed automatically.)

In order to avoid confusion, keep in mind that in a clinical trial and pharmacovigilance setting, there are typically two types of safety narratives.  SAE narratives are those that typically accompany an SAE report and are usually written by the investigator who witnessed or has first-hand knowledge of the event.  SAE reports have certain required information contained within them and sometimes this information, particularly the information contained within the written SAE narrative, is not captured within the study’s CRFs.

Often for regulatory submissions and Clinical Study Reports (CSRs), additional narratives are needed not only for SAEs, but also for events that lead to study drug discontinuation or possibly deaths that may not have been considered adverse events (for example, deaths in a cancer trial that either occur well beyond the last dose of study drug or were not unexpected due to disease progression).  This latter type is the type that can be generated by JMP Clinical.

Getting back to our example patient profile for subject 282031, notice the Create Narratives button on the left panel. It opens the narrative dialog box that can also be accessed from the menu via Clinical ▶ Events ▶ Adverse Events ▶ AE Narrative. Opening the dialog box from the patient profile prepopulates the Additional Filter to Include Subjects field shown on the Filters tab. Also on this tab, we manually add AESEV="SEVERE" to the Filter to Include Adverse Events field, as shown in the screenshot. This filter is added so that the narrative reports include only the severe events that this subject experienced.

image

As you can see, there are many possible customizations that can be made to the narrative. We won’t go through all of them, but let’s look at one example: the Findings A tab. This tab defaults to the laboratory data that appear in the narrative (although, as can be seen on the drop-down menu, any findings domain can be selected).

image

Deciding which options you like best may take some trial and error.  It may depend, for example, on the number of relevant lab results that appear and whether you consider the patient’s baseline status relevant.  For this example, we have chosen, in the interest of space, not to include a summary of baseline results.  Since this patient was identified originally from the Hy’s Law Screening report, we could set filters to include only the severe or serious hepatic-related events and then filter the labs so that only liver function tests are reported.

After you click Run, the algorithms churn for a few seconds, and up comes an RTF file (presumably opened in Microsoft Word on your machine) with a narrative for each severe AE experienced by subject 282031.  In the screenshot, we have selected the one that is related to the severe liver toxicities identified by the Hy’s Law Screening.

image

As you read through these narratives, your initial reaction may be amazement that they were written by a machine. As you read more closely, you may notice some idiosyncrasies here and there, such as abbreviations that probably should be capitalized but are not (such as frequent references to “sah” for subarachnoid hemorrhage in the medical history) or items that are capitalized that don’t need to be (such as vital sign units written as “BEATS/MIN”), but these are minor criticisms that can easily be ignored or fixed with a simple search-and-replace.

Although you should take caution to ensure that the information contained within the narratives is in fact correct, especially if they are to be included in a regulatory submission, it shouldn’t take long to appreciate the advantages that such a tool can provide compared to the expensive, time-consuming, and error-prone manual method.

One PROC Away with ADaM Data Sets

Since safety analyses and safety data tend to be more standard across studies, without regard to the therapeutic area or drug class, they lend themselves well to the development of a tool that can perform those standard tasks. Efficacy data, however, are rather different. Although the statistical methods used to analyze efficacy data form a fairly well-defined set, the number of endpoints, the number of analysis visits, the different approaches for dealing with missing values, and the different analysis populations create a myriad of ways of summarizing and analyzing efficacy data. This makes standard tool development a bit more complicated.

A basic tenet of ADaM data, as mentioned in Chapter 1, is that the data be both traceable and ready for analysis. To make the data ready for analysis, flags and other variables are often used in a WHERE statement to filter the data down to the records of interest. Then one SAS procedure with this aforementioned WHERE statement (or a WHERE clause) should be all you need to run the analysis.

This brings us to another basic tenet of ADaM data—that it be accompanied by metadata. One particular piece of metadata (the programming statements that can accompany analysis results metadata) can be used to show the exact programming statements necessary to replicate an analysis (or to run an analysis for the first time). If such metadata are not provided, then an analysis plan should suffice, provided it adequately describes how to conduct the analysis.

So, although the many different types of endpoints and analyses associated with efficacy data make tool construction less straightforward, the process for running an analysis after the ADaM data and metadata have been created is rather simple:

1.   Read the documentation (either the analysis results metadata or the analysis plan).

2.   Filter down the data.

3.   Run a statistical procedure on it.

In order to demonstrate this rather straightforward process, consider two analyses of the pain data from our sample data in earlier chapters: a responder analysis by visit and a time-to-pain relief analysis. The CRIT1FL variable in ADEF can be used for the responder analysis by visit. Suppose responder rates between the treatment groups are to be compared by use of Fisher’s exact test. The following code conducts this analysis:

*---------------------------------------------------------------*;
* Use Fisher’s exact test to compare responder rates
* between treatment groups by visit
*---------------------------------------------------------------*;
PROC SORT
  DATA = ADAM.ADEF
  OUT = ADEF;
    BY CRIT1 AVISITN;
RUN;

ODS SELECT CrossTabFreqs FishersExact;
PROC FREQ
  DATA = ADEF;
    BY CRIT1 AVISITN;
    WHERE ITTFL='Y';
    TABLES TRTPN * CRIT1FL / CHISQ;
  TITLE "Fisher's Exact Test on Responder Rates Between Treatment Groups by Visit";
RUN;

Technically, having to sort the data makes the analysis two PROCs away, but most agree that the sort does not count. Alternatively, you could select just one set of records for analysis and therefore not need the BY statement. In this example, the only necessary filtering involves keeping the subjects in the ITT population. Other common scenarios would involve additional filtering, perhaps by using analysis flags (for example, selecting records where ANLzzFL='Y').
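
To make that alternative concrete, here is a minimal sketch of a single-criterion, single-visit analysis. The AVISITN value and the ANL01FL analysis flag in the WHERE statement are assumptions for illustration only; the actual values and flags would come from the study's analysis results metadata or analysis plan.

*----------------------------------------------------------------*;
* Sketch only: a single-visit responder analysis with no BY
* statement. AVISITN=12 is illustrative, and ANL01FL is assumed to
* flag the records for the one response criterion of interest.
*----------------------------------------------------------------*;
ODS SELECT CrossTabFreqs FishersExact;
PROC FREQ
  DATA = ADAM.ADEF;
    WHERE ITTFL = 'Y' AND ANL01FL = 'Y' AND AVISITN = 12;
    TABLES TRTPN * CRIT1FL / FISHER;
  TITLE "Responder Rates Between Treatment Groups at the Selected Visit";
RUN;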

Next, we will look at the time-to-event analysis using PROC LIFETEST. We use the following code for this example.

PROC LIFETEST
  DATA = ADTTE PLOTS=s;
    WHERE ITTFL = 'Y' and PARAMCD = 'TTPNRELF';
      ID USUBJID;
      STRATA TRTPN;
      TIME AVAL*CNSR(1, 2, 3);
      TEST TRTPN;
RUN;

From this one procedure, we can produce Kaplan-Meier estimates, have them plotted for each treatment group, and conduct a log-rank test.

So, unlike analyses of safety data, which tend to be fairly standard across studies and therapeutic areas, efficacy analyses are more specific from one study to the next. This can make tool construction more difficult. However, with the proper metadata (such as programming statements for analysis results metadata) and properly constructed ADaM efficacy data (that are “one PROC away”), running key analyses can be rather simple. In fact, it is even conceivable that a tool could be developed to automate this process. We leave that as a challenge for the reader.

Transposing the Basic Data Structure for Data Review

As discussed in earlier chapters, the BDS data structure is very flexible for a wide range of applications.  But dating back to when it was first being developed and tested as a usable standard for FDA review purposes, there were concerns among some that it didn’t totally support their review needs.  Indeed, the focus of an FDA reviewer does differ from the focus of an analysis programmer.  Sponsor programmers and statisticians tend to be primarily interested in using ADaM data to create the analysis results for which the data are intended.  Secondarily, a sponsor statistician and perhaps some clinicians may wish to use ADaM data for review—peeks at certain patients, subgroups, or other ad hoc collections of records.  The FDA reviewer, while interested in using the ADaM data to reproduce the primary and secondary study results, also has an intense interest in, as their title suggests, reviewing the data.  How a reviewer goes about this depends on a variety of factors—the therapeutic area, the type of data, even individual reviewer preferences.  Obviously, the combinations and permutations are too vast to allow for the development of a data standard that would have broad applicability.  As such, the BDS was adopted with the understanding that, for those various situations where a reviewer may prefer some type of data re-organization, in most cases it could be handled as a simple transposition of the BDS.  While this is still true, there hasn’t been enough done to assist ADaM creators, users, and reviewers alike with this transposition.  There is no standard way to go about it operationally.   Now that therapeutic area standards have been advancing, the time is right to provide some guidance and tools to assist with this standards gap.  In this section, we will review an example of this gap and how a SAS macro can be used to close it.

Progression Free Survival with Investigator and Central Assessment

In oncology clinical trials, especially those for solid tumors, one of the most common efficacy endpoints is progression free survival (PFS).  “Progression” in this sense refers to the growth of the tumors under study and is assessed by standard criteria such as the Response Evaluation Criteria in Solid Tumors (RECIST) guidelines.  Even with standard criteria by which these assessments are made, progression is still often “in the eyes of the beholder.”  That is, it is not uncommon for two different reviewers to have conflicting RECIST assessments.

Many oncology trials are open-label, meaning there is no blinding with respect to the arm to which a patient has been randomized, so the study investigator knows who is receiving the control therapy and who is receiving the experimental therapy. To alleviate concerns of bias, an Independent Review Committee (IRC) is often commissioned to perform a central, blinded assessment of the scans used to determine whether the patients’ tumors have progressed. However, even with an IRC in place, the investigator needs to make his or her own assessment of progression during the course of the trial in order to make treatment decisions. For example, if a patient’s tumors appear to be progressing, then it is often in the best interest of the patient to try a new course of action sooner rather than later. As a result, many trials that have PFS as a primary endpoint base the primary results on the IRC assessment, with the results from the investigator’s assessment considered a sensitivity analysis.

During the course of their review, FDA statisticians and clinicians may be interested in examining the degree of concordance, or discordance, between the investigator and IRC assessments of tumor progression or response. To facilitate this, it is often helpful to have the two PFS assessments for the same patient side by side in different columns of a data set. From a standard time-to-event BDS structure where the two assessments are represented by different parameters, the records could be transposed. For this purpose, a SAS macro called %horizontalize can be used.

Consider the data set shown in the screenshot.  It has the BDS time-to-event (TTE) structure with three parameters being shown for two example patients: PFS (IRC Assessment), PFS (Investigator Assessment), and Overall Survival.

image

Let’s propose that, at a pre-NDA meeting, the FDA’s clinical reviewer requested that the submission include a data set from the pivotal trial containing one record per patient, with separate columns for the two PFS assessments and for overall survival. In order to convey all the relevant information, the censoring indicator, event date, and event description for each of the three parameters should also exist as separate columns.

The %horizontalize macro was created for a situation such as this (this macro can be found on the author pages).  It was developed to transpose parameters, analysis visits, or both from the ADaM basic data structure.  Having a tool that can do this in an automated and predictable way avoids the need for standards for similar situations in specific therapeutic areas.  Such standards would likely require that the variable names meet the SAS transport file limitation of being only 8 characters in length, which can make the interpretation of the variables based on their names rather limited.  The %horizontalize macro creates variables that are longer than 8 characters, but that are named in a systematic way—with a prefix that corresponds to the PARAMCD value and/or the AVISITN value and then with a suffix that corresponds to the name of the variable that was transposed (except for the variable that contains the AVAL value, which has no suffix).

An example of using this macro for the given scenario is shown here:

data adtte;
  set library.adtte;
    where paramcd in('OS', 'PFS', 'PFS_PI');
run;

%horizontalize(indata=adtte,
               outdata=tte,
               xposeby=paramcd,
               carryonvars=adt cnsr evntdesc);

●   The data set is first read in as a WORK data set keeping only the desired parameters for the resulting, transposed data set.

●   The outdata parameter to %horizontalize is where the name of the new transposed data set is specified.

●   The xposeby parameter can take on values of either PARAMCD, AVISITN, PARAMCD AVISITN, or AVISITN PARAMCD. Those are the only four allowable options. This parameter indicates the variables by which the data set should be transposed.

●   The carryonvars parameter is used to specify which variables are to be transposed along with AVAL (or AVALC). This can often include variables like ADT (the default value) and ADY. In the example above, we have included the TTE-specific variables CNSR and EVNTDESC in addition to ADT.

A fourth parameter, sortby, is not shown. It is used to specify the sort order and key variables of the resulting data set. The default value is USUBJID, which works for this example, so it does not need to be changed.

The resulting data set should have one row per combination of the sortby variables and additional columns for every PARAMCD/AVISITN combination (containing the value of AVAL or AVALC), plus additional columns for every PARAMCD/AVISITN/carryonvar combination. The variables AVAL, AVALC, PARAMCD/PARAM and/or AVISITN/AVISIT, and each carryonvar are dropped.

image

As expected, we have one row for each of the two subjects in the example and 3 x (1 + 3) = 12 new columns of transposed data, which now enable a reviewer to compare PFS days, dates, and censoring indicators for the two types of PFS assessments, plus similar information about overall survival.

In this example, there were no visits involved.  However, as stated above, AVISITN can also be added as an xposeby variable.  Doing this adds values of AVISITN as the prefix to variable names (preceded by an underscore if it is the only xposeby variable since variable names cannot start with a numeric value).  This is similar to how PARAMCD values are added as variable name prefixes when PARAMCD is the xposeby variable.
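
For readers curious about the underlying mechanics, the following is a minimal sketch of how a PARAMCD-only transposition of this kind might be carried out with PROC TRANSPOSE. It is not the %horizontalize macro itself, and details such as the exact variable names produced (for example, PFS, PFS_ADT, and PFS_CNSR) are assumptions for illustration.

proc sort data=adtte;
  by usubjid;
run;

* AVAL keyed by PARAMCD with no suffix, producing OS, PFS, and PFS_PI;
proc transpose data=adtte out=t_aval(drop=_name_ _label_);
  by usubjid;
  id paramcd;
  var aval;
run;

* each carried variable keyed by PARAMCD with its name as a suffix;
%macro carry(var);
  proc transpose data=adtte out=t_&var(drop=_name_ _label_) suffix=_&var;
    by usubjid;
    id paramcd;
    var &var;
  run;
%mend carry;
%carry(adt)  %carry(cnsr)  %carry(evntdesc)

* merge the pieces back together, one row per subject;
data tte;
  merge t_aval t_adt t_cnsr t_evntdesc;
  by usubjid;
run;

The SUFFIX= option appends each carried variable’s name to the PARAMCD-derived columns, which loosely mirrors the naming scheme described above.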

Having this standard approach to transposing BDS-structured ADaM data sets (and naming the transposed variables) has several advantages. One is that it obviates the need for the transposed data sets to be created by sponsors. This, in turn, means that the transposed data sets do not have to abide by SAS transport file variable name limitations. It also eliminates the need for these transposed data sets to follow ADaM principles and standards. A validated tool, such as a SAS macro, that implements the standard approach can provide easy access to the transposed data for both sponsors and reviewers. With the development of therapeutic area standards, there is a possibility that the ad hoc transposed data structures created by sponsors for marketing applications could instead be replaced by standard parameter-naming and variable-naming conventions that could be applied to transposed data sets created “on the fly” with review tools.

Chapter Summary

The promise of data standards has long been built on the assumption that they lead to the development of software and other tools to make routine tasks more automatic. JMP Clinical is an example of this predicted evolution. When applied to CDISC data, JMP Clinical can do numerous routine safety analyses very easily. It can do many advanced safety analyses as well. And, like all JMP products, it is especially adept with graphical data displays. Some of these features were demonstrated in this chapter.

The “one-PROC-away” tenet of ADaM data also simplifies data analysis and supports the possibility of tool development. Examples were shown in order to demonstrate the dividends that adapting to ADaM standards pays.

The flexibility of the BDS structure lends itself to the creation of transposed data sets for side-by-side review purposes.  The %horizontalize macro was introduced as a way that standard variable names (and labels) could be applied to data sets that are transposed by parameters, visits, or both.
