Practical Assessment, Research & Evaluation, Vol 14, No 13 Page 7
Randolph, Dissertation Literature Review
careful, accurate records must be kept of the date of each
search, the databases searched, the key words and key
word combinations used, and the number of records
resulting from each search.
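To illustrate, a search log of this kind can be kept as a simple spreadsheet or CSV file. The field names and example rows below are merely one possible layout, not a prescribed standard:

```python
import csv
import io

# Hypothetical search-log layout: one row per database query.
# Dates, databases, and record counts here are illustrative only.
fields = ["date", "database", "keywords", "records_returned"]
log = [
    {"date": "2009-03-01", "database": "ERIC",
     "keywords": "literature review AND dissertation",
     "records_returned": 112},
    {"date": "2009-03-01", "database": "PsycINFO",
     "keywords": "literature review AND dissertation",
     "records_returned": 87},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(log)
csv_text = buf.getvalue()
```

Whatever layout is used, the point is that every search is reconstructible: another person could rerun the same query against the same database and compare the number of records returned.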
In my experience, electronic searches turn up only about
10% of the articles that will make up an exhaustive
review. There are several approaches to locating the
remaining 90%. The most effective method
may be to search the references of the articles that were
retrieved, determine which of those seem relevant, find
those, read their references, and repeat the process until
a point of saturation is reached—a point where no new
relevant articles come to light.
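This reference-chasing procedure is, in effect, a traversal of the citation graph. A rough sketch in Python follows; the helper functions `get_references` and `is_relevant` are assumptions standing in for the reviewer's manual work of reading reference lists and judging relevance:

```python
def snowball(seed_articles, get_references, is_relevant):
    """Follow the references of relevant articles, and the references
    of those references, until no new relevant articles appear
    (the point of saturation)."""
    relevant = set(seed_articles)
    frontier = list(seed_articles)
    while frontier:  # an empty frontier signals saturation
        article = frontier.pop()
        for ref in get_references(article):
            if ref not in relevant and is_relevant(ref):
                relevant.add(ref)
                frontier.append(ref)
    return relevant
```

The loop terminates precisely when an entire pass over the frontier yields no new relevant articles, which is the saturation criterion described above.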
When electronic and reference searching is exhausted,
the reviewer is advised to share the list of references with
colleagues and experts in the field to determine if they
detect any missing articles. Sending a query to the main
Listserv of experts in the relevant field, asking them to
identify missing articles, often yields additional
references. It is also advisable to share
the final list of potentially relevant articles with
dissertation supervisors and reviewers, as they, too, may
be aware of additional relevant literature.
The data collection process can stop when the point of
saturation is reached, and the reviewer has sufficient
evidence to convince readers that everything that can
reasonably be done to identify all relevant articles has
been diligently undertaken. Of course, it is likely that
new articles will come to light after the data collection
period has concluded. However, unless the new article is
critically important, I suggest leaving it out. Otherwise,
the reviewer may have to open the floodgates and restart
the data collection process.
Now the reviewer must devise a system to further cull
the collected articles. For example, to separate the
potentially relevant from the obviously irrelevant
studies, the reviewer might read every word of every
electronic record, just the abstract, just the title, or some
combination. Whichever method is chosen, the reviewer
is advised to accurately document the process
undertaken. When the obviously irrelevant articles have
been identified and discarded, the reviewer can begin to
determine which of the remaining articles will be
included in the literature review. Again, when reliability
is critical, it is common to have two or more other
qualified individuals determine which articles in the new
subset meet the inclusion and exclusion criteria, so that
the level of interrater agreement can be estimated and reported.
(Neuendorf [2002] provides a thorough discussion of
methods to quantify interrater agreement.) When the
reviewer is satisfied that the final subset of relevant
articles is complete, the data evaluation stage can begin.
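As a concrete illustration of quantifying agreement, consider Cohen's kappa, one of the indices Neuendorf (2002) discusses. It corrects the raw proportion of agreement for the agreement two raters would reach by chance alone. The sketch below, including the include/exclude decisions, is purely illustrative:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same set of items."""
    n = len(rater_a)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, from each rater's marginal proportions
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical include/exclude decisions on ten articles.
a = ["inc", "inc", "exc", "inc", "exc", "exc", "inc", "exc", "inc", "inc"]
b = ["inc", "exc", "exc", "inc", "exc", "exc", "inc", "exc", "exc", "inc"]
```

Here the raters agree on eight of ten articles, but because chance agreement is substantial for a two-category decision, kappa is noticeably lower than the raw 80% figure.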
Data evaluation
In the data evaluation stage the reviewer begins to
extract and evaluate the information in the articles that
met the inclusion criteria. To begin, the reviewer devises
a system for extracting data from the articles. The type of
data extracted is determined by the focus and goal of the
review. For example, if the focus is research outcomes
and the goal is integration, one will extract research
outcomes data from each article and decide how to
integrate those outcomes. As the data are evaluated, the
reviewer is advised to document the types of data
extracted and the process used. Because it requires
extensive detail, this documentation is sometimes
recorded using separate coding forms and a coding
book, which are included as dissertation appendices. Or,
the documentation may be included within the main
body of the dissertation.
Whether the procedures for extracting the data are
recorded in a separate coding book or included within
the body of the dissertation, the level of detail should be
sufficient that a second person, in practice or in
principle, could arrive at substantially the same results
by following the recorded procedure.
A coding book is an electronic document, such as a
spreadsheet, or a physical form on which data are
recorded for each article. The coding book documents
the types of data that will be extracted from each article,
the process used to do so, and the actual data. If the
focus of the research is on outcomes, for example, the
coding book should include one or more variables that
track the extraction of research outcomes. The literature
review, of course, will require the extraction of
additional types of data, especially data that identify the
factors that may influence research outcomes. For
example, in experimental research the reviewer's coding
book will record, for each article, the measurement
instruments used; the independent, dependent, and
mediating/moderating variables investigated; the data
analysis procedures; the types of experimental controls;
and other data. Of course, the influencing factors vary
depending on the topic.
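A single entry in such a coding book might be sketched as follows; the field names and values are hypothetical, chosen only to show the kinds of variables involved:

```python
# One hypothetical coding-book entry for an outcomes-focused review of
# experimental research; field names and values are illustrative only.
coding_form = {
    "article_id": "article_001",
    "research_outcome": "treatment outperformed control",
    "measurement_instruments": ["achievement test"],
    "independent_variables": ["type of instruction"],
    "dependent_variables": ["posttest score"],
    "moderating_variables": ["grade level"],
    "data_analysis_procedures": ["ANCOVA"],
    "experimental_controls": ["random assignment", "pretest"],
}
```

One entry of this kind is completed per article; keeping the field set identical across articles is what later allows outcomes to be compared and integrated across studies.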
Examining previous literature reviews, meta-analyses, or
coding books is helpful to understand the scope and
organization of a coding book. A freely-downloadable