BIBLIOMETRIC ANALYSIS OF DISSERTATION RESEARCH IN MANAGEMENT



Introduction
There are various ways to evaluate the quality of scholarly communication. Some have been categorized as objective and others as subjective methods of quality evaluation. Among the objective methods, various emerging bibliometric techniques are increasingly being developed and used to evaluate the quality of scholarly communication. The current science of bibliometrics allows us to detect patterns of authorship, scholarly output and literature usage. Many of these measures are currently used as proxy indicators for research impact or even for the quality of the research produced.
In every field today, new research builds upon previous works, and the process of acknowledging those works in a research paper is the concept of citation. Therefore, the citation count of a research paper can be assumed to be a proxy for its research impact as well as its quality. However, experts advise that metrics based on factors like citation counts must be devised through a robust process and interpreted by discipline experts who appreciate the specifics of their field. Initially, citation analysis of the reference lists of theses and dissertations produced by students was used by specialists in library science to figure out the usage of library resources and to manage those resources on the basis of usage by students enrolled in a given institution. It is the most widely used method of the current science of bibliometrics. Bibliometrics can be defined as the research field in which the quantitative aspects of bibliographic material are studied (Broadus, 1987). It is generally believed that Alan Pritchard (1969) coined the term bibliometrics around the same time that the term scientometrics was used by Nalimov & Mulchenko (1969) concerning the literature of science and technology, while other researchers of the period used the closely related term informetrics to describe the same science of quantitative study of bibliographic data. It is a field that exists at the convergence of the library and information science research disciplines, where bibliographic material is analyzed using various quantitative methods aided by breakthroughs in the information sciences. Bibliometrics has been used to identify research trends in a given field, to discover the structure of a field and to locate missed opportunities in a field. It is also used to identify the most important authors, seminal papers, research institutions, countries and regions that have been contributing significantly to a given field. Numerous software tools are available for bibliometric analysis, and experts opine that each of them has its own strengths and limitations.
There are three major bibliographic databases: Web of Science (WoS), Scopus and Google Scholar. The first two are subscription-based, but Google Scholar is accessible free of cost.
WoS, currently owned by Clarivate (formerly part of Thomson Reuters), is the oldest and most reliable database for performing citation analyses, and numerous studies have been conducted on its characteristics. It contains a number of citation indices. Scopus, owned by Elsevier, is also a subscription-based database launched in 2004; it has been studied by scholars as well (Waltman, 2016). Google Scholar, which is actually a search engine, indexes online scholarly literature. Google Scholar was also launched in 2004, and very little is known about its coverage (Waltman, 2016). Higher education institutions often have access to WoS and/or Scopus through a web interface, which is good enough for performing basic citation analyses at a smaller scale. Advanced citation analyses require direct access without the limiting impact of a web interface. Professional bibliometric centers are said to have direct access, or alternatively they use specialized web-based tools such as InCites and SciVal for WoS and Scopus respectively (Waltman, 2016). Similar mega citation analyses through Google Scholar are difficult; however, an app called Publish or Perish (Harzing, 2010) is sometimes used to conduct such analyses.
Theses represent an important component of overall research activity; therefore, it is important to know the state of the art of theses in any field to see the trends, direction and structure of its research. Consequently, this study evaluated the reference lists of a sample of 30 theses in Management completed over a period of 10 years at various Pakistani universities. The study focused on descriptive citation analysis, citation patterns, quality of citations as measured by Tunon & Brydges' (2005a) objective rubric, and a comparison of mean impact factors in a split-data design to assess quality differences over time as well as between theses produced at private universities and those completed at public universities. As the researcher did not have access to any paid subscription-based databases, he conducted manual citation analysis of the theses' reference lists by relying on sources available online.
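The manual workflow just described, tallying the composition of each reference list and averaging the impact factors of eligible sources, can be sketched in code. The record structure and sample values below are hypothetical illustrations, not the study's actual data, which were compiled by hand from online sources.

```python
from collections import Counter

def composition(citations):
    """Percentage breakdown of a reference list by source type."""
    counts = Counter(c["source_type"] for c in citations)
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

def mean_impact_factor(citations):
    """Mean impact factor over the citations that have one (journals);
    books, theses and websites typically carry no impact factor."""
    ifs = [c["impact_factor"] for c in citations
           if c.get("impact_factor") is not None]
    return round(sum(ifs) / len(ifs), 3) if ifs else 0.0

# Hypothetical reference-list entries for one thesis
refs = [
    {"source_type": "journal", "impact_factor": 2.1},
    {"source_type": "journal", "impact_factor": 0.9},
    {"source_type": "book"},
    {"source_type": "thesis"},
]
```

Computed per thesis, the mean impact factor is what the study later compares across its two five-year clusters.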

Research Problem
For the purposes of this paper, the researcher identified the research problem as follows: What do the reference lists of theses in Management look like in terms of their composition and the quality of the sources they cite? What are some of the trends in their composition, and is the quality of theses' reference lists improving over time?

Research Questions
Later, the researcher converted the research problem into two quantifiable research questions: 1. What are the differences between the quality levels of reference sources used in doctoral dissertations in Management completed at public universities versus those completed at private universities from 2008 to 2017? 2. Is the quality of Management theses' reference lists improving over time, as measured by a comparison of the mean impact factor of citation sources across two clusters of theses completed in 2008-2012 and 2013-2017?

Literature Review
Traditional peer review, the gold standard of research quality evaluation, is a time-consuming process that causes bottlenecks in academic scientific publishing. Therefore, alternative quality evaluation mechanisms have been suggested, discussed and evaluated within the academy itself and by the allied commercial players operating in the arena of scholarly publication. Bibliometric analysis of the references cited in a scholarly publication, though initially developed to manage library acquisition practices, has become another process for evaluating the quality of scientific communication. This appears to be a somewhat objective criterion. Most such bibliometric studies were initially conducted either in the field of library science or in conjunction with library science objectives such as library search skills (Tunon & Brydges, 2005a), library collection management (Mahajan, 2017), library collections (Romic & Mitrovic, 2014), collection development and budgeting (Edwards, 1999), utility of library holdings (Becker & Chiware, 2015), and subscription of relevant journals (Jayaprakash & Kannappanavar, 2015). Of late, however, bibliometric research has also been used to evaluate the quality of student researchers' library search skills after training (Green & Bowser, 2003), the quality of theses (Beile et al., 2003, 2004), the quality of cited works (Tunon & Brydges, 2008; Eckel, 2009), researchers' status and research productivity, as well as that of institutions and countries (Serenko & Bontis, 2004), and citation patterns and composition, including comparisons of related PhD programs (see, e.g., Brydges & Tunon, 2005a; Tunon & Brydges, 2006; Tunon & Brydges, 2009). The adoption of bibliometric techniques by national quality regimes to evaluate scientific communication has given bibliometrics considerable prestige, and those exercises implicitly legitimized it by validating its techniques and science (Gogolin, 2012).
There is a long line of criticism of bibliometric analysis and its validity. As early as 1998, Seglen warned that neither citation rates nor journal impact factors were suitable for the evaluation of research. The study cited many nuanced factors that must be considered before assigning appropriate weight to such seemingly objective criteria as bibliometrics and citation analysis (Seglen, 1998).
Bibliometric data is readily available, and the last decade saw a rise in easy-to-use tools that generate bibliometric indicators for evaluation purposes. This method of comparison, if used properly, also gives a semblance of objectivity. Yet the deployment of bibliometric data for comparing research units such as research groups or individual researchers has its limitations. Bornmann et al. (2008) listed three conditions that must be met before bibliometric data can be considered valid for comparative evaluation between research groups: 1) the scientific impact of the research groups or their publications is examined with specific statistical techniques, namely box plots, Lorenz curves and Gini coefficients, to represent the distribution characteristics of the data; 2) appropriate and relevant reference standards, after thorough critical examination, are used to assess the impact of a research group; 3) statistical analysis of citation counts is done keeping in mind that citations are a function of various factors besides quality (Bornmann et al., 2008). Citation impact becomes even more questionable in light of a 2014 study that investigated the relationship between a research paper's visibility and the number of citations it received (Ebrahim et al., 2014). A case study of two researchers who were avid users of marketing tools showed that an article's visibility greatly influences its citation impact, making citation analysis a questionable tool for assessing research quality (Ebrahim et al., 2014). Another study along similar lines, by Jarwal et al. (2009), evaluated whether three of the most commonly used bibliometric indicators (impact factor, citations per paper, and the Excellence in Research for Australia list of ranked journals) can predict the quality of research papers as determined by experts. The analysis was based on a mock Research Quality Framework exercise conducted by Monash University in 2006-07 (Jarwal et al., 2009). The study noted a significant relationship between all three bibliometric indicators and the international assessors' scores on a five-point quality scale; surprisingly, however, only a small amount of variance could be explained. Some evidence also suggests variation among disciplinary groupings, and the researchers recommend a cautious approach when using these indicators as proxies for research quality (Jarwal et al., 2009).
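Of the distributional techniques Bornmann et al. (2008) recommend, the Gini coefficient is the simplest to compute: it measures how unevenly citations are spread across a group's publications (0 = perfectly even, approaching 1 = concentrated in one paper). A minimal sketch, applied to a hypothetical citation-count distribution:

```python
def gini(values):
    """Gini coefficient of a distribution of nonnegative counts
    (e.g., citations per paper) with a positive total."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Weighted cumulative sum: each value weighted by its rank
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * cum) / (n * total) - (n + 1) / n
```

For n papers the maximum attainable value is (n - 1) / n, reached when a single paper holds all citations.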
In fact, research by two Iranian scholars, Azadeh & Vaez (2013), revealed numerous inaccuracies in the references of PhD theses. Edwards (1999) used citation analysis as a collection development tool in a bibliometric study of polymer science theses and dissertations. Similarly, a seminal longitudinal study of Iowa State University's digital repository was done by Kushkowski et al. (2003). These researchers studied 9,100 citations from 629 theses written between 1973 and 1992 and concluded that graduate students prefer current research regardless of their discipline of study (Kushkowski et al., 2003). They discovered that over half of the citations were less than 10 years old, while more than 87 percent of total citations were to journals and monographs. They further observed that theses were getting lengthier over time and that the average number of citations varied by the discipline groupings used in their study (Kushkowski et al., 2003). Eckel (2009) conducted a citation analysis of 96 master's theses and 24 PhD dissertations completed between 2002 and 2006 at Western Michigan University's College of Engineering and Applied Sciences. After analyzing 2,903 citations from the master's theses and 2,886 citations from the doctoral dissertations, the study concluded, among other things, that the sources used by doctoral candidates tended to be of higher academic and scholarly quality (Eckel, 2009). Walsh & Ridge (2011) used bibliometric methods to evaluate American dissertation research in nanotechnology from 1999 to 2009, reporting that dissertations on nanotechnology had experienced tremendous growth and that those theses were mainly concentrated in the engineering departments of major research universities and were primarily sponsored by federal funding.
There have been numerous bibliometric studies in the neighboring Indian context. Zafrunnisha (2012) analyzed the characteristics of the literature used by writers of PhD theses in psychology at selected universities in the Indian state of Andhra Pradesh. In the business field, in the same year, Kumar & Dora (2012) conducted a citation analysis of 49 doctoral dissertations submitted at the prestigious Indian Institute of Management, Ahmedabad during the period 2004 to 2009. They found that journals were the most cited sources and that the 30 most used journals contributed more than 55 percent of journal citations (Kumar & Dora, 2012). In a case study, Banateppanvar et al. (2013) conducted a citation analysis of doctoral theses in botany submitted to Kuvempu University, India. Jayaprakash & Kannappanavar (2015) conducted a citation analysis of doctoral dissertations in commerce submitted to Goa University. They prepared a ranked list of cited journals and authors and discovered that the majority of the scientists preferred to publish research papers in joint authorship, preferably in journals (Jayaprakash & Kannappanavar, 2015). Mahajan (2017) conducted a citation study of PhD history dissertations at Panjab University, Chandigarh to collect data to support collection management at the university's library. Wani and Zainab (2017) reviewed and examined the eminence of scientometric indicators of scientific research productivity in their article. Using Harzing's Publish or Perish software, Halverson et al. (2012) determined the most frequently cited sources on the subject of blended learning and, through these findings, identified the places, people and trends at the forefront of blended learning scholarship. Five researchers from Spain conducted a bibliometric analysis of research activity in the agronomy category by evaluating quantitative and qualitative features of research indexed in the Web of Science from 1997 to 2011 to establish rankings of countries and research centers (Canas-Guerrero et al., 2013). Mishra et al. (2014) researched the citation patterns of PhD research scholars in English using bibliometric techniques. In the same year, Romic & Mitrovic (2014) used citation checking of PhD dissertation references as a tool for evaluating the library collections of the National and University Library in Zagreb. Borthakur (2015) did a citation analysis of theses and dissertations in chemistry submitted to the LNB library, Dibrugarh University between 2009 and 2013.
A cross-sectional exploratory study was done by Jessica M. Hanna (2015) as part of her doctoral thesis. She compared the quality of research produced in two different types of doctoral program, traditional and online, exclusively in the field of education leadership (Hanna, 2015). Condic (2015) examined and compared the bibliographies of student dissertations and faculty publications and presented various descriptive and analytical data from the study, concluding that faculty consistently used citation sources of higher quality than students did.
The renowned scholar Harzing (2005) evaluated Australian research output in economics and business by employing citation analysis; since then, Harzing (2017) has published extensively on citation analysis following her work with the UK REF and the Australian ERA. Her latest white paper claims that the long, cumbersome peer-review process as currently practiced in the UK's REF could be replaced by a much simpler exercise based on metrics.
There is a series of papers by Tunon & Brydges in which they first created and validated two rubrics, one objective and one subjective. Later, they used these validated rubrics to evaluate and compare theses produced in the field of youth studies across various traditional and non-traditional programs, including education department doctoral students enrolled in distance education programs. To assess 144 youth studies dissertation reference lists, they constructed a subjective and an objective rubric (Brydges & Tunon, 2005). Tunon and Brydges' (2005a) initial study examined the reference lists of 143 doctoral dissertations from the Child and Youth Studies program at Nova Southeastern University to evaluate the quality of students' library search skills. In short, they examined the relative merits of applying citation analysis and evaluative bibliometric techniques to make judgments about candidates' library search skills (Tunon & Brydges, 2005a). They used a validated subjective rubric with five criteria: the breadth of resources, the depth of the literature review as shown through the citing of critical historical and theoretical works, depth as demonstrated through the scholarliness of the citations chosen, currency, and relevancy (Tunon & Brydges, 2005a). In fact, this study, by measuring the quality of doctoral dissertation reference lists from two different programs using rubrics and citation analysis, yielded similar outcomes. The pair did not stop there and followed up with three more studies along the same lines.
In their second study in the same year, Tunon and Brydges (2005) analyzed the entire youth studies program and further compared the results of the local and distance groups within the program while validating both instruments. After these two back-to-back studies, the researchers concluded that citation analysis could be used in conjunction with their subjective or objective rubrics; in other words, they claimed both rubrics to be effective for assessing the quality of reference lists. The following year they conducted what they termed their formative study, in which they used their subjective rubric with descriptive citation analysis to compare 144 theses from non-traditional programs with 59 dissertations from 10 traditional institutions. They concluded that the dissertation reference lists of traditional and non-traditional institutions were more similar than different. In explaining their results, they employed both constructivist and social construction theories, as those constructs provided insights into citation analysis as a tool for evaluating the library search skills of PhD candidates. The study also documented the differences in the composition of cited sources in those reference lists. Their formative study was followed by one more study, which they rightly labelled an expanded study (Tunon & Brydges, 2008), in which they attempted to see whether the results of the formative study were generalizable to other programs. The expanded study covered a total of 452 dissertations produced by distance students in education, adding educational programs from the same institution to the initial 144 dissertation reference lists in youth studies used in the formative study. Furthermore, 34 more recent youth studies dissertations were added to the original 144. Those dissertations came from six different education doctoral programs, each varying in its methodology of delivering instruction. As in the previous study, citations were used as descriptive indicators of document content via a citation analysis method that counted the resources used in dissertation reference lists. The same methodology was used for analyzing the citations as in the researchers' previous studies (Brydges & Tunon, 2005; Tunon & Brydges, 2005, 2006). An algorithm was used to generate an objective score for each reference list. Consequently, the current study uses an adapted version of Tunon & Brydges' objective rubric. The researcher of the current study contacted Johanna Tunon for the algorithm, but because of its proprietary nature it was not shared. However, its scoring logic was available, as it had been placed in public purview in one of their papers. Consequently, the study used its own scoring logic suited to the field of Management, which it intends to assess. The adapted version and the scoring scheme were validated by a panel of local experts in the field, after initial validation by researchers in other faculties, in an iterative process lasting over nine months.

Student engagement (SE) literature, especially in the context of higher education, is an emerging field, and Aparicio et al. (2020) went a step further by employing bibliometric analysis techniques instead of a merely subjective evaluation of bibliographic data dating from 1998 to 2018. Their study reveals the seminal publications and the journals that contributed most by circulating them, the places where the researchers conducted their work, and the groups of scientists that generated the most important strands of SE research (Aparicio et al., 2020).
Ali & Aboelmaged (2020) did something interesting: observing discrepancies in emphasis, methodology and time horizons among previous literature reviews in the field of academic misconduct research in higher education, they decided to conduct a bibliometric analysis of the last twenty years of research in the field. Their painstakingly constructed analysis yielded key insights into the identities of important research clusters, the countries at the forefront of the research and the evolving trajectory of the field over the past twenty years. In doing so, they also identified areas suitable and promising for future research (Ali & Aboelmaged, 2020).
Like the above two studies in specialized fields, four researchers contributed similarly to the field of family firm internationalization, a phenomenon strengthened and made visible by improvements in ICT and the world's trade liberalization regimes. They identified the important journals and authors whose work forms the intellectual basis of the field by doing what they themselves labeled performance analysis: a mix of quantitative measures, like the number of publications, and qualitative indicators of impact, like the number of citations a paper receives (Alayo et al., 2020). The current study could not venture into the realm of performance analysis, since the theses in the sample never received any citations, as they were never published to begin with. However, the above bibliometric studies of three emerging fields, namely student engagement in higher education, academic misconduct in higher education and family firm internationalization, do validate the current research's main idea that a bibliometric study is the right tool to assess research output in the emerging field of management in Pakistan, where research is an emerging phenomenon fueled by a number of economic, social and academic factors and by global phenomena such as neoliberalism, improvements in ICT and trade integration.

Methodology
For the purposes of the current study, the researcher used the mean impact factor score and the overall score from the adapted Tunon & Brydges (2005, 2008) objective rubric as proxies for quality.
For analysis purposes, the researcher adapted Tunon and Brydges' (2005) objective rubric. This adaptation was the product of an iterative inductive process: first, the Tunon and Brydges objective rubric was selected; then the researcher condensed its various categories according to the data requirements. Hence, the original Tunon & Brydges objective rubric was simplified into a smaller set of condensed categories.
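Since the original algorithm is proprietary and the study's adapted weights are not reproduced here, the following is only an illustrative sketch of how an objective rubric score over condensed source categories might be computed. The category names and weights are invented for the example and are not the study's validated values.

```python
# Hypothetical category weights (the study's validated weights differ):
# scholarly sources score high, non-scholarly web sources score zero.
WEIGHTS = {
    "peer_reviewed_journal": 3,
    "scholarly_book": 2,
    "conference_paper": 2,
    "thesis": 1,
    "website": 0,
}

def objective_score(source_types):
    """Sum the category weights of a reference list and normalize
    per citation, yielding a score on a 0-to-3 scale."""
    if not source_types:
        return 0.0
    raw = sum(WEIGHTS.get(s, 0) for s in source_types)
    return round(raw / len(source_types), 2)
```

Normalizing per citation keeps long and short reference lists comparable, which matters when theses vary widely in length.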

Sample of the Study
This part of the research entails the evaluation of reference lists of theses in Management. Initially, all theses stored in the repository under the categories of Business Administration and Management were located. Then the researcher, by reading their abstracts and other descriptors, attempted to categorize them into various fields within Business Administration and Management Sciences. In creating those categories, the researcher took great inspiration from an article published in the British Journal of Management, which reviewed the state of UK management research after the 2001 Research Assessment Exercise (Bessant et al., 2003). The categories evolved, and the researcher periodically sought the opinion of his supervisor to define their contours; initially 69 theses were categorized into the Management category, which was later expanded to some 98 theses.
Finally, based on the said configuration, the researcher decided on purposive, non-random, judgmental sampling, as 36 of the initial 69 theses had been produced by two Islamabad-based universities. Had a random sample been chosen, there was a high probability that it would not have been representative in terms of the multi-dimensional criteria the researcher eventually used; hence the judgmental sample.
Through an iterative process of several rounds over a period of about a year, in which he questioned and re-questioned his assumptions and reconsidered his strategies in light of the evolving data, the researcher eventually arrived at this list. Before starting the purposive sampling, the researcher spent time setting out multi-dimensional criteria to guide his selection. After much debate, reading of the literature and soul-searching, the criteria were developed; the researcher realized that the sample must represent the length and breadth of the Pakistani higher education scene in Business Administration and Management Sciences. Aware that education became a provincial subject after the passage of the landmark 18th Amendment to the Constitution, the researcher attempted to capture inter-provincial differences in the quality of higher education by composing a sample that includes this fault line. The story does not stop there, as intra-provincial differences were equally important, as was the urban-rural divide, or rather the differences between major urban centers and smaller towns. Likewise, another fault line the researcher wanted to account for was the difference between public and private universities. In a similar vein, the researcher wanted to see whether the quality of theses was improving over time, so he decided that the sample should cover a period of at least 10 years.
For a sample size of 30 theses, the researcher managed to incorporate theses from 19 different universities situated across six different cities in Pakistan. The list is also a good mix of private and public universities, as 13 theses came from public universities and the remaining 17 from private universities. Furthermore, these 30 theses were evenly spread over a 10-year period, allowing the researcher to look for overall improvements across five-year clusters. A sample of 30 theses may seem small, but two earlier citation analysis studies of PhD dissertations in the field of education had comparable sample sizes: Beile et al. (2003) assessed the reference lists of 30 education dissertations from three different universities, while Haycock (2004) evaluated 43 dissertations completed at the University of Minnesota's education department in a bibliometric study.

Findings and Discussion
The two research questions and the related findings are discussed below. First: What are the differences between the quality levels of reference sources used in doctoral theses in Management completed at public universities versus those completed at private universities?
To answer the above question, the 30-thesis sample was divided into two groups: theses completed at public universities and theses completed at private universities. Public universities produced 13 theses in the sample, versus 17 doctorates granted by private universities. There was no statistically significant difference in any category of data compared between theses completed at public universities and those completed at private universities. This is an interesting finding: private universities charge a great deal in fees, yet the analysis revealed no difference between theses completed at private universities and those completed at public universities over the 10-year period.
Second: Is the quality of Management theses' reference lists improving over time, as measured by a comparison of the mean impact factor of citation sources across the two clusters 2008-2012 and 2013-2017? To find out whether the overall quality of theses is improving over time, the purposive sample of 30 theses scattered over a ten-year period was divided into two clusters of 15 theses each. The impact factor of each eligible source in the reference lists was calculated, and then the mean impact factor for each thesis was determined. Finally, the total of the mean impact factors of each cluster was compared to see whether there was a statistically significant difference between them. The statistical analysis showed a statistically significant difference between the total mean impact factors of the two clusters, supporting the hypothesis that the overall quality of theses is improving over time.
The effect size would have been calculated only if the value in the Sig. (2-tailed) column had been equal to or less than 0.05 (e.g., .03, .01, .001) and the t value greater than 2.5. As all the sig. values are greater than 0.05, the effect size was not calculated. The independent-samples t-test was performed after checking the assumption of equality of variances using Levene's test. As these values are above the required cut-off of 0.05, one can conclude that there is no statistically significant difference between theses completed at public and at private universities.
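The comparisons above, an independent-samples t-test preceded by Levene's test for equality of variances, follow standard formulas and can be sketched with only the Python standard library. The impact-factor values in the test data are made up for illustration; they are not the study's figures.

```python
import math
from statistics import mean

def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic and its
    degrees of freedom for two groups of observations."""
    na, nb = len(a), len(b)
    va = sum((x - mean(a)) ** 2 for x in a) / (na - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

def levene_statistic(a, b):
    """Levene-type homogeneity-of-variance statistic: for two groups,
    an ANOVA on absolute deviations from each group's mean reduces to
    a t-test on those deviations, with W = t**2."""
    da = [abs(x - mean(a)) for x in a]
    db = [abs(x - mean(b)) for x in b]
    t, _ = t_statistic(da, db)
    return t ** 2
```

The resulting t value would then be compared against the critical value for the computed degrees of freedom to obtain the Sig. (2-tailed) figure reported in the tables.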

Contribution
No such study has ever been conducted in Pakistan that evaluated the reference lists of Management theses for any purpose. This research attempted to see whether the quality of reference lists is improving over time by comparing two clusters of the sample spread over a 10-year period. Furthermore, the research attempted to gauge any differences between the reference lists of theses produced at public universities and those produced at private universities. Business universities' administrators, PhD program managers, curriculum designers, thesis supervisors, evaluators, examiners and, last but not least, the regulators in Pakistan can take stock of this research to improve the teaching of library search skills and especially information search skills among PhD students. These skills ultimately affect the quality of the theses produced, as they are essential for optimal usage of electronic databases of scholarly communication.

ISSN (P): 2788-4821 & ISSN (O): 2788-483X, Volume 1, Issue 2, Page 63-81, December 31, 2020

Practical and Scholarly Significance
No comparable study in Pakistan has evaluated the reference lists of Management theses. In fact, Pakistan's assertive higher education regulatory body has not conducted any descriptive citation analysis or study of citation patterns in PhD theses in Management. This study has shed light on the much-needed librarian-faculty collaboration for strengthening PhD programs in general, and PhD programs in Business Administration and Management Sciences in particular. Moreover, the importance of library search skills for PhD candidates comes to the fore through this paper. Currently, no such training programs are in place; administrators and PhD program managers could strengthen their PhD programs by adding library training and electronic database search training components to their course offerings.
University administrators, PhD program managers, curriculum designers, thesis supervisors, evaluators, examiners and, last but not least, the regulators in Pakistan can draw on this research to improve the teaching of library research skills, especially the competencies required to search scholarly communication in electronic databases.

Limitations and Future Directions
The researchers did not have access to any paid, subscription-based databases. Likewise, although numerous software tools have now been developed to analyze bibliometric data, the researchers did not have access to those applications either. Therefore, most of the tedious work of listing citations, identifying sources and ranking them was done manually. In an age of artificial intelligence and machine reading, this manual process is inherently error-prone and entailed numerous risks. Moreover, the research evaluated only a small sample of 30 theses for reference list analysis; a more comprehensive study could be done in the future to get a more accurate picture. Besides broadening the dataset, researchers could also look into the currency of theses' reference lists, which is another important implicit indicator of quality (Tunon & Brydges, 2009).

Table 1.1 Stats of Experts Who Participated in the Rubric Validation Exercise

Table 1.5 Descriptive Comparison of Theses between Public and Private Sector Universities. In the above table, all the sig. (2-tailed) values are greater than 0.05.