Peregrine Global Services and the ACBSP Doctoral Assessment Task Force—Creating a Way Forward

During a session break at ACBSP Conference 2018 in Kansas City, Dr. Anthony Negbenebor (Gardner-Webb University), Dr. Jay Hochstetler (Anderson University), and I started a conversation regarding the application of Standard #4 to doctoral-level programs. The issue we discussed was how best to assess doctoral programs, given the limitations of the instruments commonly used to assess undergraduate and graduate programs. New processes and instruments were needed for doctoral-level assessment.

In early 2019, a task force was created by ACBSP to examine this issue and to provide specific options for doctoral program assessment. The purpose of this article is to summarize the work of the task force and present our results.

THE STRATEGIC CONTEXT

Standardized multiple-choice testing is not adequate to evaluate a doctoral-level learner or the academic program. Doctoral programs tend to be mission-driven, and there is significant diversity in how they are designed, marketed, and delivered. Some programs focus on creating terminally degreed faculty, while others focus on developing business and government leaders.

About the only area of commonality across the spectrum of doctoral programs is the dissertation. Even then, some doctoral programs do not have a dissertation but rather a project-based culminating achievement. A more comprehensive assessment approach was needed, one that takes into consideration the diversity of doctoral programs and the nature of their learning outcomes and strategic aims.

The ACBSP Doctoral Assessment Task Force met in early 2019 to consider approaches to doctoral-level assessment. The Task Force included members from institutions offering Ph.D., DBA, and DM programs. The Task Force identified three assessment approaches: critical thinking, soft skills, and standardized dissertation evaluation.

The concept was not to replace the doctoral comprehensive exam process that most schools currently have for their doctoral programs; the comprehensive exam process needs to remain a school function. Rather, the idea was to supplement what the school is already doing with standardized instrumentation that can be used for academic benchmarking and programmatic assessment. Such instrumentation would help satisfy the requirements associated with ACBSP Standard #4. Most importantly, the concept was to provide school officials with options so that they could make the best possible choices for program assessment based on their specific doctoral program.

SOFT SKILL ASSESSMENT

This subgroup included Dr. Jeff Boyce (Indiana Wesleyan University), Dr. Sandy Kolberg (Walden University), Dr. Allyson Gee (Walden University), Dr. Cliff Butler (Capella University), Dr. Tara Peters (Northwood University), Dr. Craig Cleveland (Saint Leo University), and Dr. Sandra Mankins (Gardner-Webb University). The subgroup focused on the instrumentation that would be used to assess soft skills.

The outcome of the subgroup's work was the design considerations for EvaluSkills: Workplace Skills Assessment, an online 360-degree service that Peregrine Global Services began creating in late 2018. The subgroup assisted with the design and conducted the initial beta-testing for a doctoral program beginning in 2019.

EvaluSkills: Workplace Skills Assessment is a soft skills assessment service based on the 360-degree assessments used in the business world but adapted for higher education. The perspectives of peers, supervisors, advisors, mentors, and colleagues are gathered through an online 360-degree assessment process to provide feedback on the individual's behavior and performance from a variety of sources. This process provides an objective and accurate measure of skills essential to success in the workplace, through which learners and faculty gain an in-depth understanding of areas of strength and opportunities for improvement.

Academic programs vary in their learning outcomes related to soft skills, so the EvaluSkills platform offers a menu of nearly 300 assessment items from which administrators can select to create an instrument for programmatic evaluation. For each assessment item, a specific skill is defined and measured with a five-point Likert-type scale corresponding to specific behaviors associated with each level of performance. The use of standardized rubrics for evaluating each assessment item reduces subjectivity, as the rubrics provide evaluators with examples of exceptional, competent, or marginal performance on each skill.

EvaluSkills is used in higher education as a direct measure of learning outcomes. EvaluSkills was designed for pre/post comparison whereby learners are assessed at the start of the program (pre-test) and again at the end of the academic program (post-test). Pre-test/post-test results are compared to understand changes that occurred based on the academic experience.
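To make the pre-test/post-test comparison concrete, the short sketch below shows one way such a comparison could be computed from five-point Likert-type ratings. The skill names, ratings, and simple averaging are hypothetical illustrations only and are not drawn from the EvaluSkills platform itself.

# Illustrative sketch only: comparing pre- and post-program 360-degree
# ratings on a five-point Likert-type scale. Skill names, ratings, and
# the averaging approach are hypothetical, not EvaluSkills' actual method.
from statistics import mean

pre_ratings = {                      # ratings gathered at program start
    "Change Leadership": [3, 2, 3, 3],
    "Communication": [3, 3, 4, 3],
}
post_ratings = {                     # ratings gathered at program end
    "Change Leadership": [4, 4, 5, 4],
    "Communication": [4, 4, 4, 5],
}

for skill in pre_ratings:
    pre_avg, post_avg = mean(pre_ratings[skill]), mean(post_ratings[skill])
    print(f"{skill}: pre {pre_avg:.2f} -> post {post_avg:.2f} "
          f"(change {post_avg - pre_avg:+.2f})")

In practice, the aggregated change scores for each skill, rather than any single learner's results, would inform programmatic assessment.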

Higher education institutions can also use EvaluSkills to help measure the employability (career-readiness) of their graduates, thus satisfying various stakeholder needs. The results could also be used by employers of graduating students so that they can better align job needs with student skills.

Figure 1. An EvaluSkills assessment rubric as presented to an evaluator. Each assessment item, in this example Change Leadership, has a multi-part rubric that includes statements, behaviors, and the scale.

CRITICAL THINKING ASSESSMENT

This subgroup included Dr. Elaine Elder (Colorado Technical University), Dr. Larry Hughes (Northcentral University), Dr. Helen MacLennan (Saint Leo University), and Dr. Anthony Negbenebor (Gardner-Webb University). The subgroup focused on the instrumentation that would be used to evaluate critical thinking and concept application and to provide comparisons.

The outcome of the subgroup's work was the design considerations for a critical thinking assessment service. The learner would be presented with a short case study or scenario categorized by Common Professional Component topic (e.g., Marketing, Strategic Management, Business Ethics, Accounting). The learner would then answer 10 questions related to the case study: six multiple-choice and four open-ended. Each question is categorized according to Bloom's Taxonomy, with three questions at Bloom's levels I/II, three questions at Bloom's levels III/IV, and four questions at Bloom's levels V/VI.

The test bank for the critical thinking assessment includes 20-40 assessment items per topic, where an assessment item is the 10-question case/scenario analysis. Learners would be presented with an assessment item selected at random. The system would grade the multiple-choice questions automatically, while a designated school official (e.g., the course professor) would manually grade the open-ended items. The grader is provided with a recommended response against which to compare the learner's response.
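As an illustration of the structure just described (ten questions per case, six auto-graded multiple-choice and four manually graded open-ended, drawn at random from a topic's test bank), the sketch below shows one possible way to represent and score such an item. The field names, topic labels, and scoring logic are hypothetical and do not reflect Peregrine's actual data model.

# Illustrative sketch only: one possible representation of a critical
# thinking assessment item; not Peregrine's actual data model.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    bloom_band: str           # "I/II", "III/IV", or "V/VI"
    kind: str                 # "multiple_choice" or "open_ended"
    correct_choice: str = ""  # used only for multiple-choice questions

@dataclass
class AssessmentItem:
    topic: str                # a Common Professional Component topic
    questions: List[Question] = field(default_factory=list)  # 10 per item

def draw_item(test_bank: List[AssessmentItem], topic: str) -> AssessmentItem:
    """Randomly select one case/scenario item for the chosen topic."""
    return random.choice([item for item in test_bank if item.topic == topic])

def score_multiple_choice(item: AssessmentItem, learner_choices: List[str]) -> int:
    """Auto-grade the multiple-choice questions; open-ended responses are
    left for a designated school official to score against a recommended
    response."""
    mc = [q for q in item.questions if q.kind == "multiple_choice"]
    return sum(q.correct_choice == choice for q, choice in zip(mc, learner_choices))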

The critical thinking assessment items could be presented to learners by themselves or in conjunction with knowledge-based assessment items. Each adopting program could have a customized assessment instrument based on the program learning outcomes.

Although originally designed for doctoral program assessment, the test bank is also appropriate for undergraduate and master's-level programs because of its organization based on Bloom's Taxonomy: Bloom's levels I/II relate to undergraduate learning, levels III/IV relate to master's-level learning, and levels V/VI tend to relate to doctoral-level understanding. The results would be aggregated based on the academic degree level of the assessed program so that school officials could benchmark accordingly. The critical thinking assessment service will be available for beta-testing by the fall of 2021.

Figure 2. An example of how the learner is presented with the critical thinking assessment item. The learner sees the case study/scenario and answers each of the 10 questions. The learner can review responses before submitting them. Questions 1-3 are written at Bloom's I/II levels, questions 4-6 at Bloom's III/IV levels, and questions 7-10 at Bloom's V/VI levels.

Figure 3. An example of how a scorer (e.g., course professor) is presented with the critical thinking short answer assessment items to be scored. The scorer sees the learner’s response and then evaluates the response based on a scoring rubric. Once the scorer has completed the scoring, the learner receives their assessment results.

DISSERTATION ASSESSMENT

This subgroup included Dr. Ann Saurbier (Walsh College), Dr. Jay Hochstetler (Anderson University), Dr. Alisa Fleming (University of Phoenix), Dr. Bulent Aybar (Southern New Hampshire University), Dr. Melissa Williams (Colorado Technical University), Dr. Matthew Andrews (The International School of Management), Dr. Michael Williams (Thomas Edison State University), and Dr. Shani Carter (Wagner College). The subgroup focused on creating a dissertation evaluation rubric to be used for programmatic evaluation.

Specifically, the goal was not to make all dissertations look alike or to impose a standardized dissertation template (that would be a detractor); rather, the goal was to set the expectation that if a school requires a dissertation, certain standards should be in place. The key concept was to provide an instrument that school officials could use for programmatic assessment, including benchmarking against results from other programs.

Through a series of meetings and reviews, the subgroup developed a dissertation evaluation rubric in 2019 and then conducted a beta-test of the rubric using a double-blind review process of dissertations from different doctoral programs in 2020. The results from these double-blind reviews were compiled and evaluated. The dissertation rubric was finalized in early 2021.

For a program review, it is preferable to have a few reviewers each evaluate multiple dissertations rather than many reviewers evaluate one dissertation each, so that reviewers have a basis for comparison across the program. Reviewers should strive to keep potential biases out of their rubric scoring and remember that the rubric is being used for programmatic evaluation, not for the acceptance of an individual dissertation as part of the doctoral process. The dissertation review will most likely be conducted after the learner has completed the doctoral program.

Participating institutions may send their evaluation results to Peregrine Global Services for aggregation. Results will be categorized into various aggregate pools. Universities will have access to their internal results and to external benchmarking data to set targets for improvement and demonstrate results.
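The sketch below illustrates, in simplified form, how internal rubric results might be aggregated and set alongside an external benchmark pool. The criterion names, scores, and benchmark values are hypothetical and are not actual aggregate-pool data.

# Illustrative sketch only: aggregating dissertation rubric scores for
# programmatic benchmarking. Criterion names, scores, and benchmark
# values are hypothetical, not actual aggregate-pool data.
from statistics import mean

# Each row is one dissertation's rubric scores, keyed by criterion.
program_reviews = [
    {"Problem Statement": 4, "Methodology": 3, "Analysis": 4},
    {"Problem Statement": 3, "Methodology": 4, "Analysis": 3},
    {"Problem Statement": 4, "Methodology": 4, "Analysis": 4},
]

# Hypothetical external benchmark (aggregate-pool averages).
benchmark = {"Problem Statement": 3.6, "Methodology": 3.4, "Analysis": 3.5}

for criterion, external in benchmark.items():
    internal = mean(review[criterion] for review in program_reviews)
    print(f"{criterion}: program {internal:.2f} vs. benchmark {external:.2f} "
          f"(gap {internal - external:+.2f})")

Comparisons of this kind would support target-setting and demonstration of results for Standard #4 rather than judgments about any single dissertation.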

Figure 4. A portion of the dissertation evaluation rubric. Reviewers complete the evaluation in Excel.

THE WAY FORWARD

Doctoral program managers now have options to consider to help satisfy programmatic assessment requirements and to conduct data-driven program reviews. These options include knowledge-based assessment, soft-skill assessment, critical thinking assessment, and dissertation assessment. These options can be used individually or in combination to create a customized solution that aligns best with the mission and program learning outcomes of the doctoral program.


Olin O. Oedekoven, Ph.D.
President & CEO
Peregrine Global Services
Christina Perry, MS
Director of Organizational Learning
Peregrine Global Services

Peregrine Global Services has been a Corporate Member of ACBSP since 2009 and a Valued Partner since 2010, sponsoring multiple virtual and in-person events, including ACBSP’s ongoing Academic Effectiveness Webinar Series. Learn more at peregrineglobal.com.