DEFINITION AND MEASUREMENT ISSUES IN CAREER AND TECHNICAL EDUCATION
By Thomas Goldring & Daniel Kreisman
Career & Technical Education Policy Exchange
October 2024
Suggested Citation
Goldring, T., & Kreisman, D. (2024). Definition and measurement issues in Career and Technical Education. Georgia Policy Labs.
Does high school Career and Technical Education (CTE) work as intended to prepare students for both college and careers? In recent years, researchers have used richer data and improved methods to understand the impact of CTE on students and early-career professionals. The evidence to date is broadly positive, suggesting that students who complete CTE programs in high school are more likely to graduate, be employed, and earn more in the short to medium term.1 Yet many studies draw their conclusions from data on a single state, often benefiting from the kind of comprehensive administrative records that national surveys lack. Because states have been given significant latitude in establishing measures of CTE and setting performance levels, comparing indicators across states, or even over time within one state, can be a challenge.
Consider, for example, an important measure of student progress through a CTE program: concentration. The U.S. Department of Education (ED) has issued guidance to states that a CTE concentrator is a student who earned two or more credits within a single program of study, such as Health Science or Business Management and Administration.2 Figure 1 shows CTE concentration rates for five states using each state’s administrative data. The bar colors distinguish Grade 9 cohorts over time, from the cohort of first-time ninth graders in school year (SY) 2014-15 through the cohort in SY 2017-18. Grade 9 students in SY 2017-18, for example, were expected to graduate on time in SY 2020-21, and the graph shows the fraction of these students who ever concentrated in a CTE program.
The problem with interpreting this graph is simple: Definitions of CTE performance indicators, including concentration, often vary across states and, at times, within a single state over time. The notes to Figure 1 describe two such measurement challenges. First, Michigan adopted a new concentrator definition in SY 2020-21, which may have affected the concentration rate for its Grade 9 cohort of SY 2017-18. Thus, Michigan’s red bar is not directly comparable to the blue, orange, and green bars for earlier student cohorts. Second, Montana’s CTE data for the Grade 9 cohort of SY 2016-17 were collected using a different system and so are not directly comparable to other cohorts; thus, there is no green bar for Montana.
Still, these difficulties are relatively minor compared to the challenge of consistently measuring CTE concentration across states. Table 1 provides the state-specific definitions of CTE concentrator status for the five states in Figure 1: Massachusetts, Michigan, Montana, Tennessee, and Washington. Montana and Washington adhere closely to the federal guidance for concentration, although Washington requires at least three credits rather than two. In Massachusetts, the definition is based on the length of time a student takes courses in a defined program of study. Michigan does not have a consistent measure of credits and instead defined its own measure of curricular progress, known as segments. Tennessee’s definition is based on counting CTE courses but changed in SY 2019-20 to comply with the most recent iteration of the Perkins Act, a federal law covering CTE.
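To make the definitional differences concrete, the sketch below applies credit-threshold rules of the kind described above to a hypothetical student transcript. The field names and rule parameters are illustrative assumptions rather than any state’s official specification, and the time-based (Massachusetts) and segment-based (Michigan) rules do not reduce to a credit count.

```python
# A minimal sketch of credit-threshold concentrator rules; thresholds and
# data layout are illustrative assumptions, not official state specifications.

# Hypothetical minimum credits required within a single program of study.
CREDIT_THRESHOLDS = {
    "Montana": 2.0,     # follows the federal guidance of two or more credits
    "Washington": 3.0,  # requires at least three credits rather than two
}

def is_concentrator(state: str, credits_by_program: dict) -> bool:
    """Return True if the student meets the state's credit threshold
    in at least one program of study."""
    threshold = CREDIT_THRESHOLDS[state]
    return any(credits >= threshold for credits in credits_by_program.values())

# The same student, with two credits in Health Science and one in Business,
# is a concentrator under one state's rule but not under the other's.
transcript = {"Health Science": 2.0, "Business Management": 1.0}
print(is_concentrator("Montana", transcript))     # True  (2 credits >= 2)
print(is_concentrator("Washington", transcript))  # False (2 credits < 3)
```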
Figure 1 comes from a recent report published by the Career & Technical Education Policy Exchange (CTEx). CTEx is a multi-state consortium in which researchers and state and local partners, who are often CTE administrators, develop data-driven policy recommendations. A long-standing aim of CTEx is to answer research questions as similarly as possible across multiple states in order to better understand the generalizability of findings and to highlight important contextual differences. As Figure 1 illustrates, though, definitional differences and measurement issues make it impossible to know whether differences across states or over time are real or driven by the indicators themselves. For example, the concentration rate in Massachusetts is under half that of Montana or Tennessee. What does this tell us about student progression in these three states? It is unclear whether these differences result from different CTE policies across states and over time, from different definitions, or from both.
How Did We Get Here?
Through five renewals, the federal Perkins Act has consistently granted states significant latitude in establishing measures of CTE. In its first four iterations, the Perkins Act did not explicitly define CTE concentration or participation, although many states defined a concentrator as a student who completed at least two courses in an aligned CTE pathway. Perkins V (2018) defined concentration and participation in law for the first time, but because it uses the term “course” rather than “credit,” states retain significant flexibility. In its reports to Congress on state CTE performance, ED has repeatedly noted the difficulty of comparing state data because of the various definitions of CTE concentrator used by states.3
Perkins IV and Perkins V explicitly define indicators of performance, but the measures often rely on definitions that each state sets for itself. For example, three indicators under Perkins V measure academic proficiency in reading/language arts, math, and science. Because each state defines its own proficiency levels, however, a true apples-to-apples comparison of the measures across states is impossible. Other Perkins V indicators are similarly compromised when comparing across states; participation in work-based learning, for instance, raises the question of what exactly counts as work-based learning.
Even when uniform definitions have been introduced by law or through non-regulatory guidance issued by ED, states may be unable to fully comply with the law or guidance as written. Under Perkins IV, ED provided guidance defining a CTE concentrator as a student who earned three or more credits in a single CTE program area, or two or more credits in states with recognized two-credit sequences. In Michigan, however, there is no uniform definition of a credit, so the state adopted an alternative measure of progression by categorizing each program’s content into 12 “segments” (see Table 1). Michigan, along with 11 other states, defined a concentrator as a student who completed at least 50% of a defined program sequence.4
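As a stylized illustration of the segment-based approach, the snippet below flags a concentrator as a student who completed at least half of a program’s 12 segments. The data layout is an assumption for exposition, not Michigan’s actual reporting format.

```python
# Illustrative sketch of a segment-based concentrator rule: program content is
# divided into 12 segments, and a concentrator completes at least 50% of them.
TOTAL_SEGMENTS = 12

def is_segment_concentrator(completed_segments: int,
                            total_segments: int = TOTAL_SEGMENTS) -> bool:
    """Return True if the student completed at least half of the
    program's defined segments."""
    return completed_segments / total_segments >= 0.5

print(is_segment_concentrator(6))  # True: 6 of 12 segments is exactly 50%
print(is_segment_concentrator(5))  # False: 5 of 12 segments falls short
```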
Why Do Definition and Measurement Issues Matter?
The variety of definitions used to measure progress and quality in CTE makes it difficult to accurately compare CTE programs’ effectiveness across states or within a single state over time. Inconsistent definitions and measures can hinder administrators and policymakers in their ability to assess program outcomes and identify areas for improvement. Added-cost funding is often determined based on reported student progression and other measures, so inconsistent data may result in the suboptimal allocation of resources, potentially depriving students of the support they need to succeed in CTE pathways.
Moreover, standardized definitions and measures are essential for making meaningful national comparisons and benchmarking CTE programs. Without consistent data, it is challenging to identify best practices, trends, and areas for improvement across states and regions. As a prime example, imagine comparing the labor market outcomes of CTE concentrators to non-concentrators across states and over time. In some states and for some cohorts, this means comparing students who completed at least two aligned courses to students who completed fewer. In other states (or simply in other years), it means comparing students who completed three courses with those who completed fewer.5 Consider students who completed exactly two aligned CTE courses: under the first rule they are concentrators; under the second they are not, despite taking the same number of courses.
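The following sketch shows how large the resulting gap can be. Applying a two-course rule and a three-course rule to an identical cohort (the course counts are synthetic, purely for illustration) yields very different measured concentration rates, so a gap of this size between states or years could be entirely definitional.

```python
# Synthetic counts of aligned CTE courses for ten students (illustrative only).
courses_completed = [0, 1, 2, 2, 2, 3, 3, 4, 1, 0]

def concentration_rate(courses: list, min_courses: int) -> float:
    """Share of students meeting a given minimum-course threshold."""
    return sum(c >= min_courses for c in courses) / len(courses)

# The same students, measured under two definitions:
print(concentration_rate(courses_completed, 2))  # 0.6 under a two-course rule
print(concentration_rate(courses_completed, 3))  # 0.3 under a three-course rule
```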
What Would Be Better?
The purpose of this brief is to shine a light on the myriad issues that arise from using inconsistent definitions to measure student progress and program quality in CTE. A promising first step is better recognition and awareness of the problem. Beyond the definitional problems already mentioned, another issue involves the use of CTE clusters as an organizing framework to categorize CTE pathways. When measures are broken out by cluster, it is important to recognize that clusters do not encompass the same CTE programs across states. Thus, depending on the cluster, different programs (CIP codes) may be aggregated within each state, making comparisons across states problematic.
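A stylized illustration (the CIP codes below are placeholders, not a real crosswalk): if two states assign different program sets to the same cluster label, identical cluster-level rates still summarize different underlying programs.

```python
# Hypothetical cluster-to-program mappings; the CIP codes are placeholders.
state_a_health_cluster = {"51.0000", "51.0801"}
state_b_health_cluster = {"51.0000", "51.0801", "51.3801"}

# The shared "Health Science" label masks different program content, so
# cluster-level statistics from the two states are not directly comparable.
print(state_a_health_cluster == state_b_health_cluster)  # False
```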
A concrete step that would improve upon the current situation is using standardized measures for CTE, such as Carnegie Units. A Carnegie Unit represents one credit hour of study, typically equivalent to one hour of instruction per day over one academic year, or approximately 120 hours of instruction. It is a standardized measure of the time and effort students dedicate to a particular course or subject. Applying Carnegie Units to CTE courses, programs, or pathways could help establish consistent measures of student participation, progression, and achievement in CTE. Additional benefits may include better credit transferability between schools and closer alignment of CTE programs with academic standards, industry requirements, and workforce needs. Using Carnegie Units for CTE would allow for consistent data collection and reporting practices within and across states. It would enable educators and policymakers to track student progress, program completion rates, and outcomes more effectively, strengthening evidence-based decision-making and program improvement efforts.
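As a worked example of the conversion, assuming the roughly 120-instructional-hour unit described above (the course hours are invented for illustration):

```python
# Convert a course's instructional contact hours into Carnegie Units,
# assuming the conventional ~120-hour unit; course hours are illustrative.
HOURS_PER_UNIT = 120

def carnegie_units(instructional_hours: float) -> float:
    """Return the Carnegie Units represented by a course's contact hours."""
    return instructional_hours / HOURS_PER_UNIT

print(carnegie_units(120))  # 1.0: a full-year course
print(carnegie_units(60))   # 0.5: a half-year course
```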
Endnotes
1. Brunner, E. J., Dougherty, S. M., & Ross, S. L. (2023). The effects of career and technical education: Evidence from the Connecticut technical high school system. Review of Economics and Statistics, 105(4), 867–882.
Dougherty, S. M. (2018). The effect of career and technical education on human capital accumulation: Causal evidence from Massachusetts. Education Finance and Policy, 13(2), 119–148.
Ecton, W. G., & Dougherty, S. M. (2023). Heterogeneity in high school career and technical education outcomes. Educational Evaluation and Policy Analysis, 45(1), 157–181.
Hemelt, S. W., Lenard, M. A., & Paeplow, C. G. (2019). Building bridges to life after high school: Contemporary career academies and student outcomes. Economics of Education Review, 68, 161–178.
Kemple, J. J., & Willner, C. J. (2008). Career academies: Long-term impacts on labor market outcomes, educational attainment, and transitions to adulthood. New York: MDRC.
2. See https://cte.ed.gov/accountability/nonregulatory-guidance-for-accountability.
3. Past reports to Congress are available at https://cte.ed.gov/accountability/reports-to-congress. To give one example, the 2015-16 report states, “[t]he Department has indicated in its past reports to Congress on … Perkins III … that it was difficult to compare state data because there was a variety of definitions for CTE concentrator used by states that made an impact on whom they were counting in their CTE accountability system” (p. 56).
4. For additional detail on the definition of a CTE concentrator under Perkins IV and Perkins V, see https://careertech.org/wp-content/uploads/sites/default/files/SecondaryConcentratorBackground_2019.pdf.
5. Students who completed three aligned courses are often referred to as “CTE completers.”