Ross Markle

I’m done with noncognitive skills

This probably comes as a shock to many of you. The headlines in Inside Higher Ed, EMIP, and other notable publications will likely read, “The Noncog Guy Gives Up on Noncogs,” or something to that effect. But it’s true. After years of singing the praises of noncognitive skills, including an ardent defense of the word “noncognitive” itself on this very blog, I’m giving it up. I’ve realized that noncognitive skills are NOT what education needs. What it needs is much greater than that…



The Story of Noncognitive Skills

For the uninitiated, let me give a brief history of noncognitive skills as it relates to educational research broadly and student success specifically. Despite roughly a century of educational psychology demonstrating the importance of factors like self-efficacy, motivation, and social integration, these concepts seemed to exist on the fringes of practical educational conversations.


Then, around the year 2000, a line of economic research (e.g., Bowles et al., 2001; Heckman & Rubinstein, 2001) started to question the role of education in the development of human capital. Sociologists had long known that education played a key role in developing human capital, but we assumed that this came from gains in cognitive ability (i.e., intelligence). Yet these economic studies, using data from the GED, began to suggest that much of what we gain from education is not represented by cognitive ability, but by other, noncognitive factors.


Moreover, several other researchers (e.g., Markle et al., 2013; Poropat, 2009; Richardson, Abraham, & Bond, 2012; Robbins et al., 2004) noted that these noncognitive factors were not only outcomes of education but also key determinants of educational achievement.


Throughout the 20th century, American education focused almost exclusively on cognitive metrics. While the public narrative touted innovation, ingenuity, and tenacity, within the system, metrics focused on literacy, numeracy, and critical thinking (see Oswald et al., 2004). Yet these empirical revelations about noncognitive skills – which emphasized the real, tangible value of the domain – seemed to suggest that a tide was turning in education.


The PPI

It was not long after this that I joined an exciting research center at ETS. What was previously “The Center for New Constructs” had become “The Center for Academic and Workforce Readiness and Success.” Through both names, though, the center was led by two innovative and dynamic researchers: Rich Roberts and Pat Kyllonen. I cannot think of anyone who has created more interesting, innovative, and high-quality assessment programs than Rich and Pat. They also didn’t limit themselves to any particular sector: K12, higher education, industry, military, international assessment – you name it, they’d build it.


One of the more promising programs was the Personal Potential Index, or PPI (Kyllonen, 2008). At the time, the PPI was developed as a complement to the GRE. One of the challenges in making graduate admissions decisions is that you rely on two primary pieces of evidence when looking at a large candidate pool. Standardized metrics like the GRE are helpful, but at that point there can be a restriction of range that limits the value of such indicators. For example, does someone who scores a 1530 on the GRE have a significantly better chance at success than someone who scores a 1490? Meta-analytically? Yes. But individually? That might not be the datum upon which you want to make an admission decision.
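To make the restriction-of-range point concrete, here is a minimal simulation sketch in Python (the .50 population correlation is an assumed, illustrative number, not a figure from the GRE validity literature): once you look only at the top slice of scorers, the correlation that justified the test in the first place shrinks sharply.

```python
# Minimal restriction-of-range illustration (assumed numbers, not GRE data).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rho = 0.50  # assumed score-success correlation in the full applicant pool

# Draw correlated (test score, later success) pairs from a bivariate normal.
score = rng.standard_normal(n)
success = rho * score + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Correlation in the full pool vs. among the top 10% of scorers.
full_r = np.corrcoef(score, success)[0, 1]
top = score >= np.quantile(score, 0.90)
restricted_r = np.corrcoef(score[top], success[top])[0, 1]

print(f"full pool:   r = {full_r:.2f}")        # ~0.50
print(f"top scorers: r = {restricted_r:.2f}")  # ~0.23, far weaker
```

The exact numbers don’t matter; the mechanism does. Among applicants who have already cleared a high bar, small score differences (a 1530 vs. a 1490) carry far less signal than the meta-analytic evidence would imply.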


Other metrics – personal statements and letters of recommendation chief among them – are difficult to interpret. For example, imagine a program that values innovation and creativity in its graduate students. A letter of recommendation for one applicant raves about the student but mentions nothing of these values. What is the program supposed to conclude regarding the student’s ability to create and innovate?


Another issue was the quality of the professors’ own writing. Is an eloquent, well-written reference more valid than one that is short, or perhaps has a few typos in it? Should the student be dinged for the writing ability and/or motivation of the recommending faculty? There are many issues with the standardization of this process that limit the validity of the inferences drawn from such pieces of evidence.


The PPI addressed all of this. Recommendations were standardized, asking professors to rate the student on a set of generally accepted criteria. It provided both quantitative (i.e., Likert-type ratings) and qualitative (open-ended statements) evaluations of students. In other words, this was the beauty of standardization – a word often used derisively, but in the eyes of measurement folks, an absolute necessity for drawing inferences from information.


ETS also thought of the little practical things that often trip up assessments like this. Recommending professors had to validate their credentials through an academic email address. Although it took years to develop, ETS first offered the PPI as a completely free supplement for schools to use. So barriers like perceived authenticity and cost were removed.


Despite addressing a popular topic using cutting-edge research within a very specific market – and doing so for free – the PPI… well, there’s no other way to say it… failed. Despite initial support from ETS and general excitement among the graduate education community, it never caught on as a means of standardizing holistic graduate admissions.


Years later, a large medical education organization convened a panel on the role of noncognitive skills in its field. A variety of psychometric, educational, and psychological experts were asked to shed light on possible research and/or development opportunities.


The PPI came up during this conversation, and I began to reflect on how and why it had come up short. I realized that the issue wasn’t with any of those advantages I’d mentioned above. The issue was with the paradigm through which the PPI’s results were viewed. Even though all the research showed the value of “noncognitive skills,” that language still seemed to suggest that these factors were secondary to traditional measures of intelligence. If educators were ever going to benefit from noncognitive factors, they needed an entirely new paradigm – one that viewed students in a truly holistic way.


The Years that Followed

Shortly after the release of the PPI, I began working on another relatively large-scale noncognitive assessment: SuccessNavigator. Once again, there was a match between quality assessment, novel research, and practical need. In addition to the overall call for improved student success, community colleges were exploring ways to improve course placement and developmental education (e.g., Conley, 2007). Thus, a broad-based measure of noncognitive skills could be helpful in identifying both the academic and co-curricular supports necessary for student success.


After eight years of research and development, SuccessNavigator, too, failed to achieve its (my?) aims of revolutionizing assessment, support, and interventions in the world of student success. Of course, this ultimately led to the general dissolution of my work at ETS, both occupationally and practically, which in turn led to the creation of DIA Higher Education Collaborators.


Initially, DIA was meant to take up the mantle of the work that had begun at ETS – helping institutions better assess and understand the role of noncognitive factors in student success. However, after beating my head against the wall for a few years, I began to feel the same frustrations.


Much of this came from trying to incorporate noncognitive data into advising, coaching, and other student support mechanisms. After all, it had always made sense to assume that those individuals charged with talking to students about their success would be best suited to address issues like motivation, sense of belonging, and the like.


In some cases, this worked incredibly well. We were able to equip staff with the right training, resources, and information to holistically advise students. In other cases, though, I found similar frustrations to my time at ETS. Despite my utmost belief in the importance of our work, something was missing. In time, I would begin to realize that it might not be our approach that was faulty, but the ecosystem into which we were trying to inject these concepts.


So What’s the Solution?

Over the last several decades, I’ve been both amazed and frustrated by the glacial pace at which higher education has shifted its focus to noncognitive skills. Despite a wealth of research evidence demonstrating their efficacy in predicting success, the variety of interventions that have been developed, and the promise of helping articulate the strengths and challenges of many traditionally underserved populations… colleges and universities still haven’t figured it out.


I believe it’s because of this approach to “noncognitive factors.” Throughout my career, I’ve basically said, “Look, what you’re doing is great, but if you add these other data, it can be so much better!” This is the noncognitive approach – adding onto or evolving existing practice, which is largely based on an academic preparation/cognitive ability model.


The fact of the matter is that this extant model isn’t “doing great.” It’s fundamentally flawed. It doesn’t need to be improved by the addition of more valuable data (which, by the nature of their being supplementary, are automatically viewed as less important). It needs to be completely abandoned in service of a different approach – a holistic model. Here are a few ideas about how we might begin to make this transition.


First, institutions must be willing to acknowledge and pursue a shift from traditional approaches to success toward a holistic one.


As mentioned, advising, coaching, and counseling are often appealing connection points for noncognitive skills. The practical challenge we often face is that advisors already have too many “tools in their toolbox.” At many institutions, this is the approach that is taken with advising: offer as many valuable resources as possible and trust advisors to use whichever ones they deem most helpful in supporting a student.


The fact of the matter is that this has basically been the approach of advising since its inception. For the most part, advising operates under the charge of scheduling and planning: which courses does the student need to take in order to graduate on time? However, leaders such as Terry O’Banion have long fought for a more holistic approach to “academic advising.” When we meet with advisors and hear, “Well, I only have so much time to talk with a student, and that already gets taken up by XYZ…” I’m struck by a thought: if talking about XYZ were the best use of that time, would we even be having this conversation?


This is a clear example of where adherence to the existing model is holding us back. Yes, there is certainly the possibility that a shift to a truly holistic approach produces worse outcomes than the current system, but for many institutions, the status quo is simply indefensible (a quick Google search reveals a wealth of schools with graduation rates lower than 5%). In other words, we need to stop tweaking the current system and drastically reimagine the ways we support students, starting with a holistic paradigm for potential and success.


Second, we must create some common guideposts.


One of the challenges of abandoning traditional paradigms is that there isn’t a clear map for what the new world might look like. “Noncognitive skills” are often criticized because the term is simply a negative reference – i.e., those things that aren’t intelligence – and doesn’t provide any clear identification of precisely which skills are important.


Research hasn’t helped. When Marcus Credé and colleagues (2017) published their criticism of grit, one of their primary qualms was that, even in Duckworth’s own work, correlations with existing constructs (i.e., conscientiousness) were VERY high. While there is certainly an argument for calling someone out for trying to reinvent the wheel, the major epistemological threat of such academic rebranding is that consumers of the research cannot connect with and build upon the literature without the proper chain of citation. Thus, by failing to link to previous research, researchers and practitioners alike lose the ability to see a much larger network of relevant theories, assessments, interventions, and best practices.


In order for holistic student success to take root, it must be based on a commonly accepted articulation of what exactly goes into a holistic framework. Admittedly, yes, some component of intelligence and academic preparation would be needed here, but I’m uncertain how similar or different this might look compared to extant models of academic potential.


With regard to the noncognitive space, there is just too much noise to identify where the commonalities exist. Patrick Kyllonen (2013) once wrote that the establishment of the Big Five theory of personality was a massive achievement for the field of psychology because it gave a common, relatively inarguable framework to which many factors could be connected. Yes, certain traits could be explored in a specific context, but they would always be connected back to some part of the Big Five.


This allowed for a larger nomological network of research – essentially a ladder to access the shoulders of the researchers who came before. If student success researchers could agree on a set of core concepts (e.g., time management, goal commitment, stress management, self-efficacy, sense of belonging), then connections within the field could be accelerated.
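As a purely hypothetical sketch (in Python, with invented instrument and scale names) of what such common guideposts could enable, imagine a crosswalk that translates each assessment’s proprietary scale labels into the shared core concepts:

```python
# Hypothetical crosswalk: instrument-specific scale names -> shared core
# concepts. Every instrument and scale name here is invented for
# illustration; this is not a published taxonomy.
CORE_CONCEPTS = {
    "time management", "goal commitment", "stress management",
    "self-efficacy", "sense of belonging",
}

# instrument -> {its own scale label: shared core concept}
CROSSWALK = {
    "Instrument A": {"Organization": "time management",
                     "Persistence": "goal commitment"},
    "Instrument B": {"Academic Confidence": "self-efficacy",
                     "Connectedness": "sense of belonging"},
}

def shared_concept(instrument: str, scale: str) -> str | None:
    """Translate an instrument-specific scale into the shared vocabulary."""
    concept = CROSSWALK.get(instrument, {}).get(scale)
    return concept if concept in CORE_CONCEPTS else None

print(shared_concept("Instrument B", "Connectedness"))  # sense of belonging
```

However the mapping is maintained, the design point is the one Kyllonen credits to the Big Five: local labels are fine, so long as each one resolves to a shared framework that lets findings accumulate.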


Third, we need to align assessments and interventions.


When we first started work on SuccessNavigator, I was incredibly nervous to share our progress with Sara Finney. The esteemed professor from James Madison University was not only my boss in grad school; she’s also smart as a whip and probably knows more about building measures of noncognitive skills than just about anyone. Though Sara has a reputation for being intimidating, it’s largely unfounded. When I shared our work, she was supportive and enthused about where we were heading, especially given her wealth of experience in student affairs assessment.


“This is basically the curriculum for student success! Just like you use a placement test to figure out which math or English course a student needs, this type of tool can recommend what sort of co-curricular supports a student needs.”


That’s a paraphrase of her initial reaction, but I think I’m pretty close. And this was the point that I’d felt so deeply but hadn’t yet articulated. The major advantage of a holistic approach to student success – particularly in comparison to demographic or sociological approaches – is the connection to an intervention. It may be complex, difficult, or challenging, but noncognitive assessment is rooted in the measurement of malleable factors.


We can’t change a student’s race, ethnicity, or first-generation status, and yet those variables often dominate our conversations about student success. The major benefit of moving to a truly holistic model is that it connects directly to actions that can benefit student success. In doing so, it also sheds light on the critical role of co-curricular and student affairs efforts in improving success.



Ok, so I click-baited you into believing I might be done with “noncognitive skills.” I’m surely not, but I do feel that we need a new and different way of talking about them. We have a few initiatives going on this year that will hopefully shed some light on how we put these things into practice, so until then – stay tuned!



References

Bowles, S., Gintis, H., & Osborne, M. (2001). The determinants of earnings: A behavioral approach. Journal of Economic Literature, 39(4), 1137-1176.


Conley, D. T. (2007). Toward a more comprehensive conception of college readiness. Eugene, OR: Educational Policy Improvement Center.


Credé, M., Tynan, M. C., & Harms, P. D. (2017). Much ado about grit: A meta-analytic synthesis of the grit literature. Journal of Personality and Social Psychology, 113(3), 492-511.


Heckman, J. J., & Rubinstein, Y. (2001). The importance of noncognitive skills: Lessons from the GED testing program. American Economic Review, 91(2), 145-149.


Kyllonen, P. C. (2008). The research behind the ETS personal potential index (PPI). Princeton, NJ: ETS.


Kyllonen, P. C. (2013). Soft skills for the workplace. Change: The Magazine of Higher Learning, 45(6), 16-23.


Markle, R. E., Olivera-Aguilar, M., Jackson, T., Noeth, R., & Robbins, S. (2013). Examining evidence of reliability, validity, and fairness for SuccessNavigator (ETS RR-13-12). Princeton, NJ: Educational Testing Service.


Oswald, F. L., Schmitt, N., Kim, B. H., Ramsay, L. J., & Gillespie, M. A. (2004). Developing a biodata measure and situational judgment inventory as predictors of college student performance. Journal of Applied Psychology, 89(2), 187-207.


Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin, 135(2), 322-338.


Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students' academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353-387.


Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261-288.


