Career Research Series: Incivility in the Job Search

by Desalina (Alina) Guarise and James W. Kostenblatt

This post is part of a series of interviews that will explore career-related research. As recipients of a NACE Research Grant, we are partnering with nearly 40 institutions to explore the long-term impact unpaid internships have on career success and are looking for more partners to join. Contact us if interested!

Through our research project, we have had the pleasure of working closely with Abdifatah A. Ali, a doctoral candidate in organizational psychology at Michigan State University (graduating in May 2017) who has closely studied motivation in the job search. In an interview, Abdifatah shared details about his research paper, “The long road to employment: Incivility experienced by job seekers,” published in October in the Journal of Applied Psychology.

Tell us a little bit about your professional background. How did you become interested in career-related research?
I earned my undergraduate degree from San Diego State University, studying psychology with a minor in statistics. There, I started doing research with an industrial-organizational psychology professor who encouraged me to pursue a Ph.D. My research interests early on dealt with motivation—in particular, how individuals self-regulate their emotions, behavior, or actions in order to achieve their goals when looking for work. For example, when unemployed people or college students are looking for work, how do they motivate themselves, and what factors influence their level of motivation and persistence so they can get a job?

At Michigan State University, I collaborated with Dr. Ann Marie Ryan to examine how people’s emotional reactions affected their job search success (defined as whether the candidate received interview callbacks, job offers, etc.). We were able to show that both positive and negative emotions that people experience when they are looking for work motivate them. For example, if you just got a callback from a company that made you feel excited or happy, that will motivate you and encourage you to continue to put effort into the job search. On the other hand, if you are experiencing challenges or anxiety, those negative emotions can actually also motivate effort, which is contrary to what we expected.

I’ve more recently made a switch and begun to look at factors that undermine job search efforts, which relates to my current research.

Your current research focuses on incivility experienced by job seekers—how did you come up with this research topic?
Before this paper, there was very little research on which contextual factors undermine job-search motivation—researchers had looked only at factors that facilitate it. When you talk to individuals who are job searching, they constantly mention experiencing incivility, which got us wondering what effects these incidents have on the job search process.

How do you define incivility? Can you give some examples?
Incivility is defined as generally rude or discourteous behavior that is ambiguous in terms of intent—for example, a snide comment or a funny look from a recruiter or interviewer. The person is perceived as behaving rudely, but you don’t necessarily know whether they are doing it intentionally.

Tell us more about the research design and findings.
Most research on incivility has examined what professionals experience once they actually work at an organization; we focused instead on the job search. We began with a qualitative study to understand the nature of incivility during the job search. In the first stage of our research, we interviewed 100 job seekers, asked them whether they had experienced incivility, and collected details about the incident. We then asked them what they thought the cause of that behavior was, because we were interested in how people interpret the ambiguous nature of these incidents. In one example, the interviewer is abrupt and doesn’t give the candidate much time. Some candidates may interpret that experience by simply thinking the interviewer was busy (i.e., externalizing the cause), while others may think the interviewer was rude to them because of their own incompetence (i.e., internalizing the cause).

The second and third studies were quantitative. We wondered whether there was a way to predict who will externalize or internalize these incidents. We found that, for those who internalize the cause, incivility undermines one of the best predictors of job-search motivation: job-search self-efficacy, or self-confidence. Conversely, job-search motivation was not affected for those who externalized the cause of these incidents.

What implications do you think this has for career services practitioners and employers?
Our findings support the need for resilience training and other tactics that help job seekers reframe the cause of these incidents. If we can help them avoid attributing the cause to themselves, we can ensure their job-search motivation doesn’t suffer.

I think it also has implications for recruiters, who are seeking ways to ensure candidates have a great experience and ultimately accept an offer. Incidents of incivility can have a real influence on the talent pipeline.

What are you working on now with your research?
A project really relevant to the NACE audience is one I’m working on with Dr. Phil Gardner related to internships. We are examining the role employees play in student interns’ experiences, including both employees who are assigned as formal supervisors and those who act as informal mentors. We are studying how these individuals affect whether interns accept full-time offers at the end of their internship. Results should be out in the near future.

Desalina (Alina) Guarise, Associate Director, Career Advancement Center at Lake Forest College
LinkedIn: https://www.linkedin.com/in/desalina

James W. Kostenblatt, Associate Director, New York University’s Wasserman Center for Career Development
LinkedIn: https://www.linkedin.com/in/jameskostenblatt

Career Research: How to Measure Career Success

by Desalina Guarise and James W. Kostenblatt

This is the first of several blog posts that will explore career-related research and feature interviews with those researchers. Let us know if you have individuals you would like to see featured.

As recipients of a NACE Research Grant, we will be kicking off a study this fall to explore the long-term impact unpaid internships have on career success (30 institutions are participating, and we are looking for more partners to join; contact us if interested!). As a part of this project, we began exploring how to define and measure career success—a complicated and somewhat nebulous concept. Luckily, we came across a research study recently published in the Journal of Organizational Behavior, “Development of a New Scale to Measure Subjective Career Success: A Mixed-Methods Study,” 37(1): 128–153 (2015), in which Dr. Kristen Shockley and her colleagues did the heavy lifting for us. We will be using both an objective measure of success (i.e., salary) and the subjective measures Dr. Shockley developed in our study.

In an interview, Dr. Shockley shared details about her research and the release of the “Subjective Career Success Inventory”—a 24-item questionnaire and validated measure of career success that resulted from the study.

Tell us about your professional background. How did you come to focus on career-related research?
I did my undergraduate degree at the University of Georgia and my Ph.D. in industrial-organizational psychology at the University of South Florida. From the beginning of my doctoral program, I was interested in how people manage work and family, and in how to have a fulfilling work life and personal life at the same time.

In the article, you mention that measures of career success have evolved over time, shifting from objective measures to subjective measures. Tell us more about this evolution.
Years ago, you started at one company, paid your dues, and retired from that company. Now people have something like seven jobs over the course of their career. With this shift, people’s values have also changed. When we interviewed people for the study, they spoke about how important it is to have a meaningful personal life outside of their career, as well as clear work/life boundaries. They also want to feel like their work is meaningful. When we looked at the existing career research, we found that career success used to be measured very objectively—you were successful if you made a lot of money—and the previous subjective measures were just about satisfaction. That’s where this career success model—the subjective model—came from; it takes these newer values into account.

Tell us about the development of the Subjective Career Success Inventory.
We began by conducting interviews and focus groups with people from all different types of careers. We then transcribed all of the interviews, coded them, and came up with themes. Finally, we tested the scale for validity and reliability. The research took seven years from start to finish, and we ended up with a 24-item questionnaire broken out into eight dimensions or categories that were important to these professionals when assessing their own career success: recognition, quality work, meaningful work, influence, authenticity, personal life, growth and development, and satisfaction.
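For readers curious what a reliability check actually involves, here is a minimal sketch of one common statistic, Cronbach’s alpha, in Python. This is our illustration, not Dr. Shockley’s actual analysis, and the response data are made up:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: five respondents rating the three items of one
# dimension on a 1-5 scale (not from the study).
responses = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 3],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.9 here; higher means more internal consistency
```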

What implications do you think this has for practitioners like ourselves in career services?
Practitioners already know that picking a field to go into is a daunting task for students. It’s important to have some sense of what you value and how likely it is that those values will map onto the careers you are considering. This research further supports the importance of students reflecting on their values and thinking beyond objective measures of career success when making decisions.

What are you working on now with your research?
Right now I’m trying to establish an inventory for the family-friendliness of different career paths using data from O*NET. Hopefully this will help students and professionals decide which career path may be right for them.

If you’d like to participate in research exploring the long-term impact unpaid internships have on career success, contact Alina Guarise or James Kostenblatt.

Desalina (Alina) Guarise, Associate Director, Career Advancement Center at Lake Forest College
LinkedIn: https://www.linkedin.com/in/desalina

James W. Kostenblatt, Associate Director, New York University’s Wasserman Center for Career Development
LinkedIn: https://www.linkedin.com/in/jameskostenblatt

The Assessment Diaries: Implementing NACE First Destination Standards

Desalina Allen, Senior Assistant Director at NYU Wasserman Center for Career Development
Twitter: @DesalinaAllen
LinkedIn: www.linkedin.com/in/desalina

If you haven’t been living under a rock (or been trampled by a continual flow of student traffic), you know that the amazing NACE First-Destination Survey Task Force put together guidelines to help career offices align the way we collect post-graduation outcome data for undergraduate students. You can view the standards, a sample survey, and an informative webinar hosted by Manny Contomanolis, who chaired the task force, on the NACE website.

The standards are not meant to give you a detailed, step-by-step roadmap. Instead, they are a framework to ensure that, as a profession, we are aligned in terms of our timeframe and the basic types of information we collect.

There is an emphasis on flexibility and professional judgment—acknowledging that institutions will add their own questions or adapt their surveys to ensure they are able to meet existing reporting requirements. Additionally, as mentioned in the webinar, these standards have evolved and will continue to do so.

With that said, I will be sharing details of how we are applying the standards to our existing first-destination survey process at NYU. I would love to hear and include other schools’ interpretations as well. Please contact me or leave your comments below if you would like to participate!

The topics I will be touching on include:

  • Timeline: Defining our graduating class and planning when and how to collect their placement information

  • Survey Instrument: Designing and testing our survey, and ensuring the questions and data align with NACE standards

  • Survey Distribution/Data Collection: Partnering with schools to distribute the survey, and collecting information from various sources (electronic and phone surveys, faculty, employers, etc.)

  • Data Analysis/Integrity: Verifying results, and cleaning and analyzing the information

Desalina Allen writes about assessment. She will be blogging occasionally about the NYU Wasserman Center for Career Development’s process as an early adopter of the First-Destination Survey Standards.

Read more from Desalina Allen.

The Assessment Diaries: Quick and Qualitative

Desalina Allen, Senior Assistant Director at NYU Wasserman Center for Career Development
Twitter: @DesalinaAllen
LinkedIn: www.linkedin.com/in/desalina

Some of the assessment activities I have shared take time to develop (like the Pre/Post Dining Etiquette Survey) and/or require staff buy-in, training and socialization (like the Resume Rubrics).  Just last week, I decided super last minute that I wanted to assess a networking presentation for international students…last minute, as in, 20 minutes before the event. This exercise is proof that assessment doesn’t have to take hours and hours of your time—sometimes a quick pre/post writing exercise can give you insight into what needs to be changed about a program.

I need to invoke my earlier reminder that I promised to be honest when sharing my experiences with assessment, and this post is no different. I’d like to say I was happy with these results, but instead I was disappointed to find that I had probably assessed the wrong learning goal. I started with the goal of having students gain a more nuanced understanding of networking. Here’s what I did:

Twenty minutes before the presentation I grabbed some colorful paper—yellow would be used for my pre-assessment and pink for the post assessment. This color choice was not at all based on any carefully planned and research-supported theory that bright paper makes people happy; in fact, I did it to make sure I could keep the two “surveys” separate.

At the beginning of the event, I asked the students to spend two minutes writing about networking. It could be their definition of networking or just words that came to mind; grammar and complete sentences were not necessary. I then did the same thing at the end of the event.

I could have just looked through and summarized key trends from each sample, but I decided to get fancy, transcribe the text, and enter it into Wordle, a tool that generates word clouds.
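If you’d rather script that step than paste text into a website, here is a minimal sketch of the same idea in Python, using the open-source wordcloud package as a stand-in for the Wordle site. The response text below is made up for illustration; the real input was the students’ two-minute free writes:

```python
# Generate one word-cloud image per writing sample
# (assumed setup: pip install wordcloud).
from wordcloud import WordCloud

pre_text = "networking contacts jobs business cards talking people events"
post_text = "relationships people conversations follow-up genuine relationships"

for label, text in [("pre", pre_text), ("post", post_text)]:
    cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
    cloud.to_file(f"{label}_workshop_cloud.png")  # saves pre_... and post_... images
```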

Here’s the pre-workshop Wordle:

[Pre-workshop word cloud]

And the post:

[Post-workshop word cloud]

While the results show that I focused on the importance of relationships, I don’t think I can claim that students gained a more in-depth understanding of networking. What I did learn is that students already seemed to have a handle on the definition of networking, so perhaps I needed to assess their comfort level with actually networking!

While this wasn’t the most successful assessment attempt, I do think the technique can be great when you are trying to compare students’ knowledge of harder-to-assess topics (think professionalism, diversity, self-awareness).

Would you try it?

Read more of Desalina Allen’s blogs on assessment!

The Assessment Diaries: Rubric Roundup

Desalina Allen, Senior Assistant Director at NYU Wasserman Center for Career Development
Twitter: @DesalinaAllen
LinkedIn: www.linkedin.com/in/desalina

I recently wrote about the problem with asking students to assess their own learning. In a nutshell: studies show we are not able to accurately measure our own learning, and we even tend to overestimate what we have learned.

This concept can definitely be applied to resumes. Even the best resume presentation or one-on-one review isn’t always enough to really teach students what makes an excellent resume versus a merely OK one. We already know this anecdotally—when students come back for two, three, or four reviews and haven’t yet mastered some of the basics, it demonstrates just how complex marketing yourself on paper can be. Thus, we cannot use students’ self-reported learning after these events or meetings as evidence that they really learned.

As career services professionals, we could critique resumes in our sleep. I know I’ve easily reviewed thousands of resumes in my five short years working in career development! For this reason, when we want an accurate understanding of how well our students are marketing themselves via their resumes, it makes more sense for us as experts to evaluate them.

Enter resume rubrics. Rubrics standardize the way we define and measure something. They also make our evaluation techniques more transparent to students and can be a useful tool in training new staff members.

When I started developing a rubric for NYU Wasserman, I found it extremely helpful to look at examples from NACE and other schools.  I then created a draft and brought it first to my assessment team and then to staff as a whole for feedback.  Several revisions later, we had a document that made explicit what we look for in a resume.  More specifically, we defined what makes an excellent resume vs. a good resume vs. one that needs improvement.

Once you have your rubric, you can track and report on how your students are doing as a whole (or by class year, major, etc.). If you have enough time and patience, you can also follow a student’s progress over time or after a specific resume intervention. For example, evaluate a student’s resume before a workshop, encourage them to come back with changes, and evaluate it again. Did they improve? Which topics were still difficult to grasp? Might you need to spend more time addressing those during the workshop?
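If you log rubric scores in a spreadsheet, that kind of reporting takes only a few lines. Here is a minimal sketch in Python with pandas, assuming a hypothetical CSV with columns student_id, class_year, review_round, and three example rubric dimensions (format, content, impact); your rubric’s actual dimensions will differ:

```python
import pandas as pd

# Hypothetical export of rubric scores, one row per student per review.
scores = pd.read_csv("resume_rubric_scores.csv")

# Average score per rubric dimension, broken out by class year.
dims = ["format", "content", "impact"]
print(scores.groupby("class_year")[dims].mean())

# Per-student change between the first and second review.
first = scores[scores["review_round"] == 1].set_index("student_id")
second = scores[scores["review_round"] == 2].set_index("student_id")
improvement = second[dims] - first[dims]  # rows align by student_id
print(improvement.mean())                 # which dimensions improved the most?
```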

Below you will find some examples of resume rubrics that I have found helpful, as well as the rubric we use at NYU.  Do you use rubrics at your institution?  If so, please share them in the comments section!

Examples:
NACE (Resume), ReadWriteThink (Resume and Cover Letter), Amherst Career Center (Resume), Illinois State (Resume), Liberty University (Online Resume)

NYU Wasserman:

Don’t miss “The Mystery of the Resume Writing Assessment” Part 1 and Part 2.

Read more of Desalina Allen’s blogs on assessment!

The Assessment Diaries: The Mystery of the Resume Writing Assessment (Part 2)

Desalina Allen, Senior Assistant Director at NYU Wasserman Center for Career Development
Twitter: @DesalinaAllen
LinkedIn: www.linkedin.com/in/desalina

When we last left off, you were shocked that your post-resume-writing-seminar survey results could have been so misleading. Students reported having learned the basics of resume writing, but when you followed up with an in-person meeting with one of your attendees, it was obvious that the tips and guidelines you provided had not been applied.

Have you ever created or taken a survey with a question or questions like the ones below?

This seminar improved my understanding of resume writing basics:
Strongly Disagree/Disagree/Neutral/Agree/Strongly Agree

I learned something from this seminar:
True/False

As a result of this seminar, I now understand what employers look for in a resume:
Strongly Disagree/Disagree/Neutral/Agree/Strongly Agree

What is the problem here? Well, if you are simply looking for evidence that students believe they have learned something from your event, there is no problem at all. But if you are trying to collect evidence that students actually learned something, well then…

Why? Because studies show* that students are not able to accurately measure their own growth or learning. Not only do they incorrectly estimate growth, they tend to overestimate it. It makes sense, right? If someone asks you after a presentation or a class whether you learned something, how do you really know if you did?

As a result, we cannot use students’ self-reported growth as evidence of growth. Instead, we have to utilize other assessment methods to really prove they learned something. How? By doing a pre- and post-assessment of student knowledge (like I did for our etiquette dinner) and comparing results, or by coming up with a standardized way to evaluate resumes (via a rubric) and looking at the change over time.
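For the pre/post route, a paired comparison is one common way to check whether the change is real. Here is a minimal sketch in Python, with made-up quiz scores rather than our etiquette-dinner data:

```python
from scipy import stats

# Hypothetical quiz scores (0-10) for the same eight students,
# before and after an event; not real data.
pre_scores = [3, 5, 4, 6, 2, 5, 4, 3]
post_scores = [6, 7, 6, 8, 5, 7, 6, 5]

gain = sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)  # paired t-test
print(f"mean gain = {gain:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the gain is not chance
```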

Last year, one of our learning goals was to ensure that students were learning career-related skills like resume writing. We did away with our post-seminar surveys and instead created resume rubrics to use with students. I’ll be sharing that experience in my next few posts, along with helpful resources if your office is looking to create its own resume rubrics!
*Thank you to Sonia DeLuca Fernandez, our Director of Research and Assessment for Student Affairs, for this article, which can be found in Research and Practice in Assessment, Volume 8.

The Assessment Diaries: The Mystery of the Resume Writing Assessment (Part 1)

Desalina Allen, Senior Assistant Director at NYU Wasserman Center for Career Development
Twitter: @DesalinaAllen
LinkedIn: www.linkedin.com/in/desalina

In career services, most of us are used to facilitating workshops that teach our students or alumni skills. The topic could be leadership, networking, career research, or social media and the job search. Oftentimes, after these events we send out surveys to determine just how much students learned. We ask whether students feel more comfortable with the topic and understand some of the key takeaways. We may even throw in a satisfaction question or two.

Today, I want you to imagine that you’re getting ready to facilitate one of those workshops and the topic is: Resume writing! Don’t get too excited….

You know how, when you start a presentation—especially one you’ve done often—you almost immediately get a sense of how the audience will respond? Sometimes you walk in and students are just waiting for you with that expression on their faces that tells you that even if Eddie Murphy were giving this presentation, they might sleep through the entire thing.

Well, on this day you experience the exact opposite. Students are eager, smiling, even awake. They raise their hands when you ask for input, and they actually laugh at your pathetic resume jokes (the ones you’ve added just to keep yourself interested). You talk about clarity of format, keeping it to a page, and customizing it for each position, and you look around only to see heads nodding vigorously.

After the presentation, you review the post-event surveys. Students are giving you high marks across the board: they now understand resume basics, they feel they can apply these concepts to their own resumes, and they even write comments about what a great presenter you are.

That night, you check your e-mail and find a very sweet request from one of the participants: She notes that she learned a lot from the presentation but wants to come in tomorrow for a quick resume review, just to make sure everything is OK before she applies to a position. You reply “Sure!” thinking to yourself, “This should take only 15 minutes.”

Fast forward to tomorrow. The student is seated in front of you. As she reaches into her backpack to pull out her resume, your view switches to slow motion. Suddenly, you catch a glimmer of light bouncing off the object she’s taking out….

…..wait

…what the

….is that

….is that a staple??  

So, obviously this is a HUGE exaggeration (cue sarcastic snickers), but what went wrong here? Didn’t you talk about page length? Weren’t you clear about editing out non-relevant content? Surely you touched on including pictures. How could it be that, after all of your hard work and intuition, the student just didn’t get the point? What about all of your positive survey results? Could they have misled you?

Stay tuned for part 2 of The Mystery of the Resume Writing Assessment where I’ll discuss the post-event assessment.  In the meantime…any guesses, comments, or thoughts on why this approach doesn’t always work? Leave them in the comments section below!