Is your organization really ready to embrace AI-augmented human potential?
Ideogram takes a crack at capturing the juxtaposition
I was reading this CNET article the other day, and it took longer than I'd like to admit, but eventually I connected some interesting dots. Namely, the juxtaposition between higher education and the workplace when it comes to AI.
First, a brief recap.
The writer, Rachel Kane, is a Professor of Practice at Arizona State University. She offers a professor's perspective and some tips. It's short and sweet, with an example to illustrate her point.
Side note - you may find yourself questioning the post's credibility or authenticity over a single letter. I thought it was odd that a commonly used business term and tool was misspelled as SWAT. We all know the acronym stands for Strengths, Weaknesses, Opportunities, and Threats. It's an odd error. Perhaps she did it on purpose, or she used voice-to-text and ChatGPT misheard her? Nevertheless, there are some good nuggets in her post.
Moving on.
I don't know this for a fact, but both this article and my limited understanding suggest that Rachel Kane's perspective on students' use of AI is representative of higher education's stance. To sum it up - it's cheating, don't do it.
What makes this fascinating is what's happening in the real world of business and work! The higher-education stance carries through recruitment (for applicants, at least)…but the tone shifts dramatically afterwards.
Companies are investing in and deploying AI like it's limited edition Stanley mugs at Target. Their stance is the opposite. Some companies seek to deploy it enterprise-wide while others are taking a more cautious approach. Either way, the long-term expectation is that workers will be more productive.
Now for the part that I think will be difficult for everyone: talent and performance management.
Most of the conversation here is about how organizations will use AI to improve existing talent and performance management processes.
What's missing is how organizations are going to distinguish the role of AI from that of the worker, something that will only get harder as AI evolves. Even where AI stands today, this will be a challenge. AI is a tool, and using it is a skill.
Questions Abound
How will companies figure out where AI left off and the worker picked up? Is the ratio 20-80? Is that a ratio of effort or of output? Does one matter more than the other? Is there an ideal ratio?
How will companies distinguish how much of a project’s success is due to the technology versus how well the worker used the technology?
Will employees be expected to indicate AI’s involvement and the degree of its involvement?
Will existing goals and performance metrics work?
How will companies view AI as a skill when it comes to evaluating talent and potential? Will it boost an employee's case for promotion or hinder it?
Those are just some of the questions that come to mind.
Reflection
The AI companion piece(s) is here. This time I felt the need to check out the other players. Claude's training data is older by comparison…
I did not use the same prompt for Claude as for Gemini and ChatGPT 4o. Claude did fine writing the article, and I was curious whether Gemini and ChatGPT 4o had anything else to offer if the prompt was more question-based. See below for the prompts.
Gemini offered a nicely structured breakdown and touches on the main challenge straight away.
ChatGPT 4o took a follow-up comment calling out its naive original response. From that point, it did better diving into the nuances of the main challenge.
Claude's task had another layer to consider, but its output captures the main challenge in just two sentences:
How do organizations measure individual performance when outputs are increasingly AI-augmented? The traditional metrics of productivity and capability become blurred when AI tools can dramatically enhance certain aspects of work while potentially masking skill gaps in others.
Here are the prompts:
Claude
Write a post that highlights the challenges and nuances of using AI that face higher education and business. Those 2 perspectives have seemingly opposite views on using AI. One views it as cheating while the other is figuring out how to exploit it. Business has even greater challenges ahead when it comes to trying measure performance and assess talent. How will it adapt to a line that blurs and moves? The line being how much of an output and results are attributed to the human versus the AI? How does this impact promotions, particularly those that manage people?
Gemini and ChatGPT 4o:
how are companies evaluating employee performance for employees who use ai? how do they distinguish the role of AI and worker?