HRchat Podcast

AI and the Future of Work: Navigating the HR Revolution

February 25, 2024 | The HR Gazette | Season 1, Episode 686

In this final and special episode of the AI in the Workplace mini-series, guest hosts Pauline James and David Creelman will distill key insights shared by leading experts they were fortunate to learn from. 

Expert insights are highlighted from Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto; Shingai Manjengwa, Head of AI Education at ChainML; Ben Zweig, CEO of Revelio Labs; Dr. Jarik Conrad, VP of Human Insights at UKG; Kate Bischoff, Founder of Thrive Law and Consulting; Jesslyn Dymond, Director of Data Ethics at TELUS; Frank Rudzicz, Associate Professor at Dalhousie University; and Vimal Sharma, VP of HR at WIS International.

Tune in and Discover: 

•        AI's Impact on the Workforce: Explore insights on AI's disruptive potential, its role in leveling the skills field by automating tasks, and HR's pivotal role in managing change and leading reskilling initiatives.

•        Streamlining HR Processes: Discover how AI automates labor-intensive HR tasks, enabling professionals to focus on strategic activities, and the importance of ethical AI use through human oversight and governance.

•        Advancing Organizational Competence: Learn about the critical need for AI literacy within organizations to responsibly leverage AI, and AI's potential to enhance empathy and reduce biases in decision-making.

•        Enhancing HR Practices with AI: Valuable resources for deepening AI knowledge and engaging in responsible AI use discussions are shared, emphasizing HR's evolving role in the age of AI.

As we conclude this insightful journey, it's clear that AI's role in HR is both transformative and complex. By embracing AI with informed enthusiasm, HR leaders can steer their organizations towards innovative, efficient, and ethical futures. The insights shared in this series are just the beginning; the real work lies in applying these learnings to navigate the evolving landscape of AI in HR.  Please keep us posted on your progress in this regard! 

Feature Your Brand on the HRchat Podcast

The HRchat show has had 100,000s of downloads and is frequently listed as one of the most popular global podcasts for HR pros, Talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority & freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.

Speaker 1:

Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit hrgazette.com.

Speaker 2:

Hello and welcome to the HR Chat Podcast. I'm Pauline James, founder and CEO of AnchorHR, and it's my pleasure to be your pod host today. Along with David Creelman, CEO of Creelman Research, we're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on organizations, jobs and people. In this last episode of our mini-series, we will consider the key takeaways and insights shared by the experts we were fortunate to have join us on this journey.

Speaker 3:

Pauline, going back over the series, one of the things I enjoyed was the wide range of perspectives we had. We heard from economists and HR pros, entrepreneurs, vendors. One of the themes that most interested me was this issue of what level of disruption we should expect as AI rolls into the world.

Speaker 2:

Let's begin by considering our experts' thoughts on the level of disruption we can expect. Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor at the Rotman School of Management, made a number of interesting points in our discussion. It is well worth listening to the full interview with this leading international expert, but we want to highlight a few that stood out for us, the first being that even when technology is disruptive, it tends to take a long time before it is deeply embedded and replaces jobs, allowing time for adoption.

Speaker 4:

It's also a lot harder to eliminate a job than people expect. Technology typically allows you to make certain processes more efficient or identify certain parts of an individual's workflow that can be automated, but to automate a whole job is hard. The best example from history here is telephone operators. It used to be the number one job in the US, especially for young women. Millions of young women worked for AT&T as telephone operators. The first person to try to automate that job got a patent for an automatic telephone operator in 1890. A long time ago. The last telephone operator stopped working at AT&T in 1978. It took 88 years to figure out how to completely automate that job, because the telephone itself was just a small piece of the overall puzzle.

Speaker 3:

One of the interesting points he made, and one I think is hopeful, is that AI might end up being an equalizer of skills amongst different people.

Speaker 4:

There's a lot of worry in saying, hey, AI is going to take jobs, AI is going to take all jobs and the rich are just going to get richer. Where does that come from? Let's just start with that point. The first place it comes from is the technologies we've seen over the past 50 years, particularly computers and the internet. That's been the story: with computers and the internet, the people who are good at abstract thinking, who were already doing well, have done better and better. Everybody else hasn't done worse, but they've been left behind. The expectation is, hey, we have AI, it seems like the next generation of information technology, so given what happened with computing and the internet, we should expect the same with artificial intelligence. The counterargument, what I hypothesize, and I want to be clear, we don't know yet, but what I hypothesize is going to happen, with my co-authors Ajay and Joshua, is something quite different. If you think about what AI does, what prediction technology has been doing, it often is replacing those tasks that the higher paid workers are doing.

Speaker 4:

In medicine, the core role of prediction is around diagnosis and treatment. Diagnosis is the special domain of physicians, the doctors, the highest paid people in the profession. Once you have a good diagnosis, a lot of the actions to take to help treat somebody are done by nurses and pharmacists and others. What prediction in the healthcare industry has the potential to do is mean we need fewer doctors, especially at the primary care level, fewer of the higher paid people, because the prediction can then be handed to a nurse or a pharmacist or another medical professional. They can help you with the actual treatment, with navigating the stress of the healthcare system and all these other things that doctors do but aren't really what they're trained for. In a lot of industries, what we've seen so far, as the newer generations of prediction machines, newer generations of AI, start infiltrating the industry, is that they're often not that useful to the people at the top, because what they're doing for the people at the top of many, many industries is showing them how to do things well, and they already do things well.

Speaker 4:

Another study, by Erik Brynjolfsson and Lindsey Raymond, looked at the implementation of generative AI in a call center. What was it? It was salespeople, effectively, and they were getting a script. For the best people in the call center, the people who had experience and were really, really good at it, the script was basically telling them what they already would have said; if anything, it was distracting them. But the people who were new and the people who weren't as good were getting a script that looked like the script of the people at the top, and it made them much more productive. Another example: in the taxi industry in Japan, an AI was implemented to help taxi drivers identify where people who needed to be picked up were likely to be and where they were going to get rides. And for the best taxi drivers, the people who were very, very productive, it made almost no difference.

Speaker 2:

Shingai Manjengwa, the Head of AI Education at ChainML, shared a complementary perspective on the level of disruption we can expect and how we can prepare.

Speaker 3:

Shingai also noted that HR has an important role in being proactive in preparing the organization, basically through education, but also by tracking the kinds of trends that HR is clued into. Let's take a moment to hear a short clip from that discussion.

Speaker 5:

I don't believe that the job displacements are going to be an army of robots like we've seen in, let's say, something like the Matrix movies. It's not going to happen like that. It's going to be a little productivity gain here, a little efficiency there, and we'll start to see some roles really lose whole aspects of what they were doing. Now, we've said artificial intelligence and computers really are good at repetitive tasks, et cetera, but with this advent of generative AI and the language models, we start to see even tasks like copywriting being done incredibly well, and the models are only getting better. I just want to be very clear: some jobs will go. That's just the reality, and it's not the exciting topic to talk about. It's not the sexy headline at this point, so I'm not seeing it being discussed nearly as much as I would like it to be. But I would like to say categorically that we are going to see a significant shift in the workforce due to the advent of these large language models and generative AI. Specifically, the role of HR is to lead. I would say to HR professionals: it's your time to shine. This is your moment to really understand what these technologies are doing, how they're going to impact the workforce at different levels, different skill sets and different functions, and get ahead of it. Help us plan out: what does reskilling look like? Is that the solution? What does the transition for different roles look like in order for them to stay in the workforce, to stay in the organization, or to consider different options?

Speaker 5:

It's tough and it's going to be messy. We have unionized roles. We have folks that have tenure in organizations. We are going to have to deal with all of the consequences of these tools that we are adopting quite rapidly and that are having an impact both on our productivity gains in organizations and on our workforce. HR's role is to inform: really get the organization ready for what is about to happen, or potentially what is already happening, and own some of the change management functions that need to happen as AI is adopted in an organization. Really, just reliable information, because we also know misinformation, as people reference their social media and their friends to get information, is a challenge. Let HR be the place we go to get the facts about what tools are being deployed in the organization or being used by the organization more broadly. So that's information, and the next step after that is education. I spoke about AI literacy. I speak about that very often, and it's very important for everybody to understand what the tools are and how they work.

Speaker 2:

There are some important takeaways here about how this is a really important time for HR to lean in, understand this technology and ensure that organizations and employees are informed and supported in adapting effectively.

Speaker 3:

The risk is that if HR doesn't lean in and proactively try to understand the technology and get involved, different parts of the operation are going to charge ahead anyway, and then HR and the organization will miss the opportunity to mitigate risks, particularly the risks that individual employees will face, which is something HR will be attuned to. I also appreciate Shingai's argument that, with our expertise in HR and the trends we're able to see, we'll have an opportunity to contribute to the conversation about the impact of AI at the community level. Okay, Pauline, what should we look at next?

Speaker 2:

What's next? Consider the important but cumbersome tasks being made easier for HR. We heard from large and also entrepreneurial vendors about the advances they are making in automating work that is important but very labor-intensive, making it difficult to do well, or to do well in a timely way. Great examples are the work Boyd Reed, co-founder of Hoppin Technologies, spoke to about analyzing employee commutes, as well as Anya Jarjenska, co-founder of Network Perspective, around network and communication analysis. And here is Ben Zweig, CEO of Revelio Labs, speaking to how they can automate job taxonomy analysis.

Speaker 6:

That really is a lot of work that people do within HR. They spend a lot of time just categorizing their people. So that was something we were able to do using large language models five or six years ago, and that was really exciting. In that area we're still experimenting, but we've used it in a few different ways, which I think are kind of exciting. One is when we collect a job posting. Very often a job posting is just a bunch of text, with the qualifications, responsibilities and skills all in this blob of five paragraphs.

Speaker 6:

But really what we want to actually deliver to people that analyze this data is a segmented job posting. We want qualifications in a section. We want skills in its own section. We want to tag it, whether it's remote work. We want to tag the salary. We want to outline responsibilities in a structured way.

Speaker 6:

So creating structure is something that generative AI does really well. You feed it into this thing and say, hey, please make this structured into the following seven categories, and it does, and it spits out the results. That's really exciting. I think that's been kind of fun. Another thing we've done is take the reviews that people write about companies, on sites like Glassdoor and the like. You have the positive reviews and the negative reviews. Sometimes, if we want to summarize that and give people a sense of what people like and don't like, we can create word clouds, but it's not easy to make sense of those. So what we're doing now is asking GPT to create a synthetic paragraph that summarizes what people like about this job, and then you have five or six sentences of what people like and what people don't like, and you have a paragraph written the way a human would communicate it. So I think for summarization it's really useful.
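The structuring workflow Ben describes can be sketched roughly as below. Note this is an illustrative assumption of how such a pipeline might look, not Revelio Labs' actual system: the section names, prompt wording and JSON contract are made up for the example, and the model call itself is represented only by a canned reply.

```python
import json

# Illustrative section schema; a real pipeline would define its own.
SECTIONS = ["qualifications", "skills", "responsibilities",
            "salary", "remote_work"]

def build_prompt(posting_text: str) -> str:
    """Ask the model to segment a free-text job posting into JSON,
    one key per expected section, with null for missing sections."""
    return (
        "Segment this job posting into JSON with exactly these keys: "
        + ", ".join(SECTIONS)
        + ". Use null for any section not present.\n\n"
        + posting_text
    )

def parse_response(raw: str) -> dict:
    """Validate the model's reply: it must be JSON and must contain
    every expected section key, or we reject it."""
    data = json.loads(raw)
    missing = [s for s in SECTIONS if s not in data]
    if missing:
        raise ValueError(f"model omitted sections: {missing}")
    return data

# A made-up model reply standing in for the actual API call.
reply = json.dumps({
    "qualifications": "3+ years in HR analytics",
    "skills": "SQL, Python",
    "responsibilities": "Maintain the job taxonomy",
    "salary": None,
    "remote_work": "hybrid",
})
structured = parse_response(reply)
print(structured["skills"])  # SQL, Python
```

The key design point, which matches what Ben describes, is that the model is asked for a fixed structure up front, so downstream code can rely on the same fields being present for every posting and fail loudly when the model drifts from the contract.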

Speaker 3:

I think Ben has clearly demonstrated that there is enormous opportunity to automate some labor-intensive work that we in HR have found cumbersome. And not just cumbersome, but it ate up time and mental energy that could be better spent on more strategic projects.

Speaker 2:

Yes, spending an enormous amount of time updating all of our job descriptions and developing our own structure is a huge effort, and by the time we're finished, the data is often out of date.

Speaker 3:

We keep taking on these projects, like updating all the job descriptions, because we can see there is value in having it done; it's just that the amount of work involved is overwhelming, and so there really will be value in being able to automate this. This is going to be a big win for HR folks. Let's conclude this portion of the discussion by hearing from Jarik Conrad. Jarik leads the UKG Workforce Institute, and he talks about how AI has advanced their ability to distill insight from company surveys. I start by asking him to compare how we used to get insights from surveys versus what he's doing now.

Speaker 7:

You're giving me nightmares and flashbacks. What we used to do was we'd give these surveys, and then you'd have people like me, first as an intern and then early in my HR career, who would draw the lucky straw and get to sit down and go through all of this free-form text, trying to read things: what does this mean? Let's put these in there. And so we'd get in conference rooms, and for hours we would be trying to decipher what employees might be telling us in a survey we put out, and there was not a whole lot of science involved in that.

Speaker 7:

We probably weren't very accurate, we certainly weren't fast, and we had our own biases in terms of how we read it, what this word might mean to me versus what it might mean for you. So we spent a lot of time, and I don't know that we were actually getting the voice of the employee, right? We were getting the voice of the employee as interpreted through my eyes and somebody else's eyes. Now, I know that we probably have some folks on the call that may not be using those tools yet; they may still be going through that manual process. So I commiserate with you, I understand it, but certainly that's one place where I think we have a solution that can help make that process a lot better.

Speaker 3:

So I've spent a lot of time in the world of data analytics, and the advances I see going on with machine learning and causal modeling, all this AI work, I think are going to be really extraordinary for HR's ability to distill information and make good predictions about what kind of interventions will be effective.

Speaker 2:

The third key takeaway from our series we want to highlight was how we can enable our organizations to adapt to this technology and mitigate risk.

Speaker 3:

Our experts have two general pieces of advice: first, to educate ourselves and our teams with basic AI literacy so we understand what we're dealing with, and then to go from that to provide some guidance on safeguards we should be introducing to our organizations. So, with that in mind, let's hear first from Kate Bischoff. She's the founder of Thrive Law and Consulting and a renowned expert in employment law. Take it away, Kate.

Speaker 8:

My biggest concern about AI is the potential for perpetuating discrimination that we already see in workplaces.

Speaker 8:

When you're trying to find the perfect candidate, the AI looks at all of your past performance data, and so you're going to get what you've had in the past, for that very reason.

Speaker 8:

So my biggest piece of advice is: remember, humans need to be integrated into these decisions. This is just a tool in your toolkit. We have to stay active and really in control of the decision-making, and when we see complacency happening, we have to step in right away, because it's a significant risk to the org. I believe HR needs to evolve to understand how the technology works and then ask really difficult questions, not only because our role is going to change with the use of these tools, but because managing, auditing and researching how the tools work is going to become critically important as that risk changes, as it gets bigger and more important, as we use more and more of these tools. We cannot, in human resources, enjoy just pushing paper anymore. The tools are going to help do a lot of that stuff for us, but we're going to have to check it, we're going to have to manage it, and we're going to have to hold the tools' feet to the fire to get through the risks they could create for us as well.

Speaker 3:

So Kate's pointing out something we really need to be clear about, which is that, while AI can assist us by generating draft text and recommendations, humans are needed to ask the right questions and to validate the answers. We can't delegate responsibility to the AI. The level of sophistication required for this assessment will vary based on whether, for example, we're just assessing the validity of a job posting the AI has written, or whether we're considering some kind of enterprise risk resulting from an organizational network analysis.

Speaker 2:

Jesslyn Dymond, Director of Data Ethics at TELUS, spoke to the importance of having a human in the loop and provided some great, practical guidance on governance. I suggest listening to her full interview, but let's listen to some of her guidance here.

Speaker 9:

One important concept for responsible AI is certainly human in the loop.

Speaker 9:

I think that means ensuring we have a person who understands how the technology works, who can explain what it's doing, what it's trying to do and how it's been trained, and who can ensure that the outcomes are consistent with the objectives of everyone who worked together to put that AI model in place. Having that verification and accountability for the technology is really critical. And, I think, so is aligning on what is meant by AI. There are certainly lots of different iterations of the technology, but how it gets used can be very, very different depending on the organization, institution or researcher using it. Ensuring everyone understands what the technology is, having that AI literacy, but also understanding that it is a tool. I like to think of it as augmented intelligence, not just artificial intelligence, because it's something that is here to help make work easier and really extend the possibilities. You need that human in the loop to ensure it is achieving the expected outcome, monitoring and overseeing how it works and what it's working towards.

Speaker 2:

If we narrow in on governance committees, do you have a perspective on who should be on the committee, how it operates, who it reports to, or suggestions for organizations in that regard?

Speaker 9:

I think having a council to support governance is really critical, and that means ensuring you have members from the technology team or your AI group as well as the businesses that are going to use the technology. The data owners, the people who are responsible for the data the AI is working with, need to be aware of how it is being governed and enabled. Having a governance committee can really help bring that discussion and that collaboration together. I think that when you have different leaders sharing the work they're doing on AI and the concerns and interests they have, you'll see a lot of synergies if you have a governance committee in place. And, of course, within a governance team there may be different aspects of privacy, data governance, security, and legal requirements and concerns that need to be incorporated into the discussion, to really ensure that risks are appropriately identified, managed and mitigated, and that everyone is aligned on what AI is being used for and how it is being controlled.

Speaker 3:

Jesslyn also spoke to how we can embed some risk management in the technology itself. I also appreciated our discussion with Frank Rudzicz, an associate professor at Dalhousie University. He brought insights on risks, but he also underlined the fact that there are risks to not using AI technology and, in fact, that AI could potentially help us be less biased. Let's hear from Frank.

Speaker 10:

I think there are lots of ways in which AI can help us be more objective and lead to increased safety and efficiency that's really beneficial to people more generally. I can speak a little bit specifically about my main domain. In healthcare, there's a ton of different use cases or applications in which we might see AI becoming the standard of care in the not-too-distant future. Some of them are in much more high-risk situations, and some of them involve a lot more access to patient data, and therefore questions of privacy are much more important. We're still at the early phase of integration of AI into practice, where computer scientists, technologists and users are working with organizations to deploy it. Sometimes, rightfully so, there are lots of questions about privacy, bias, accuracy and unforeseen risks. But there's also a problem on the other side.

Speaker 10:

It's also a risk to do nothing. In that whole spectrum of tasks, some are very low risk, some don't involve very much access to patient or employee information, and not implementing those just because other applications are risky is a failure. It's an invisible risk to just stick with the status quo, and across various healthcare settings where we currently don't do anything differently, that's a huge problem. So, on one hand, not using AI is also risky: it can lead to continued physician burnout, safety problems for patients, and the kinds of problems that cost healthcare systems. So it's ethical of us to try to do everything we can to actually deploy these technologies where we can, and starting in low-risk situations is the best place to do that.

Speaker 3:

What I like about Frank's perspective is that I think some people will actually use the fear of potential risk as a kind of excuse to wait and do nothing, which I think is the worst of all strategies.

Speaker 2:

Agreed. In our discussion with Frank, we also talked about how AI may actually help us to be more empathetic, which is interesting to consider. We like to emphasize humans' capacity for empathy, but our ability to be empathetic is impeded at times by our human tendency toward bias, and advances in this tech could actually help us in this regard. Think about individuals in public-facing professions and how much time we spend training them to overcome defensiveness and bias, and sometimes ego; we don't have the same challenge, I believe, with machines.

Speaker 3:

In sci-fi, we often fear that robots and AI will be cold and inhuman, but I think we're seeing that machines may actually help us improve some human qualities. They can help us remove bias, help identify people who may be struggling, and be empathetic. Really, they can make us better humans.

Speaker 2:

We've covered a lot of ground and pulled only a few of our favorite insights from the full series. Before we finish, we'd like to remind our listeners how to begin their AI journey. Let's listen to some advice from one of the execs we interviewed, Vimal Sharma, VP of HR at WIS International, about HR's role in change management.

Speaker 11:

So we must think through how we support our employees, and so effective change management will be key. Some of the benefits of AI will include creating capacity so we can complete those valuable activities, improving process efficiencies and also, to an extent, improving employee job satisfaction by eliminating those repetitive tasks. Now, we don't have all the answers, but what I would say is we need to be open to change, we need to communicate often and transparently with our teams through the process, and we need to create opportunities and provide support for our team members to develop and learn new skills, so they can leverage these efficiencies and advancements and really be part of the journey.

Speaker 2:

And here are some specific resources that Jesslyn Dymond recommended to assist us in leaning in and supporting our learning.

Speaker 9:

For anyone who wants to learn more about how AI works, CIFAR, one of Canada's leading AI research institutes, has developed a course called Destination AI. That is a really helpful walkthrough of what AI is and how it works, and it can provide a significant base-level understanding of the technology and serve as a foundation for AI literacy. And for those who want to be part of the conversation, I think it is really helpful to bring together different voices and provide feedback on how AI should be used and what sorts of considerations should be put in place to help ensure its use is safe. At telus.com/responsibleai, we have information about how you can provide input into what the technology does and be part of this conversation that we need to have, to ensure we have technology that really works for us and is both inclusive and responsible.

Speaker 3:

As we wrap up our discussion, I think it's important to recognize how hard we're all going to have to work to make the necessary adaptations for the changes AI is going to bring. As we've said, there have been surprising things. We had always thought that creative and empathetic work would be the last thing left to humans, that machines couldn't do it well; that does not appear to be the case. Other things too: we've talked about how great it will be to remove repetitive tasks by automating them, and yet sometimes people like having some repetitive tasks, which can be a bit of relaxation for the brain. We can't be doing high-end intellectual work all the time, so there could be downsides to giving machines all the routine work. We're going to have a lot to figure out about how to integrate human and AI capabilities so that we use both to best advantage.

Speaker 2:

Yes, and organizations will need the strengths and capabilities of strong HR leadership to do so effectively.

Speaker 3:

So I'd like to thank everyone in the audience for taking this journey with us. Please reach out with your own key takeaways either things you've picked up from this series or from your own experience grappling with AI and let's keep learning together.

AI's Impact on Workforce and HR
The Future of HR Technology
AI Governance and Responsible Deployment
Human-Machine Integration for Efficiency
