HRchat Podcast

Trustworthy AI at Work: Avoiding Bias with Jo Stansfield

July 16, 2024 | The HR Gazette | Season 1, Episode 725

In this episode, we consider ways to monitor and avoid AI bias in the workplace. The guest this time is Jo Stansfield, Trustee at BCS, The Chartered Institute for IT.

Jo is also the Founder and Director of Inclusioneering Limited, a consultancy for inclusive innovation. Inclusioneering supports tech and engineering organizations with data-led culture transformation for diversity, equity and inclusion, connecting DEI with the innovation process so that products, operations and services deliver fair and equitable outcomes by design.

Questions for Jo include:  

  • You're on the Board of ForHumanity. Tell me about the organization and its mission
  • You're also a Trustee at BCS, The Chartered Institute for IT. Tell me about the Institute and how it helps to raise professional standards and support career progression
  • You recently spoke at the first Cambridge AI Summit. Your talk was called Trustworthy AI: Avoiding Bias and Embracing Responsibility. Tell us more. 
  • Fairness, Non-Bias, and Non-Discrimination: What are some best practices for (HR) professionals to ensure fairness and non-discrimination in AI tools used for recruitment and employee evaluation?
  • How can users be confident in their AI tools? 
  • Privacy, Data Protection, and Safety: With increasing concerns about data privacy, what steps should companies take to protect employee data when using AI?


We do our best to ensure editorial objectivity. The views and ideas shared by our guests and sponsors are entirely independent of The HR Gazette, HRchat Podcast and Iceni Media Inc.


---

Message from our sponsor:

Looking for a solution to manage your global workforce?

With Deel, you can easily onboard global employees, streamline payroll, and ensure local compliance. All in one flexible, scalable platform! Join thousands of companies who trust Deel with their global HR needs. Visit deel.com to learn how to manage your global team with unmatched speed, flexibility, and compliance.

---

Feature Your Brand on the HRchat Podcast

The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.


Speaker 1:

Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.

Speaker 2:

Welcome to another episode of the HR Chat Show. Hello listeners, this is your host today, Bill Banham, and joining me on the show today is the amazing Jo Stansfield. Usually at this point I give a long bio and then I ask the guest to tell our listeners a bit more about themselves, but Jo does so much, she's got so many different hats and she's so super cool, I'm going to mix things up today. Instead, I'm going to just start by saying hello, Jo. How are you doing?

Speaker 3:

Hi, Bill, I'm doing great. Thank you, it's great to be here.

Speaker 2:

Lovely for you to join us today. What I'd like to do with you, just to get to know you a little bit, is break down some of those hats and get you to talk about some of the roles that you do get up to. Let's start with the fact that you're on the Board of ForHumanity. Tell us about that organization and its mission.

Speaker 3:

Yeah, sure, I certainly do wear a lot of hats and juggle them all continuously, or whatever mixed metaphor you fancy using today. So ForHumanity has really been an amazing program and organization to be involved with. I started getting involved with them back in about 2020, when the pandemic hit. ForHumanity is a charity that's dedicated to building an infrastructure of trust in AI. It's based out of the United States, run from New York, but has a global network of contributors, so its work is carried out by volunteers. We are 1,700-plus people in 94 countries around the world, which I found just absolutely amazing.

Speaker 3:

So when we say building an infrastructure of trust in AI, what that is about is analyzing and understanding some of those downside risks of AI, working to identify what they are, and finding ways of mitigating them, to help ensure that AI is really working for the benefit of humanity, hence the name. The primary work of ForHumanity is in building independent audits of AI systems. If you think about financial and accounting services, our founder used to work in that area, and his inspiration came from the audits of financial systems. So it's essentially using an independent third party to make a really thorough assessment and check on all of the practices, in terms of the organization and in terms of AI systems, to really ensure that that level of trust is there and that they are working beneficially for people.

Speaker 2:

That's great. Thank you very much. Let's move on to your next role, and that's as a Trustee at BCS, The Chartered Institute for IT. Tell me about the Institute and how it helps to raise professional standards and support career progression.

Speaker 3:

Yeah. So the BCS is the British Computer Society, also known as The Chartered Institute for IT in the UK. It's a charity, a professional membership body with a royal charter, which means that we are empowered to award chartered status to IT professionals, both as a Chartered IT Professional or as a Chartered Engineer for people that are building systems, as a software engineer, for example. The BCS mission is making IT good for society. It's another member organization. We have 70,000 members or so in the UK and also abroad; the BCS has a presence in around 150 countries.

Speaker 3:

So we're raising standards of competence and conduct across the IT industry in quite a number of different ways. One is through our membership community, bringing people together to share good practice, to share ideas and inspiration, and to learn from each other. Also through what we call inspiration, so computing education and inspiring people to enter the profession. We also offer L&D courses, opportunities for members and members of the public to upskill. And the final pillar of the strategy is around influence, so influencing government and society towards more ethical professional conduct for IT. One of the key strands there recently has been around AI and AI ethics, and being involved in government roundtables and policy papers to essentially share views on how responsible AI should look in the UK.

Speaker 2:

Thanks for listening to this episode of the HR Chat Podcast. If you enjoy the audio content we produce, you'll love our articles on the HR Gazette. Learn more at HRGazette.com. And now back to the show. That's absolutely fascinating, and I think I could take the whole rest of this conversation down just that line. But I can't, because this is the first time that we've had you on the show, Jo, and therefore I need to do you justice by talking about all the things that you get up to, and then I will drill down a little bit, if you don't mind, into some of the things you just mentioned there, in terms of how we can monitor and better understand and audit AI in the world of work. But before we get there, your other hat, your other role, is as Founder of Inclusioneering Limited. How are you helping to level the playing field through a focus on DEI?

Speaker 3:

Yeah, so Inclusioneering is a social enterprise. What we offer is consultancy for inclusive innovation. That is looking at diversity, equity and inclusion in the workforce, then looking through into how companies innovate, so new products and new services, how teams work together to really produce the best innovative outcomes, and then, as the final piece of that story, how equitable the outcomes from that are. So particularly looking at AI there, and looking at how to use that diverse workforce and be inclusive in the way the innovation process happens, to really ensure equitable outcomes from the products and services that are delivered. But I'll wind back a little bit in time and explain how I've ended up as this slightly bizarre mashup of all kinds of things, because I'm sure people are wondering how on earth these things actually sit together.

Speaker 3:

So my background is actually in software. I was an engineer for about 20 years, making software for industry. I went through a very, very technical career and, as you may expect, didn't find too many other women doing roles like I did. Then, when I became a parent, it really dawned on me just how much people who didn't quite fit the expectation of the type of people who work here, or succeed here, face real disadvantage in being able to have the careers that they want to have and to progress in the way that they want to progress.

Speaker 3:

And over time I've learned more and more about that, understood it from far broader perspectives, from talking to far more people than just myself, about how this really doesn't just impact women and parents. It impacts people from ethnic minorities, people with disabilities, people with different ways of thinking, and really the diversity of diversity is huge. It gradually just took over my purpose at work. I had been very much on the technical side to start with, but I realized over time that my passion was really becoming much more about how to help engineering and tech be the best that they can be. And we do that through the people that we have, the people that we attract in, and how we work with everybody. So it's through that understanding, I guess, that I pivoted to look at the human dimensions of tech and engineering, and that's where Inclusioneering came from.

Speaker 2:

And are you loving it?

Speaker 3:

I am loving it. Yeah, I have to say it's a little bit terrifying at times. I've been running Inclusioneering for the past three years now. The past year of that has been my full focus; I had a part-time job as an employee for the first two years of it, while I was getting things started. And, you know, the roller coaster is real. I'm sure anyone else who's started a small business will have been there and done that, and so I'm busy being there and doing that at the moment, clinging on tight, and it's a very exciting ride. So, yes, loving it and also slightly terrified by it.

Speaker 4:

Once in a while, an event series is born that shakes things up, makes you think differently and leaves you inspired. That event is DisruptHR. The format is 14 speakers, five minutes each, and slides rotate every 15 seconds. If you're an HR professional, a CEO, a technologist or a community leader and you've got something to say about talent, culture or technology, Disrupt is the place. It's coming soon to a city near you. Learn more at DisruptHR.co.

Speaker 2:

Right, I want to switch now and focus on a recent talk that you did. You did a keynote at the first ever Cambridge AI Summit, held at Anglia Ruskin University. I was lucky to be one of the co-hosts, along with the awesome Chan Carlo Era, and I think people had a jolly good time. Your talk was called Trustworthy AI: Avoiding Bias and Embracing Responsibility. I'd like to take a little bit of time now. We've got 10 minutes left of this particular episode, so I'm probably going to challenge you to answer some of these in 60 seconds or less, but I'd like to run through some of the key topics in your talk, if that's okay. Absolutely, thank you. And I'd like to start with fairness, non-bias and non-discrimination. What are some of the best practices for professionals, perhaps HR professionals, if you want to tackle that aspect, to ensure fairness and non-discrimination in AI tools used for functions such as recruitment and employee evaluations?

Speaker 3:

Yeah. So there's been lots of research and studies that are really beginning to show how AI tools have been learning from society. As you'd expect, they're trained on data about people, drawn from historic outcomes. So, for example, in organizations, especially in tech and engineering, where my background has been, the historic workforce is about 20 percent female, 80 percent male. A really interesting example from quite a few years back, which was quite well known at the time: Amazon began using a tool for recruitment to automate scanning CVs, or resumes, depending on which country you're in, to select candidates for the next stage in the pipeline. What they quite quickly found was that it was preferring the male CVs to the female CVs, and the reason was that this was what their workforce already looked like. It hadn't actually learned who was the best candidate for the role; it had learned what kind of person typically works here, and it was screening for maleness rather than for criteria likely to make someone successful in the role. So Amazon scrapped the tool and discontinued using that approach.

Speaker 3:

When I'm talking to HR professionals about employment tech, what I really encourage is to think quite critically about what it's doing and to monitor its outcomes. I always worry a little bit about AI vendors in this space, because a lot of them are very great salespeople. They've got a tool that they really believe in, and they really mean to do good things with it, and so there's often a really great values alignment between HR professionals and the people who are building and selling these tools. We want to make the employee experience better. We want to make recruitment more fair. We want to make progression opportunities more fair. But unfortunately, the nature of the way that AI learns means that there will always be some biases in the system.

Speaker 3:

We can't eradicate bias from the data that goes in, because it's there in history. We can make some changes to mitigate those things, but we can't eradicate it altogether. We've also got to be very conscious about how we use it. Are we using it in ways that are valid, in the way that it was intended to be used? It may be tempting to stretch the use of the tool into a slightly new use case, but we don't actually know that it's going to be accurate if we start using it in ways it hadn't been developed for.
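[Editor's note: to make the outcome monitoring Jo recommends concrete, here is a minimal sketch, not from the episode, of one common check: compare per-group selection rates from a screening tool and flag any group falling below four-fifths of the highest rate, a widely used heuristic for adverse impact. All names and data below are illustrative.]

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, was_selected) records."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        hits[group] += int(was_selected)
    return {g: hits[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the best group's
    rate (the 'four-fifths rule' heuristic: a screening signal, not a verdict)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Illustrative outcomes from a hypothetical CV-screening tool
records = [("female", True), ("female", False), ("female", False),
           ("male", True), ("male", True), ("male", False)]
rates = selection_rates(records)
print(rates)                        # approx. {'female': 0.33, 'male': 0.67}
print(adverse_impact_flags(rates))  # {'female': True, 'male': False} -> investigate
```

Run routinely over real outcomes, a check like this is one simple way to monitor a tool rather than relying on the vendor's claims about it.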

Speaker 3:

So, an example that I came across: I think it was one of the big cosmetics manufacturers that had used a tool for video interviews of candidates, to assess their skills for employment, and actually made use of this tool to assess candidates at risk of redundancy. As a result, a few people were made redundant who, in fact, hadn't actually completed their interviews, and so the tool decided they didn't pass. What had happened was that there'd been some technical problems, the interview hadn't gone ahead, the result had been recorded as a zero, and the redundancy decision was made. The outcomes, of course, were hugely problematic: people lost their work because of a piece of software making a decision. This is where having human oversight, and really thinking carefully about what the results and the outcomes are saying, is crucially important.
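[Editor's note: the redundancy story also suggests a simple human-oversight safeguard, sketched below. The field names ('completed', 'score') are hypothetical, not from any real product; the idea is that a missing or zero score is routed to a person rather than flowing straight into a decision.]

```python
def route_for_review(assessments):
    """Separate tool outputs that can proceed automatically from anomalies
    that need a human decision (incomplete sessions, missing or zero scores)."""
    auto, needs_human = [], []
    for a in assessments:
        if not a.get("completed") or not a.get("score"):
            needs_human.append(a)  # e.g. the interview never ran; not a real fail
        else:
            auto.append(a)
    return auto, needs_human

results = [
    {"candidate": "A", "completed": True, "score": 72},
    {"candidate": "B", "completed": False, "score": 0},  # technical glitch
]
auto, needs_human = route_for_review(results)
print([a["candidate"] for a in needs_human])  # ['B'] -> a person reviews, not the tool
```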

Speaker 4:

Fidello Inc. is a consulting firm specializing in improving human performance, and we're proud to support the HR Chat Podcast. We help identify strategic competencies and behaviors that drive results. Our team offers HR web software to manage systems, reports and data for HR people that need the best insights to make the right decisions and achieve better results. Learn more at Fidello.com.

Speaker 2:

I guess, using the example that you just shared there, one of the big concerns that I hear from HR folks on this show is: will they be able to interpret the technologies that are being used and identify an issue like the one you just mentioned, that interviews weren't completed and that was the reason why candidates failed? Will they have those skill sets, or is that all in the code? What are your thoughts around that? How can we make sure that the tools we're using are user-friendly?

Speaker 3:

Yeah, I think this is a real challenge in procurement, actually, in finding those tools and knowing the sets of questions to be asking the AI vendors: how do we actually assess what it's done? How explainable are the results? How can we check over time what its performance has been like? It's typically a challenge that affects a lot of AI tools that they don't explain their outcomes very well. But what that outcome is needs to be explained. And those edge cases, the extreme scores at either end of the scale that actually influence a decision about someone having a job or not having a job, for example, really do need that level of analysis and attention, because the outcomes for people are huge.

Speaker 2:

Okay. I'd like to talk about something else that you discussed in your presentation. You touched upon it a little while ago, but let's delve a bit deeper into it, and that's privacy, data protection and safety. With increasing concerns about data privacy, Jo, what steps can companies take to better protect their employees' data when using all these new fantastic technologies?

Speaker 3:

Yeah, so some of the big stories I shared around privacy, data protection and safety in the talk were around some of those high-profile things that had gone wrong. One was around a company called Clearview AI, a big company that makes software for law enforcement, to help them identify people who may be carrying out criminal activity. What they did to build the data sets that identify who people are was to scrape the internet and social media sites, essentially finding photos of people and linking them to their names and identities. No one had consented to this. It was not something we signed up to when we opened our Facebook accounts or logged into Twitter, yet this is how the data was used. In some countries the law lets that happen, but it's not okay under GDPR, not okay in the UK, not okay in Europe, and other countries with privacy laws, such as Canada, would not allow that kind of thing either. So how do we ensure it doesn't happen? Again, it's all really around that governance and oversight and accountability for the systems.

Speaker 3:

I think that we have a tendency to trust automated systems. We want to believe the outcome: the computer says something, and we think, okay, well, that's the right thing. But we do need to pay that level of attention. When we're looking at tools, actually read the privacy policies; read about how the data is going to be used when we're sharing our own personal data. I think that's a great way to practice. I probably drive my family nuts: whenever I'm browsing a website and the cookie policy pops up, I will never let anyone just press okay to get rid of the thing. We are going to look and see what it's sharing, and I'm going to untick all of those boxes I don't consent to. But it's worth reading it. Know what your data is being used for and, especially if you're deploying a tool in an organization, know what it is using the data for, because this isn't just your data. You're taking responsibility for your employees' data as well.

Speaker 2:

I am the same way. I am also an unticker. I was wondering just the other week: how many minutes of my life have I spent unticking? There you go. That's a sad thought.

Speaker 3:

There's a lot of unticking minutes over here.

Speaker 2:

Jo, we're pretty much out of time before we wrap up for today. How can our listeners connect with and learn more about you?

Speaker 3:

Great. Yeah, I'd be really delighted to connect with anybody who'd like to keep talking. You can find me on LinkedIn. My name is Jo Stansfield, and my profile is very easy to find; it's just called Jo Stansfield. You can also find me via the Inclusioneering website, which is www.inclusioneering.com.

Speaker 2:

Right, we'd definitely better do that again, since my phone rang. Right, how did I start? Um, people can connect with you on LinkedIn?

Speaker 3:

Yes, yes. So, yeah, I'd be delighted for anybody to connect with me. Please do; I'd love to get in touch and hear from you. You can find me on LinkedIn, my profile name is Jo Stansfield, so very easy to find, and you can also find me via the Inclusioneering website, which is at www.inclusioneering.com.

Speaker 2:

Rock and roll. And you also have a cool podcast. Briefly mention your cool podcast, please, so folks can jump over from this show to yours.

Speaker 3:

I do. So, if you are interested in diversity, equity and inclusion in industry, such as metals, ceramics, cements and materials manufacture, I have a podcast called The Equity Edge, which you can find on all good podcast platforms. Just search for Equity Edge.

Speaker 2:

Excellent. Well, that just leaves me to say, for today: Jo Stansfield, you legend. Thank you again for speaking at our AI Summit. Thank you for being my guest today. Thank you for everything you do. Let's get you back on soon.

Speaker 3:

Great, that'd be brilliant. Thanks, Bill.

Speaker 2:

And listeners, as always, until next time: happy working.

Speaker 1:

Thanks for listening to the HR Chat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.

Chapter Markers:

  • Exploring Roles in IT and AI
  • Building Trustworthy AI for Inclusion
  • Protecting Employee Data in AI
