
HRchat Podcast
Listen to the HRchat Podcast by HR Gazette to get insights and tips from HR leaders, influencers and tech experts. Topics covered include HR Tech, HR, AI, Leadership, Talent, Recruitment, Employee Engagement, Recognition, Wellness, DEI, and Company Culture.
Hosted by Bill Banham, Pauline James, and other HR enthusiasts, the HRchat show publishes interviews with influencers, leaders, analysts, and those in the HR trenches 2-4 times each week.
The show is approaching 1000 episodes and past guests are from organizations including ADP, SAP, Ceridian, IBM, UPS, Deloitte Consulting LLP, Simon Sinek Inc, NASA, Gartner, SHRM, Government of Canada, Hacking HR, McLean & Company, Microsoft, Shopify, DisruptHR, McKinsey and Co, Virgin Pulse, Salesforce, Make-A-Wish Foundation, and Coca-Cola Beverages Company.
Want to be featured on the show? Learn more here.
Podcast Music Credit: "Funky One" by Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 3.0, http://creativecommons.org/licenses/by/3.0/
Ethical Use of AI in HR Tech with Emre Kazim, Holistic AI
In this HRchat episode, we delve into what constitutes robust AI management, how we can be more ethical with our use of AI-powered recruitment tech, and ask: Is it enough to audit recruitment algorithms for bias?
Our guest this time is Emre Kazim, co-founder of Holistic AI, a company helping businesses to adopt and scale AI with confidence.
Holistic AI is focused on providing a Platform-as-a-Service solution to organizations that want to harness AI ethically and safely. It services many large and medium-sized organizations on their journey to adopting AI, ensuring due risk management and compliance with the changing regulatory and standards environment.
One of their key services is helping HR teams deal with the NYC bias audit legislation that takes effect in January 2023. Emre holds a PhD in Philosophy and is a Research Fellow in Computer Science at University College London.
Questions For Emre Include:
- Tell us about the mission of Holistic AI and the idea of 'trustworthiness in context' and different levels of assurance based on the industry.
- How can companies using AI in HR ensure they are managing risks and staying compliant with emerging regulations?
- New York City passed legislation that goes into effect on 1 January 2023, mandating bias audits of automated employment decision tools. Meanwhile, California has proposed amendments to existing legislation and introduced new legislation to regulate the use of AI in the workplace. What is the NYC Local Law 144 bias audit mandate?
- What other upcoming legislation affects HR technology?
- Where else do you see AI audit playing a role in AI HR solutions?
About Emre Kazim
Dr. Emre Kazim is the co-founder and COO of Holistic AI, a start-up focusing on software for auditing and risk management of AI systems. He is also a Research Fellow in Computer Science at University College London. His research interests include algorithmic assessment, the impact of new digital technologies on the structures of the state and informed policymaking.
He has a track record of interdisciplinary work and knowledge exchange through community and consortia building. Furthermore, Dr Emre Kazim has been facilitating conversations around pressing ethical challenges arising from the increasing adoption of algorithmic decision systems. His focus has been on the Governance, Policy and Auditing of AI systems.
Feature Your Brand on the HRchat Podcast
The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, Talent execs, and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority, and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.
Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.
Speaker 1: Welcome to the HRchat Podcast, bringing the best of the HR, talent, and leadership communities to you. For more episodes and the latest articles covering what's new in the world of work, visit HRGazette.com, subscribe, and follow us on social media.
Speaker 2: Welcome to another episode of the HRchat Show. I'm your host today, Bill Banham, and in this episode we're gonna delve into what constitutes robust AI management and how we can be more ethical with our use of AI-powered recruitment tech. And we're gonna ask: is it enough to audit recruitment algorithms for bias? My guest today is Emre Kazim, co-founder over at Holistic AI, a company helping businesses to adopt and scale AI with confidence. Holistic AI is focused on providing a platform-as-a-service solution to orgs that want to harness AI ethically and safely. It services many large and medium-sized organizations on their journey of adopting AI, ensuring due risk management and compliance with a changing regulatory and standards environment. One of their key services is to help HR deal with New York City bias audit legislation that is taking effect in January of next year, January 2023, and we're gonna talk a bit about that amongst many other topics today. Em, welcome to the HRchat Show.
Speaker 3: Thanks, Bill. A real pleasure to be here.
Speaker 2: So beyond my wee introduction there, why don't you start by taking a minute or so and telling our listeners a bit more about yourself? You've got a very impressive academic background, and what you guys are doing over at Holistic AI is a little bit above me in places, that's for sure. It's super important, and you've got a great mission, which we'll get into in a bit. But for now, just start by telling the listeners a bit about you and your academic and career background.
Speaker 3: Thanks, Bill. So I also came new to AI, if you will. As an undergraduate I studied chemistry at UCL, and then I went and did a postgraduate in chemical physics. It was actually part of a PhD program, but I decided pretty early on that I wasn't that interested in it. So I left the sciences and did a master's in general philosophy. I wasn't sure exactly what I wanted to do, but that went well and I ended up doing, and completing, my PhD in philosophy. I did it in the philosophy of ethics, and more specifically in the philosophy of Immanuel Kant. I was interested in questions of conscience: how, when you feel guilty, you evaluate your own moral judgements. That's what I did my PhD in, so it was about morality, ethics, and moral judgment. After completing that, I left the university and went into industry, into business, working on nothing to do with my academic research. And while I was there, I was just really fascinated by new digital technologies, so that's AI, blockchain, and generally how these new digital technologies were emerging and being ubiquitously adopted. That interest moved me to write an article about how these new digital technologies are causing us to have to re-evaluate the social contract. I self-published that article, and then it got picked up by a mathematician in the UCL computer science department. She said, hey, look, we're working on the ethics of digital technologies. We are engineers, but we really do need non-engineers who are interested in this space to come along and engage with us on the ethics of these questions. So I ended up working in the computer science department of UCL with the engineers who are actually building these technologies, looking at the ethics, the governance, and all the impending laws around this space. That's where I met Adriano Koshiyama, the co-founder of Holistic AI, and it was within that context that we started to do this experimental work on AI ethics and AI governance, trying to find different ways to solve the problem of creating trust in these kinds of systems. It was in that ecosystem that we worked and formed a friendship, and off the back of that we spun the company out in 2020. And, you know, we haven't looked back since.
Speaker 2: That's an awesome story, thank you very much for sharing that with us. You've got a really unique and powerful background, which is a perfect fit for what you guys do. So you just hinted there a little bit about the mission of Holistic AI. Maybe you can talk now in a bit more detail, if you don't mind, Emre, about the idea of trustworthiness in context. That's a direct quote I got from a video I saw you in on YouTube earlier today. And also the different levels of assurance based on the particular industry, and why that matters.
Speaker 3: Yeah, so I think, Bill, really the core thing is asking the question about trust: what do we mean by trustworthiness? The first question is, trusted by whom? It's very different to say I wanna establish trust amongst engineers, or trust in a broader public sense, or trust in the C-suite with respect to these systems, and so on and so forth, or trust with respect to regulators. Overall, trust in algorithmic systems is, when you break it down, a very complicated psychological, sociological, and, in some peripheral sense, legal question. What we are interested in at the core, the real central problem that we're trying to solve as a company, is: how do we have meaningful technical assessment of algorithms, but with a view to communicating that to non-technical audiences? That would be people like the C-suite, customers, regulators, the non-engineers such as myself and yourself. And the reason that's important is because if we get that right, if we solve that problem, then really we can harness all the benefits that algorithms can have for society and science and services and everything. To give you an analogy, most of us have been on a plane, but I would guess that a very tiny group of people actually understand all the various certifications and assurances that go into the aeronautical industry. We all trust that the governance procedures in the aeronautical industry are robust, very strong, updated, trustworthy, even though we, the average users, don't know about them in detail. With algorithms, if you will, we're at the point before that trust has been established, because it's a new space. It's still having impact; people are still seeing high-profile cases of harm. So we are really in that early phase, where we're trying to say: look, can we really establish processes, can we put into place everything that's necessary for people to be confident in the deployment of algorithms and, if you will, the mitigation or prevention of the harms those algorithms might cause when they're deployed? That's really the core of what we're doing as a business: enabling companies to adopt and innovate, and to do that responsibly. To give some examples, probably the most important is the use of algorithms in the HR process, and probably the most high-profile cases are those used in recruitment. So when people are applying for jobs, that could be by uploading a CV or answering questions or something else, an algorithm might do an assessment and decide to move that candidate forward or actually reject the candidate's application. It's of paramount ethical importance for us to ensure that that's done in a really robust way, in a fair way. So that's really the core of that use case.
But you can imagine the use of algorithms, for example, in credit scoring: when you apply for a loan and get rejected, people wanna know that the rejection has happened for a reasonable reason. That could be, for example, because you're somebody with a poor credit history; it shouldn't be because you are female, or because you lack, let's say, an Anglo-Saxon name or something like that. So really, when we say trust in context, we're talking first about who we are trying to establish trust with, so it matters who the endpoint of that trust relationship is, and secondly about where the system is being used. Trust in the use of algorithms in a medical context is very different to trust in the use of algorithms in, say, recruitment.
Speaker 2: This is why I do this show, listeners, I say it all the time: it's because I get to sit here, listen, and learn. Okay, so you're fighting the good fight, and there are fights to be had, conversations to be had certainly, all over the place at the moment. So for example, New York City passed legislation that goes into effect on January 1 of 2023, mandating bias audits of automated employment decision tools. Meanwhile, California has proposed amendments to existing legislation and introduced new legislation to regulate the use of AI in the workplace. What is the New York City Local Law 144 bias audit mandate? Tell us a bit about that and why it's important.
Speaker 3: Yeah, so I think this is just such an interesting and pioneering piece of legislation. To the best of my knowledge, I don't think it was the first, but it's certainly the most high profile, and ostensibly, in terms of how we're actually receiving it, it's really the first major intervention in terms of AI regulation. It's garnered a lot of attention, and lots of people have been looking at it from outside the New York City context. One of the legislators described it as a transparency legislation, and really at the core of it is transparency: it's about obliging, or forcing, companies to maximize their transparency in the use of algorithms in this context. More specifically, I think there are basically three parts. The first part is about how the algorithm works. That's really about doing an assessment of the performance of the algorithm across different protected characteristics or demographics: how is your system working, let's say, with respect to gender, or with respect to race, and so on and so forth. The second core part of the legislation is actually publicizing the results: a summary of the results should be published. Both of these parts are really pushing forward that core principle of transparency. And the third part is about, if you will, communicating that: beyond publishing a summary of the results, communicating it and allowing people to exempt themselves from such processing. So at the heart of it, it really is about putting maximum transparency in place and giving maximum agency to people in this process. It's a really important, very positive move by the legislators, and it's really exciting to see how this has focused attention, in New York and more broadly, on bias in the use of algorithms in HR systems.
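To make the first part of that audit concrete, here is a minimal, illustrative Python sketch of the kind of impact-ratio calculation a bias audit can involve: compute each group's selection rate and compare it to the most-selected group's rate. The data and field names are hypothetical, and the 0.8 "four-fifths" benchmark is a common rule of thumb from US EEOC guidance rather than language from Local Law 144; this is a sketch under those assumptions, not Holistic AI's methodology.

```python
# Illustrative sketch: per-group selection rates and impact ratios,
# the kind of assessment a bias audit of a hiring algorithm involves.
# Data, field names, and the 0.8 benchmark are assumptions for illustration.
from collections import defaultdict

def impact_ratios(candidates, group_key, selected_key):
    """Each group's selection rate divided by the highest group's rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for c in candidates:
        totals[c[group_key]] += 1
        if c[selected_key]:
            selected[c[group_key]] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes from an automated screening tool
candidates = [
    {"gender": "female", "advanced": True},
    {"gender": "female", "advanced": False},
    {"gender": "female", "advanced": True},
    {"gender": "male", "advanced": True},
    {"gender": "male", "advanced": True},
    {"gender": "male", "advanced": True},
]

for group, ratio in impact_ratios(candidates, "gender", "advanced").items():
    flag = "flag for review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

On this toy data, the female group's impact ratio (0.67) falls below the 0.8 threshold and would be flagged for a closer look; a published audit summary would report these per-group figures.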
Speaker 2: What other upcoming legislation is perhaps gonna affect HR tech and recruitment tech over the next six to 12 months? Any others that you wanna highlight that people should be aware of?
Speaker 3: I think what we will probably see is other states in the US adopt legislation similar to the New York City one. So that's the first thing: look out for other states taking similar moves. You mentioned the Californian legislation, which is super important in this sense, but I think the big legislation that people should be looking out for is the EU AI Act. The EU AI Act is the equivalent of the GDPR legislation that was passed with respect to privacy, or data governance. It really is a big piece of legislation being passed by the EU, and it will have global ramifications. Without going into a whole discussion about it, the Act takes a risk-based approach to algorithms. It has a classification: some systems should be completely banned, some systems are high risk, some are medium risk, and some others are low risk. The use of algorithms in employment is listed as one of the high-risk systems. So really, there's gonna be lots of focus on that, and it's gonna be pretty clear, pretty explicit, that anybody who wants to develop or deploy algorithms in employment contexts is gonna have to be compliant with it. That's really the one to watch. The New York City legislation really is a very small beginning of a much bigger wave, and that wave will be the EU AI Act. It should be passing next year, and at the latest it will be passed before the European elections in the spring of '24. Companies really should be getting aware of that; it's gonna be huge.
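As a rough sketch of the risk-based approach Emre describes, the snippet below maps example use cases to risk tiers and the kind of obligation each might trigger. The tier names follow Emre's wording, and the use cases and obligations are simplified assumptions for illustration, not the Act's legal text.

```python
# Simplified sketch of a risk-based classification in the spirit of the
# EU AI Act as described above. Tiers, use cases, and obligations are
# illustrative assumptions, not legal definitions from the Act itself.
from enum import Enum

class RiskTier(Enum):
    BANNED = "prohibited outright"
    HIGH = "conformity assessment, documentation, ongoing monitoring"
    MEDIUM = "transparency obligations, e.g. disclosing that AI is used"
    LOW = "no specific obligations"

# Hypothetical mapping; employment uses are listed as high risk under the Act.
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.BANNED,
    "cv screening for recruitment": RiskTier.HIGH,
    "performance and promotion decisions": RiskTier.HIGH,
    "customer service chatbot": RiskTier.MEDIUM,
    "spam filtering": RiskTier.LOW,
}

def obligations(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LOW)
    return f"{use_case}: {tier.name} risk, {tier.value}"

print(obligations("cv screening for recruitment"))
# cv screening for recruitment: HIGH risk, conformity assessment, ...
```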
Speaker 2: Okay. So what do companies need to do to future-proof ahead of the EU AI Act? And what are your predictions in terms of some of the, I don't wanna use the word cowboys, geez, some of the companies out there who perhaps don't follow policies as stringent as others'? How could they be impacted, you know, on the vendor side, by the EU AI Act?
Speaker 3: All I would say to companies is: just start to get ready. Be aware of the legislation; there are lots of publications in this space, lots of people doing work in this space. It's funny you should talk about the cowboys, but really, if you're a good actor, you shouldn't be too afraid of this legislation. It really is just being able to evidence that you've taken the problems of bias and explainability seriously, and to provide the relevant documentation and assessments that validate that you've investigated these questions and managed the risk. So really it's about getting ahead of the curve and anticipating. It's a good idea anyway, irrespective of the legislation, or prior to the legislation passing, to be cognizant of all the systems we are using: are they defensible, are they fair, are they biased? It's just good practice, and it's an ethical imperative to be confident that you can say, hey, these systems are reasonable, they work well. So I think for the industry it's gonna be a very positive thing, because it's gonna set a good standard; we're gonna be able to be confident and assured that these systems are trustworthy and safe. And I think that's a good point, not a snide one actually, about the, let's say, cowboys. Isn't it a good thing for responsible players in the ecosystem to say: hey, it's great that we've got a standard, or regulation, that can at least go beyond us claiming our systems are fair or safe, to saying, look, we are actually compliant, and thereby we've met particular kinds of standards? It's gonna be good for the good players. And people who are selling snake oil, they should be worried, I guess. But responsible players in this space can be confident that this is gonna be a positive thing for the industry. The real core problem, if you will, is this: if you wanna deploy algorithms in this space (and the reason you would wanna do that is because they're more efficient: you can do things at scale, you can do them systematically, and so on and so forth), the central barrier is, do they work, and are they fair, in the context of HR systems? So this should be a positive movement in that direction.
Speaker 2: I introduced this episode today with a particular slant on the recruitment side of things, you know, whether that's CV parsing or reference checking or the rest of the tools to do with the candidate experience. But of course, it sounds like we're talking about a lot more when it comes to the offerings of different HR tech solutions, right? We're talking performance, we're talking assessment, we're talking compensation calculations. These sorts of AI audits, it's gonna permeate, it's gonna affect everything, correct?
Speaker 3: Absolutely, yes. I think this is really just the beginning. It's about the use of algorithms within any kind of HR process. A lot of the focus now is on the recruitment side, but, as you said, performance assessment, salary assessments, bonuses, and so on and so forth, monitoring at work, et cetera. All of this is gonna be liable to auditing.
Speaker 2: Changing the subject a little bit, because it's something that I enjoy getting involved with and taking part in: getting together with people, because, you know, hopefully we're through the worst of the pandemic now. And your team hosts a lot of in-person events. For example, I saw some titles: How to Manage the Privacy Risks of AI Systems; AI Fairness and Bias: What Does It Mean for Your Business; and Manage Risks, Embrace AI. So those are some of the titles of the meetups that you guys have. It's good to see that you're encouraging the community to get back together again. And you hold them at a pretty cool location, I understand. Why don't you tell me a bit more about those events, why you do them, and what's the community that you're trying to build?
Speaker 3: We came out of the university, you know; it's collegiate, it's about knowledge sharing, it's about engagement, it's about meeting people and just getting as much exposure as possible to different ideas and perspectives. So it's really at the core of our culture and our personalities to host these events and have these discussions, bringing people from different backgrounds, different stakeholder groups, and different industries to come and talk about the same theme. I think there are a couple of reasons why. One reason, as I said, is just cultural: it really is what we love doing, and I think it's super important to be present, to be communicating, and to have an open forum for people to engage in. A second reason is that, hey, these questions aren't answered. Fairness in AI, privacy of AI systems, what does that even mean? It's a novel area, so these are still speculative questions. In order for us to meaningfully assess these systems and move forward with them, it's fantastic to be able to have these different perspectives and see what comes out of it. So actually these questions are genuine rather than rhetorical; we're interested in these kinds of questions. And I think the third thing is that the community is growing, and we're getting to a critical mass where a lot of these things, which were, let's say, pioneered in the academic environment, are becoming mainstream in industry and even broader society.
Speaker 2: Okay. And I think they're also free, so that's another reason to go along, isn't it?
Speaker 3: Ah, yeah, absolutely. They're all free, everyone can come. They're hosted at a University College London centre for digital innovation in East London; one of the centres is there. And rest assured, I can tell everyone that we are gonna be doing lots more events in 2023. They're all gonna remain free, and please do come along and say hello.
Speaker 2: Okay, there you go. If you go along in the middle of 2023 and it's suddenly 2,000 pounds to attend because they've got some cool speakers and whatnot, you can say, hang on a minute, what about my free ticket?
Speaker 3: You can contact me directly, I'd say. <laugh>
Speaker 2: Okay. Hey, listen, we're almost at the end of this conversation, which makes me sad because I'm enjoying it; I've been learning a lot today, thank you very much. But before we do wrap up, how can our listeners connect with you? Maybe you might wanna share your email, maybe your LinkedIn details, maybe you're on Instagram, et cetera. And also, of course, how can they learn more about all the cool work happening over at Holistic AI?
Speaker 3: So I'm quite active on LinkedIn and Twitter. On LinkedIn you can just find me: it's Emre, E-M-R-E, Kazim, K-A-Z-I-M, co-founder of Holistic AI. I'm also active on Twitter; it's my name with an underscore. You can just add me and then message me. My email address is my name, with a full stop between my first and second names, at holisticai.com. But really, if you just stumble upon our website, you'll be able to see everything there: all of our publications, our blogs, our news articles, our open-source tools, use cases. It's quite a vibrant and open space, as you can see, and hopefully it's accessible. But the easiest way, if you really wanna just have a chat, is to pop me over an email, or reach out to me on one of those forums, or come along to one of the events, and almost certainly I'll find you and we'll be able to have a great chat.
Speaker 2: Love it. Lots of ways to get in contact. Okay, well, that just leads me to say, for today, Em, thank you very much for being my guest on this episode of the HRchat Show.
Speaker 3: Fantastic. Thank you so much. Thank you for this opportunity to talk, and I'm really excited to engage with as many people as possible.
Speaker 2: And listeners, as always, until next time, happy working.
Speaker 1: Thanks for listening to this episode of the HRchat Podcast. There are hundreds of conversations with business experts available for free on the HR Gazette website, Apple, Spotify, and all the main platforms. And remember to like, subscribe, and follow us on social media.