HRchat Podcast

How Chat GPT Will Impact HR and Employees with Shay David, Retrain.ai

March 09, 2023 The HR Gazette Season 1 Episode 552

The guest this time is Shay David, Co-Founder, Chairman and CEO at Retrain.ai, a talent intelligence platform designed to help enterprises hire, retain, and develop their workforce, intelligently.

Questions For Shay Include:

  • ChatGPT has put quite a spotlight on generative AI. Although it’s arguably the most impressive chatbot yet, how does it compare to other generative AI tools? How is it similar to or different from Retrain.ai?
  • Given the huge number of ChatGPT users, how will it change the mindset of the average person/employee when it comes to generative AI?
  • We’re also hearing more about Responsible AI, particularly as it relates to the new AI Audit Law coming to New York City in April, and others like it around the country. What does Responsible AI look like? Why is it important?
  • What’s the vision for Retrain.ai in 2023 and beyond?

About Retrain
 
Leveraging the power of artificial intelligence and real-time market data, Retrain.ai helps enterprises unlock key talent insights and optimize the hiring and upskilling of their workforce.

For employees, the Talent Intelligence Platform seamlessly assesses the skills they have today and the skills they need for the future, and delivers the resources they need to get there.

To book a demo today visit: www.retrain.ai/book-a-demo/

We do our best to ensure editorial objectivity. The views and ideas shared by our guests and sponsors are entirely independent of The HR Gazette, HRchat Podcast and Iceni Media Inc.   

Feature Your Brand on the HRchat Podcast

The HRchat show has had 100,000s of downloads and is frequently listed as one of the most popular global podcasts for HR pros, Talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority & freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.

Speaker 1:

Welcome to the HRchat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts, and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media, and visit hrgazette.com.

Speaker 2:

Welcome to another episode of the HRchat Show. I'm your host today, Bill Banham, and joining me on this episode is Shay David, co-founder, chairman, and CEO over at Retrain.ai, a talent intelligence platform designed to help enterprises hire, retain, and develop their workforce in intelligent ways. Shay, welcome to the show today.

Speaker 3:

Hey, Bill, so great to be here.

Speaker 2:

Thank you very much for being my guest. Why don't you start by telling our listeners a bit about yourself, your career background, and of course the mission of Retrain.

Speaker 3:

So Bill, again, thanks for the opportunity. Very excited to be here. My name is Shay David, Dr. Shay David. I'm a serial entrepreneur; I've spent the last few decades building large-scale software products, enterprise B2B SaaS. My last big project was a company called Kaltura, an enterprise video platform serving customers in both media and telecom as well as enterprise, anybody from Voice of America to Bank of America. That was quite a nice experience, growing and scaling that business from basically startup mode, with three other co-founders, from my living room up to a Nasdaq IPO. So that kept me busy for a while. Prior to that, other companies, including Destinator, a GPS navigation company, and a few other projects. Then, about two and a half years ago, I started Retrain.ai with two fantastic partners, Avi and Isabelle, who are also very experienced and educated entrepreneurs. And we are excited to be solving, and I'm sure we'll talk more about this over the next few minutes and maybe half an hour, the talent intelligence and skills gap problem, with a solution that we believe is an important ingredient in addressing the skills gap emergency that we see globally today.

Speaker 2:

Let's talk about something that gets me super excited. I listen to loads of podcasts on walks with the dogs, going for a run, and whatever else I can, and that's ChatGPT. It's gonna change so many things in so many different industries, not least the world that I work in, and that's business media. How do you think ChatGPT has put a spotlight on generative AI? And maybe you can also share, although it's arguably the most impressive chatbot yet, how you think it compares to other generative AI tools out there. And is it similar to or different from what you guys do over at Retrain?

Speaker 3:

Absolutely. So I was joking with the team earlier this week that ChatGPT is the marketing campaign that we could have never afforded. And I think the reason I say that is because ChatGPT really brought to the headlines, and kind of to the common imagination, the true power of generative AI. In that sense, I think, for good or for worse, it puts right, left and center the power of AI in the minds of the common people. On the other hand, while generative AI is a transformative piece of technology, I think that it has both promise and peril. Beyond the fascination of the first few days, when people started digging down into it, they saw a lot of the risks that are associated with these general-purpose tools, including, I don't know if you saw it, ChatGPT proclaiming its love for some journalists and making factual errors. And that's true not only for ChatGPT, we're using ChatGPT here as a scarecrow, but it's true for Google's version of it, which is called Bard, and for the Microsoft enterprise-grade version of ChatGPT, which is incorporated into its Bing engine. I think that those technologies are fascinating and that they have huge potential, but generative AI is all about pre-training a model on a very large corpus of text. In that sense, it's a semantic set of technologies that learn from past text examples and basically generate, or another way of putting it, hallucinate, an answer. And if the question is, help me find the best snowboarding sites in Europe, then maybe the answer is pretty straightforward. But if the question, just to go back to our topic of HR for example, is how fit is this person for a job, then maybe that type of generative technology is not very helpful. And I think that soon enough we're learning that generative AI is no panacea and no pixie dust. It can learn from past examples and it can generate future responses, but it could also fall into the trap of inventing stuff that didn't happen. A good example, if people want to understand the risk: give it the CVs of two different people and ask ChatGPT to format those CVs for a particular company. What you're gonna start seeing is that for different people, the answers are probably pretty similar. In other words, it takes two different candidates and makes them look sexy and the same. That of course is counterproductive if the objective is to hire the right person. On the other hand, it has tremendous potential. You can give it a simple job description and ask it to make it more descriptive. So I think it has promise and peril, and I think that people are gonna have to learn to work with it. Then comes the set of questions about what makes a piece of technology enterprise-ready. And I think that one of the things that makes a big difference between consumer-grade technology and enterprise-grade technology is how the technology is packaged. ChatGPT, or the Google version of it, Bard, do not have any of the necessary requisites for enterprise consumability. They don't come with any sort of service level agreement. They might work, they might not; nobody's gonna sell you that service level agreement. They don't have any semblance of transparency. It comes back with an answer; you don't know where that answer came from. There is no explainability.
So transparency, explainability, SLAs, performance guarantees, those would be some absolutely fundamental dimensions for enterprises that are looking to consume the technology. And particularly in regulated industries, and particularly in industries like the HR tech industry, it actually might be illegal, definitely in some US states, to use this technology, because there is no auditing, and there are rules and regulations coming into sharp focus that prevent people from using AI for HR-related decisions if there's no audit trail and no transparency. So there's kind of a big risk in using these types of technologies. All that being said, I think it whets the appetite. It shows us the huge power of AI, and it shows us that if we could add all of those requisite components, including an SLA and transparency and explainability and an audit trail, then we're beginning to understand the huge power that these technologies have.

Speaker 1:

Thanks for tuning in to the HRchat Podcast. If you're enjoying this episode, we'd really appreciate it if you could subscribe and leave a five-star review on your podcast platform of choice. And now, back to the conversation.

Speaker 2:

That was one of the most interesting answers to a question that I've heard on this show in a very, very long time. Thank you. Thank you. I'm passionate about that.

Speaker 3:

Well, this is not me, this is all ChatGPT, it just wrote that for me. No, I'm just kidding.

Speaker 2:

<laugh>. Okay, I wanna follow up on a couple of things you mentioned there. One thing that I wanted to ask you about is, if I understood what you said, you said it's kind of not okay to make a resume, I think you used the term sexy, you know, to improve it so it fits the language required by a particular employer. You used the example of two different resumes for two different employees. Why isn't that okay? And isn't it really more about where the in-person interviewing part comes in? So if someone's got the gumption, they've got the confidence to use a tool like ChatGPT or Bard or another one of these tools to get them in front of the employer, doesn't that then give 'em a chance to wow them in the interview and show them how resourceful they are?

Speaker 3:

Absolutely. And you're not gonna hear me criticize the use of tools to make a CV, to use that same term, sexy. But you know, in the investment world, there's this term of putting lipstick on the pig. So long as there's enough pig and not too much lipstick, that's okay. It becomes a problem when there's more lipstick than pig, right? And the comment I made earlier is about how sometimes these technologies, and again, the term generative is the operative word here, generate the text, and the question is, how close is this to reality? I think people have a much easier time understanding this in the generative AI that is used for image creation. I was actually playing with it last night, and you can get pretty, pretty horrible results. Everybody saw this famous example, I think, from the generative imaging technology called DALL-E, also coming from OpenAI. There was this famous example circulating on social media of an astronaut riding a horse on the moon or something like that. So it's clear to everybody that that is completely fictitious. I was playing with it last night and I asked it to draw pictures of salespeople, and to my horror, all of a sudden the image generator generates people with multiple hands, or people crossing their hands but also pointing at the same time, images that look completely messed up and completely fictitious. So when we see something like that in an image, you see a person with three hands, you understand that something went wrong with the generative AI. If you saw that in a CV, you wouldn't be able to tell the difference. And the technology underlying the generation of images and the generation of text is the same technology. So in images it's very clear to us; in text, not so much. So I'm just pointing to the fact that sometimes there are serious inaccuracies in the process of generating text, and we have to be very careful about relying on that. That does not take away from the capability of taking existing text, like a CV, and dressing it up and highlighting or changing the tone of voice or adapting it to a specific task. Those are all fantastic uses of this generative AI technology. And in fact, I want to encourage my colleagues within the industry to really think about the question: what are the tasks that AI is really good at, and what are the tasks that we should leave for humans?

Speaker 1:

Atlas is proud to be a supporter of the HRchat Podcast. Our company enables innovative companies to compete in a global economy, believing that businesses should employ whomever they want, wherever the talent exists. As the largest Direct Employer of Record, Atlas is an expertise-enabled technology platform that delivers flexibility for companies to expand across borders, onboard talent, manage compliance, and pay their global workforce without the need for a local entity or multiple third-party providers. Learn more at atlashxm.com.

Speaker 2:

So as long as there is a human component, someone to, in my industry, sub-edit and fact-check, because, you know, ChatGPT is known for making stuff up as well, isn't it, then those are acceptable uses of a fantastic new tool? Or are you saying we need to be more cautious than that?

Speaker 3:

I think that generally speaking, we need to be pretty cautious, but that caution does not need to halt the industry, and we don't need to basically stop the development of the technology. We just need to be very careful about the way that we're developing it, and provide both enough internal checks and balances as well as probably some regulatory oversight. And I think that many people who have been thinking and writing about these technologies in the past have voiced their concern, you know, just to mention Elon Musk in a speech last week saying, hey guys, this is really dangerous technology. Let's continue to develop it, but let's make sure that we have the proper oversight over it, so that, again, with proper transparency and explainability and oversight and regulation, we understand what we're getting ourselves into. The other metaphor that comes to mind is from Nick Bostrom's book Superintelligence, which I highly recommend for those who are interested in this topic. He uses a metaphor about the sparrows that are having a meeting, discussing how hard the sparrow life is, and what if they could find some technology to help them with all the menial tasks that sparrows need to do. And one of the sparrows suggests they use owls instead, because owls are much more powerful birds than sparrows, and they could get a lot of the tasks done faster. And one of the sparrows says, but by the way, owls actually eat sparrows. And the other sparrows say, yeah, that's right, but they are also very powerful creatures. So I think that's kind of where we are with this type of intelligence. It could make our life very simple, but it could also eat us alive if we're not careful. So I'm always of the opinion of proceed with caution. This is not a call to halt innovation. This is not a call to avoid technology. It's just a call to engage with a curious eye, making sure that we understand that these technologies are so powerful that they could be used, but they could also be misused. And we need to make sure, as an industry and as users of this technology, and then at some point in the process, together with government and regulators, that we develop some minimum standards for both measuring the performance and evaluating the risks of these types of technologies.

Speaker 2:

In a previous answer, Shay, you mentioned bringing generative AI to the common people, I think is what you said. Given the huge numbers of ChatGPT users and Bard users and other tools, how will it change the mindset of the average person or the average employee when it comes to using this kind of technology? Now it's out there, now it's very much in the news, now people are playing about with these technologies themselves. How's that gonna change the perception?

Speaker 3:

For sure. So let me just clarify: when I say the common people, I don't mean that they're common in any way other than that they are not technology people. Most of the people that have experienced AI to date have been people that are themselves within the technology ecosystem, and they've been using AI and machine learning technology in order to incorporate these types of technologies into other products. Now we're getting to the point where AI is powerful enough, and there have been enough applications, so that people who are not technologists can use it. You know, if Microsoft, for example, made a big investment in OpenAI and they're gonna incorporate that in Bing search, then now anybody using Bing search is gonna experience that. They don't need to program anything, they don't need to integrate, they don't need to install any special apps. That's what I mean by common people and common use. And I think we're getting to the point where AI is basically gonna be incorporated into so many products that we're just gonna use it on a daily basis. I think that when we understand the AI ecosystem, and we can use, again, examples from the HR tech and AI intersection, we need to understand that some people are developing baseline technologies, and that's like ChatGPT and OpenAI and Google's LaMDA. Those are large language models; those are the core technologies that enable AI. Some people, and I would include Retrain.ai in that category, are developing a model that is a layer on top of the basic language models. It's not developing capabilities for, say, understanding human language, but it's developing capabilities within a specific domain; in our case, it's the HR tech domain and a skills framework. And then some people, on top of that, are developing specific applications. Those applications are specific tools that use all the underlying stack in order to generate some sort of consumer experience. A good example of that, again in generative imaging for example, is an application like Midjourney. So when we think about more people using it, it's because there are gonna be more and more of those apps. Those are gonna be tools that are not developing the underlying models, but they're using those underlying models to solve everyday applications. And that's where most people are gonna meet AI for the first time, probably on their phones, maybe on their desktop, but over time integrated into the fabric of our lives, you know, in smart refrigerators and in smart cameras and in phones and in voice response systems and in air traffic control. Anything in the industry is probably gonna have an application that relies on the underlying models of generative AI in the future.

Speaker 2:

So you've been talking around this concept then of responsible AI. And we've been hearing a lot about this, obviously, in the news, particularly as it relates to the new AI audit law coming to New York City in April, and others like it around the world. What does responsible AI look like to you? Could you paint a picture for our listeners today? And also, why is it important?

Speaker 3:

So in our case, just a few more words about our product. What Retrain.ai is developing is a skills framework. It's a system and a model that's designed to understand jobs, people, and training pathways. Our objective is to enable large employers to hire faster, retain their best people longer, and develop and train their employee bases, their talent base, with the skills of the future. In order to do that, we've developed an AI model that feeds on billions of data points, including LinkedIn profiles and job descriptions and CVs, and learns and creates a map of vocational opportunity: it understands which 21st century skills are required for each job, where vocational opportunity exists, and how both existing employees and new candidates match against that. We could do that at an employer level, at the state or country level, and then, from a candidate or employee perspective, at an individual level; it all relies on that same framework. In order for that to be a scalable solution, it needs to be completely responsible. And what we mean by that is that the responses of the system need to be reliable, they need to be transparent, they need to be explainable, and they need to be safe enough so that they don't introduce biases. And part of why this audit law is coming into sharp focus in New York is that there have been too many cases in the past where people tried to use AI for HR-related decisions, both on the pre-hire and post-hire side, and the AI, which is no more and no less than pattern recognition, basically comes in and dramatically explodes the types of biases that exist in the organizations. There's been one specific case with a very large tech employer, I'm not gonna name names. They wanted to introduce AI in order to increase the diversity of their team. And in most of these tech organizations, there's a bunch of white dudes working in R&D. So they feed the AI with examples, and guess what happens as a result? The AI looks at examples of who these people have been hiring and recommends new people to hire, but it only learns from examples. So you introduce a bunch of white dudes as input and you get a bunch of white dudes as output. So the exact opposite of the intention is happening. Instead of reducing biases, it introduces new types of biases and really makes those biases explode. The audit law is intended to put a check and balance on that. It says, if you wanna use this type of technology, you have to be able to explain why the algorithm made that type of recommendation. And when you have to explain things, all of a sudden it makes you wanna check the algorithm better. So we think that that's actually a very welcome law, and it's probably just one of a series of laws that should come in. The capability to audit these systems is fundamental. Most of the technologies that you have in the field today are black-box solutions. You put in input, you get an output. There is no way, not for the system, not for the people that design the systems, and definitely not for the end users, to even understand why those recommendations were made. And that is mostly based on very rudimentary design, basically bad design. Systems that are responsible and explainable by design should be able to not only give you the capability to audit the decisions within the system, but actually explain them, trace back every step of the way, and say, this is why the system came up with a specific response.
And if you can't do that, it's probably not a good enough system to use. And unfortunately, many of the systems in the market today are like that. We see that in many domains, not only in HR tech: there are sentencing systems used in the judicial system, there are organ transplant prioritization systems used in hospitals, and there are definitely systems in HR tech. I'm just using three very different examples to show that the risk of bias is fundamental. If people are getting an organ donation, or people are getting their sentences commuted, based on some AI recommendation system that we cannot audit and cannot explain, then good luck to all of us, right? And now that these systems have a potential impact in HR tech, it's high time that these regulations come to the fore, because there's gonna be tremendous change in our industry because of these technologies. And we are trying to promote the view that if we're gonna have that change, we might as well have it with systems that are explainable, transparent, and auditable. Otherwise, the biases are just gonna be too expensive.

Speaker 2:

Okay, I like it. I like it. So I guess the next question for you is, what's next for your team? How are you gonna continue to fight the good fight, Shay? And what's the vision for Retrain in 2023, 2024 and beyond?

Speaker 3:

So our mission is to really help large employers solve the skills emergency. We believe that the skills gap is truly an emergency. I call it an emergency because, on one hand, 77% of Fortune 500 CEOs report access to skilled labor as their number one limiting factor for growth. The short version is employers cannot get enough skilled labor. You know, we think about pilots and nurses as skilled labor, but skilled labor is up and down the skills ladder. People who can drive long-haul trucks are also skilled laborers, and assistant nurses are skilled laborers. And there's just a shortage of talent across the board, in retail, in healthcare, in financial services, in tech. It's not just programmers and pilots and nurses. So our mission is to help tens of millions of people find meaningful employment by being able to overcome the skills gap. Because at the same time that Fortune 500 CEOs are having a hard time hiring skilled labor, there are millions of people sitting at home, and they're not even considered job seekers because they gave up on looking for jobs, because they do not have 21st century skills. So that's the tragedy, and it's kind of a two-sided tragedy. Employers cannot get enough people; people cannot get enough work. How could both of those statements be true at the same time? Because of the skills gap. And our mission is to help solve that skills gap. So 2023 for us is all about scale. We've been working in stealth for the last three years developing this technology. We developed what I believe is the most robust, actionable, granular, largest-ever skills framework, and that technology is now being put to good use. We worked with several very large-scale design partners over the last few years to iron out the kinks in the system and to put it into production. And now 2023 for us is about finding scale. So we're working with the largest of the large, I'm not gonna name names, but some of the largest retailers, healthcare companies, consumer packaged goods companies, and many others in different industries, to really put this technology to work. We're also working with several governments in different places around the world to bring this gospel to consumers. And I think it's all about scale. 2024 for us is going beyond that and connecting that stack also to the development stack and to the education stacks, so that people could start learning new 21st century skills. So in that sense, I think, you know, what we're developing is the ChatGPT of the HR tech world, or of the labor market. It's bringing the power of generative AI to the masses so that tens of millions of people could really use it to not only make their CV look better, but to actually understand what their skills gaps are, so they can find better jobs, whether with an existing employer or looking for a job opportunity elsewhere, and be able to get personalized career pathways and personalized training pathways, and leave to AI what AI does best, which is to help find the patterns, so that people could do what people do best, which is to use their human potential and to maximize it. All of that boosts both the employer's benefit and the individual's benefit, by being able to move core HR processes into the language of skills. So I guess you can say we've got our plates full, and I'm very excited for this future.

Speaker 2:

And just finally, before we do wrap up for today, how can our listeners connect with you, Shay? Maybe that's through LinkedIn, maybe you wanna share your email address, maybe you're all over TikTok and places. And of course, how can they learn more about Retrain.ai? And I guess as part of that, they'll be going on their journey to make sure they're putting more lipstick than pig on their resumes, or processes.

Speaker 3:

Exactly. Absolutely. So, first of all, again, thank you for the opportunity, and thank you to the listeners. I think the best way to find out more and request a demo is retrain.ai; just ask for a demo. I'm always happy to talk with partners, with customers, with the general public, or anybody that wants to learn more. So my email is Shay David, s-h-a-y dot d-a-v-i-d, at retrain.ai. Send me an email, find me on LinkedIn, connect with us on our website. Come see a very cool demo that I think is gonna blow your mind, because this is really bringing the power of AI in a responsible and explainable way to this market. And we're always looking for opportunities: for large employers that wanna use this technology, for partners that wanna resell this technology, for government agencies that wanna understand the power of AI in a responsible way. We're all for that. So any opportunity, please bring it our way, and we're happy to support.

Speaker 2:

Okay, rock and roll. And that just leaves me to say for today, Shay, thank you very much for being my guest on this episode of the HRchat Show.

Speaker 3:

Thank you, Bill. It's always a pleasure listening to you, and it's my pleasure to be here tonight. Good night.

Speaker 2:

And listeners, as always, until next time, happy working.

Speaker 1:

Thanks for listening to the HRchat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media, and visit hrgazette.com.
