HRchat Podcast

Are You Using AI Wrong? A Guide to Avoiding Hallucinations and Leveraging LLMs with Dr. Keith Duggar

The HR Gazette Season 1 Episode 848

The technological landscape is evolving at breakneck speed, and AI stands at the forefront of this transformation. But how can HR professionals and business leaders navigate this new terrain effectively?

Dr. Keith Duggar, CTO of XRAI Glass and co-host of Machine Learning Street Talk, brings clarity to this complex topic through a lens shaped by his fascinating career trajectory. From his roots in chemical engineering to high-frequency trading on Wall Street, then to Microsoft, and now leading an innovative AI startup, Dr. Duggar offers practical wisdom for organizations grappling with AI adoption.

His company, XRAI Glass, emerged from a deeply human need – creating augmented reality subtitles for those with hearing impairments. This mission of "subtitling life" exemplifies how AI can enhance human connection rather than diminish it. Through this work, Dr. Duggar has developed invaluable mental models for understanding large language models that cut through the hype and confusion.

"They're order of magnitude more efficient search engines," Dr. Duggar explains, while cautioning about their limitations, particularly hallucinations – convincingly wrong information that can appear authoritative. His advice? Approach AI as an interactive dialogue, start simple, refine iteratively, and always verify critical information through traditional sources.

Looking ahead, Dr. Duggar envisions a shift toward "constellations of narrow intelligences" rather than ever-larger general models, with specialized AI tools working in concert to solve complex problems. For organizations seeking to harness AI's potential, he recommends practical approaches like hackathons and workshops alongside robust governance frameworks addressing privacy and misinformation risks.

Whether you're an AI skeptic or enthusiast, this conversation offers a balanced perspective on embracing innovation while mitigating risk. Subscribe to the HR Chat Show for more insights on navigating the evolving workplace, and follow Dr. Duggar on LinkedIn or through the Machine Learning Street Talk Discord to continue exploring the frontiers of AI.

Support the show

Feature Your Brand on the HRchat Podcast

The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.

Speaker 1:

Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit hrgazette.com.

Speaker 2:

Hello and welcome to the HR Chat Podcast. I'm Pauline James, founder and CEO of Anchor HR and associate editor of the HR Gazette. It's my pleasure to be your host. Along with David Creelman, CEO of Creelman Research, we're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on organizations, jobs and people.

Speaker 3:

In this episode we speak with Dr Keith Duggar, CTO of XRAI Glass and co-host of Machine Learning Street Talk, one of the world's top AI podcasts. Keith brings a unique perspective, shaped by his work at Microsoft, on Wall Street, and now at the forefront of AI-powered augmented reality. Dr Duggar shares how AI is transforming human interaction, what it means to subtitle life, and how organizations can practically and safely leverage generative AI tools. He walks us through helpful mental models, the risks of hallucinations and why a thoughtful, hands-on approach is key.

Speaker 2:

Dr Keith Duggar, we're so pleased to have this time with you today, and we welcome this conversation to support our community in learning more about AI: how they can leverage it in their day-to-day practices, and also as they look to inform how they scale their approach within their organizations and for themselves personally. Could you take a few minutes to tell us about your background and your current work? I understand that you worked in finance.

Speaker 4:

Yeah, well, first, thanks for having me. I appreciate the opportunity to, let's say, extol some of the positive virtues of AI and get people interested in embracing it. So yeah, I was educated as an engineer, actually as a chemical engineer, but I also minored in computer science and, as chance would have it, all the work I was getting was for software, you know, software engineering and applied math and that sort of thing. So I tried out academia for a couple of years, I postdoc'd, and it just wasn't really a fit for me. After a bit of soul searching, I had a friend who worked in finance on Wall Street, and he said, well, why don't you send your resume to a recruiter? And he gave me a name. I sent the resume, and a few weeks later I had a job, because there's high demand for that skill set there. I think I spent about eight years doing trading.

Speaker 4:

So equity trading, all types of equity and equity derivatives, mainly for market making and high-frequency trading. And so, you know, I was one of those evil quant rocket scientists, or whatever, that got scapegoated for the crash in 2008 and all that. Truth be told, it's not our fault; it's the same old story. The people with all the power, the big people, right, the banks, the politicians, all that, they needed a scapegoat, so that was us. And at some point I just realized that I was kind of just moving other people's money around and, every time I moved it, taking a little bit, and I felt like it wasn't really contributing to any tangible, concrete benefit I could perceive.

Speaker 4:

So I got out of that business and went to Microsoft, actually doing technology strategy. The cool thing was it was for Microsoft's manufacturing customers, their largest manufacturing customers, which really brought me back to my engineering roots. Loved it, had a great time there. And I met Tim there, by the way, my co-host on Machine Learning Street Talk; we met at Microsoft. And by way of him and meeting his brother, we came up with this idea for XRAI Glass, which is the startup I'm now the CTO of. It's xrai.glass, if anybody wants to check it out. So we started that up, I went full time there, and that's where I'm at right now.

Speaker 3:

And, by the way, just to be clear, XRAI Glass is an AI company, and what does it do?

Speaker 4:

Yeah, so it's XRAI, which we call, you know, X-Ray, but it stands for extended reality artificial intelligence. And the mission started with, actually, Tim's grandfather, who's 90-something years old and has lost his hearing, but he's cognitively fine. At Christmas one year, they were seeing what an isolating experience that is, you know, because he doesn't know sign language, he doesn't know how to read lips, and they thought, well, hang on a second, he watches TV all the time with subtitles.

Speaker 4:

Why can't we subtitle life? So that was the original mission: can we just subtitle life? Can we, through these AR glasses that were just starting to become a thing, display in real time subtitles of what people are saying around you, so you can hold your head high, you can look at the people you're talking to, you can engage, but just have this augmentation that makes up for the hearing loss? That was the original vision, and it's since expanded significantly, really based on demand from customers and companies and enterprises, into a lot of other things. But in essence it's software that, in real time, transcribes and translates and applies artificial intelligence to speech-to-text and text-to-speech for all kinds of applications.

Speaker 2:

What an interesting journey. I think it speaks to wanting to really add value with the work that you're doing, and how you've been able to do that with the startup. Can you also tell us a bit about the mission of Machine Learning Street Talk?

Speaker 4:

Yeah, so that was the brainchild of Tim. When we were at Microsoft, he put together an internal set of paper review calls, where folks at Microsoft would get together and go over the latest machine learning and AI papers, and the mission at that time inside Microsoft was: let's just explore and learn what's going on, as a team, a small group of people at Microsoft. He was posting those on the YouTube channel, and they started to gain traction. He and I met early on and just kind of hit it off.

Speaker 4:

We're very complementary to each other intellectually, so it was a really fun and good fit for us. And the mission of Machine Learning Street Talk really became to explore and talk with people in the field, in the trenches, you know, but in a way that's friendly to a wide audience. It's a difficult balance: we want to have technical depth, but try, if we can, to present it in ways that are understandable to deep technical folks as well as hobbyists, enthusiasts, business executives, really anyone. So that's what we try to do. We try to communicate what's happening in AI and machine learning to a broader audience, but with technical depth.

Speaker 3:

And I want to sort of underline, particularly for the HR managers out there: this is a wonderful example of peer learning. You just get people together and they collectively help drive forward their learning in an area.

Speaker 4:

Yeah, yeah. I mean, I think my two, let's say, intellectual passions in life are learning and problem solving, and so for me the partnership with Tim and Machine Learning Street Talk has just been a godsend. It's really transformed my life, because it makes it so fun to learn, and I get this privilege, right, of talking with leaders in the field and practitioners in the field and everybody, and it's fun.

Speaker 3:

Now, the main thing on managers' minds these days is the large language models like ChatGPT, and they need some kind of mental model about: if I'm going to have this tool, what is it likely to be good at? What might it be able to do if I put some effort into fine-tuning the prompts? What's a waste of time because it just cannot do that? Do you have a mental model, or can you talk about the different mental models people have, to make sense of what this tool is?

Speaker 4:

Yeah, I think you probably need a couple of mental models, because they are, in a sense, very general, so you need to think about them in a couple of different ways. The first way to think about them is that they are literally language models. What this means in effect is that they can communicate with you, and you can communicate with them, using natural language, and so they have a good ability to process and digest and produce well-formed natural language, in a variety of languages. Whereas before them, the primary interface to computers would either be, say, a GUI, some graphical interface, or some type of programming language or domain-specific language, all this type of stuff, which you would have to learn as a person, because you wouldn't start out knowing those. They're not your first language, they're not your natural language. So the first thing is a transformation of the interface. Now, I do want to say up front, and we'll probably get into more detail about this later, that there are trade-offs, like anything in life. There's no free lunch. By communicating in natural language, you lose the precision and the specificity of those machine languages and programming languages, and so it introduces ambiguity, flexibility and things like that. So there are trade-offs, there are pros and cons, but as a first pass, it's this language interface to and from.

The second thing is that, almost, I wouldn't say by accident, but in order to learn language, what they do is train these models on pretty much any language that's available in digital form, so this is like the entirety of the internet and any other library sources and things like that. Along the way of learning language, they've also ingested just a massive quantity of information from all this language, and neural networks just naturally compress and form structures and representations of this knowledge. So there's a sense in which they're a massive repository, or compression, if you will, of all the knowledge they were fed when they were learning language, and that allows them to act as really excellent search engines. And not just search engines, but search engines that can, first of all, understand what you're asking in natural language and then produce back results and examples and things like that that are tailor-made to your question, because it's like they're sort of ad-libbing and putting together all the pieces to make exactly what you asked for. And just as an example of how transformative that is: before LLMs, for example.

Speaker 4:

Suppose I wanted to learn, and I'll use programming just because that's what I do from day to day, how to display a dialog box on Android or something like that. What would I do? Well, I'd have to go to a search engine like, say, Google, type in some of those keywords, scroll through the results to find an article.

Speaker 4:

Suppose it's a Medium article. I'd go read this Medium article. I'd have to slog through like four paragraphs of the person telling me why this is cool and why I would want to do it, which is unnecessary because I've already decided I want to do it, and then eventually slog through more material to finally get to an example that maybe wasn't exactly what I wanted, it was slightly different, and then I'd have to mentally transform that myself. Large language models completely streamline all that into a single query and result, and so they're just an order of magnitude more efficient search engines. So I'm going to pause here. There are other mental models, but let's pause on these two and then maybe talk about them a little bit.

Speaker 3:

The one thing that I would dig in on a bit is the fact that language is going to be ambiguous. Sometimes it gives you exactly what you want, and sometimes people get frustrated because, they say, that isn't really what I wanted, and so you have to go down some kind of path of rephrasing the questions. And I've seen people have very long prompts sometimes, Tim maybe being one of them. But what is your thought about how you interact with it when it doesn't give you what you want?

Speaker 4:

So you just hit the key word there, which is interact. This is an interactive process. You should always approach a dialogue with an LLM as an interactive process.

Speaker 4:

And, you know, keep it simple. You ask a question. There are sort of tricks that you'll learn over time, and I'm hoping one day there's training on how to do this, but you'll learn tricks of how to phrase things, just like you had to learn how to phrase Google searches, to give it a good shot at getting to where you want to go initially. But keep it simple, keep it concise. You fire off something, it gives you an answer. If it's a bit off, or even far off, from what you were looking for, then you iterate this process. Just keep going with it. Say, you know, thanks, but that's not really what I was looking for, I'm really more asking about this. And so you do this kind of back and forth with it, and you can, let's say, triangulate to where you're trying to get to. That's the process you should follow initially.

Speaker 4:

Now, like you mentioned, tim with his massive prompts, so those come from after you go through this process. You go through this kind of iterative triangulation. You get to where you want to go. You'll learn kind of a prompt that you could have given it at the beginning, almost like, let's say, a composite of all the prompts that got you to where you wanted to go. You'll have an idea of a prompt that you could have given it to really get directly to this answer. And what you'll want to do if you think you're ever going to use this again is kind of copy those prompts somewhere, clean them up a bit or even ask the LLM itself to do that compiled prompt and you put it in there and then you're starting farther ahead in the pathway, the journey, if you will, than you would have been otherwise.

Speaker 3:

By the way, I really like that idea: gosh, I'm struggling with these prompts, and I want to put them all together, and I think, well, how am I going to do that? Well, I just ask the LLM. In the early days, and you still hear this, people talked about how it's just a stochastic parrot, or it's just super-powered autocomplete. Do you want to explain what those mental models are, and what you think of them?

Speaker 4:

Sure. So the stochastic parrot: the idea there is that, as we said, it's been trained on this massive corpus of all, or most of, the available, hopefully non-copyrighted, but who knows, material from the web. It's been trained on this huge corpus and, of course, even though these models have hundreds of billions of parameters, that's still not enough to store petabytes of information. So what they're doing is finding patterns, compressions, projections. They're really distilling and digesting all that information, and that, necessarily, is a lossy process. So they've got this all compressed down, and you can think about the parrot part like this: if anybody's had parrots, sure, they can repeat some of what you say, but there's some loss there. It's not quite exactly right; it sounds really close, but it's usually parts of what you said, sub-sentences and phrases that they've heard many times over, like "Siren is a pretty bird", that kind of stuff. Curse words usually show up pretty often in these things. And so that's the process: the LLM is digesting that information, breaking it up, creating compressions and billions of artifacts, little Lego blocks that it can reassemble.

Speaker 4:

And then we get to the second part of the stochastic parrot. We've got the parroting there, and the stochastic part is: it's got all these pieces, and now it needs to reassemble them. But to a degree that information has been lost. It's been converted into, let's say, networks of probability. And so it starts putting out these parts: it puts out a part, then it looks at what it's put out so far and, probabilistically, it sort of rolls some dice and flips some coins and decides what the next part will be. That's the stochastic part of it. So this is what people mean when they say it's a stochastic parrot: it's digested everything into these parts, and then it rolls some dice and strings together the parts, like a parrot would, to form a response.
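To make the dice-rolling concrete, here is a toy sketch of stochastic next-token selection. The vocabulary and scores are invented for illustration; a real model computes scores from the entire preceding context over a vocabulary of tens of thousands of tokens.

```python
import math
import random

# Invented next-token scores (logits) for a tiny vocabulary; a real model
# derives these from everything generated so far.
vocab = ["a", "pretty", "bird", "ship"]
logits = [0.3, 2.1, 1.4, -0.5]

def sample_next(temperature=1.0):
    """Softmax the scores into probabilities, then roll the dice."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(vocab, weights=weights, k=1)[0]

# Same context, repeated runs: usually "pretty", occasionally not.
# That variability is the "stochastic" in stochastic parrot.
print([sample_next() for _ in range(8)])
```

Lower temperatures concentrate the distribution on the top-scoring token; higher temperatures flatten it, which is one reason the same prompt can come back with different answers on different runs.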

Speaker 3:

Okay. And I suppose what's good about that is, if you're interested technically in what it's doing, without getting really into how neural networks are designed, it gives you a pretty good understanding of what's going on under the hood, and it ensures that you don't think, well, I'm dealing with a human behind the screen here. So it highlights some of the limitations. In general, though, I don't feel it's a particularly useful mental model, because I think it maybe undermines; it makes you think that it's less than it is.

Speaker 4:

Well, I mean, yes and no. There certainly are limitations to this process. For example, hallucinations are a well-known problem with large language models. You ask it for a reference, like, hey, LLM, I heard that David wrote a paper recently about XYZ. It doesn't, in a sense, have that information anymore, because it's been compressed and chopped up and stripped away, and so it's going to give you an answer.

Speaker 4:

It'll string together: oh yes, David wrote a paper entitled Transphysical Appropriation of Such and Such, and it'll give you something that looks really convincing. It's like, wow, that sounds like a great paper. Heck, you could probably even ask it to generate an abstract, and it would seem great. But it's all fantasy. It never existed, because once you've chopped up the world into this probability space, that space is much larger than the actual world. So I think it's good to know that those limitations are possible: you have to be aware that maybe what you're getting is a stochastic combination of parts, and not something that was really there.

Speaker 2:

Along those lines, what I'm hearing and understanding is that it's definitely not autocomplete, that it's actually making a prediction around what the best response would be, which explains both hallucinations and also, as you mentioned earlier, that we can get much more sophisticated in our prompts and how we leverage them. But the exact same prompt in the same system can actually elicit a different response, which can be confusing for individuals as well.

Speaker 4:

I mean, you're right about that. And that was the other mental model, which is some form of autocomplete: autocomplete on steroids, or whatever the phrase is. The way in which it differs is that traditional autocomplete would really only be looking at a very tiny context, first of all. It would be looking at the last couple of words that you typed and trying to find the next few words. The context in these large language models is orders of magnitude larger than that, and it's also much more sophisticated and complex. It doesn't just look at, say, the last part of what came before. It doesn't even look uniformly. It can be intelligent about how it looks around in there to find the next matches, and it's just a far more sophisticated and much larger model.

Speaker 4:

So it's not a fair comparison in terms of sophistication. It's a fair comparison at the lowest level, which is that, yes, at the lowest level, this machine with hundreds of billions of parameters is looking at the context and deciding the next part, the next token, if you will, and then it moves forward, and moves forward, and moves forward, much like an autocomplete would. But that's about the end of the comparison. So you're right that, in these very low-level senses, it's correct. But these comparisons do distract from, and sometimes are meant to diminish, the capability that's been added on top, just the massive difference in capability.

Speaker 3:

Now, if we look at different kinds of AI: people are familiar with, as I say, the large language models, the ChatGPTs of the world, but they may also remember that recommendation engines were the classic example of an AI, as well as, more recently, something like AlphaGeometry, which is doing extremely well on difficult geometry questions. So are there different categories of AI that we should be thinking about, where this category is quite different from that category?

Speaker 4:

Absolutely. In technical parlance, if you will, we refer to things that are called narrow intelligences. These are artificial intelligences that are tasked to do something very specific: folding proteins, performing mathematical integration, recommending videos, et cetera. And the advantage of narrow intelligences like that is that, when you train them, all your resources, all your parameters, all your computation, all the energy you're using, all the money you're spending are focused on that one task. So instead of being a jack of all trades but master of none, it's a master of one thing.

Speaker 4:

And large language models are, in a sense, both narrow and general. They're narrow in that, at least out of the box, the common ones really, up until recently, only understood text. They only understood language. They really couldn't make much use of, say, audio data or video data. Now they're starting to, so people are training what are called multimodal models, meaning a model has multiple modes of input data that it understands. And then, on the other hand, they're general in the sense that natural language is itself flexible.

Speaker 4:

You can ask all kinds of questions in natural language and pose all sorts of problems, and you might get an answer from the LLM. But it turns out that the answers really come from what you can imagine as a block of Swiss cheese. It's taken all the knowledge that was in its training corpus, compressed it and chopped it up, and it can kind of reassemble it, but there are a bunch of holes in there. And so for some questions you ask, you'll get great answers. For some questions, you'll get bad answers. For some, you'll get answers that really seem right but are subtly wrong, and sometimes in dangerous ways, if you just apply them without domain expertise.

Speaker 3:

Yeah, and that's, I think, another key lesson for managers when they're educating their employees, as well as using it themselves: this danger of answers that are just wrong in a subtle way. They're very convincing, but you do have to apply domain expertise.

Speaker 4:

Yeah, absolutely, absolutely.

Speaker 2:

Keith, can I ask if there are any developments in the world of Gen AI that have surprised you lately?

Speaker 4:

Well, honestly, believe it or not, the things that are surprising me lately are some of the geopolitical and social phenomena happening in this sphere. So, for example, I was really quite shocked that DeepSeek was as open as they were with their methodologies and techniques and code. I didn't expect that. Because of the geopolitical tension between the United States and China in particular, I wasn't expecting a Chinese company to be so open, and it just happened again with the G1 robotics release, open-sourcing the models and methods. Kudos to them. I mean, this is really amazing, and I think it's actually good for the world.

Speaker 4:

I think the best path forward for humanity is widespread, distributed, open, diffuse development by everyone: by all countries, all corporations, all hobbyists, all enthusiasts. So things like that are surprising me. In terms of technological developments themselves, I want to say no, but it's not because I don't think AI has achieved great things. I say no because I had great expectations for AI, and I didn't mean to say AGI; I've had great expectations for AI. So I think this is kind of expected, in the sense that, yeah, progress is tremendous and very cool, but otherwise it's not so surprising that we've made it this far, or even that we've made it this quickly.

Speaker 3:

And now everyone's talking about agents as being the next big thing where the LLM will take a more active role in actually doing things in the world and controlling your computer, just as if you were controlling it. What's your thought about that technology?

Speaker 4:

Yeah. So I'm a little bit concerned about adding agency, essentially adding in these control loops where AIs can directly control more and more, and the reason why is that we don't know how to engineer these things well enough to be reliable enough for all use cases. For some use cases, sure: an AI that changes your desktop background with cool AI-generated images composed from all the latest news feeds and X posts and everything like that, that's fine. There's no real chance of harm there. But I worry about hooking up agentic systems to things that have the potential to do harm, and I just don't think we're at a level of sophistication of AI engineering yet to do that. So I think it's a good goal, but it needs to be pursued with caution, and with transparency as well. I mean, I think people should know if an AI is going to start taking control of certain activities, or if the content that they're consuming was generated by AIs, that sort of thing.

Speaker 3:

Yeah. And even in the travel agent example people like to use, where the damage wouldn't be that great, except that you were hoping to go to Australia and you end up in Sydney, Nova Scotia (which can happen with humans as well), nonetheless, to hook it up to actually making financial decisions of any kind, buying things on your behalf, as you say, the engineering may not be there to do it reliably.

Speaker 4:

Or even, for example, in the travel agent case: maybe it books a sequence of flight legs that takes you through a certain airport where you don't have what you need to actually pass through, like there's a visa requirement that you haven't completed or aren't able to complete, or vaccinations that you do or don't have. Those are the types of very detailed engineering considerations that can slip through, in particular with LLMs, but also with other types of AI systems that have a degree of probabilistic, stochastic activity.

Speaker 2:

Just to shift gears: can you share with us how you use Gen AI in your own work?

Speaker 4:

Sure, yeah. In the first place, I use it as a really great search engine. I still use traditional search engines, but a lot of my, let's say, exploration activities have shifted over to large language models. The way I'm using it today is I start with a large language model, I ask it some questions, and it gives me some information. I'll usually go and try to verify: certain things will smell iffy to me, like, I'm not sure about that, and I'll paste it into Google and see if I can get it confirmed somewhere. Or I'll look at the links, if I'm using one of the LLMs that provides reference materials, and try to go confirm things. So that's one thing: this exploration and searching.

Speaker 4:

I've found it very useful for generating tailored boilerplate code. If I need a function that does something that I just don't feel like writing, like I really don't feel like reading through the docs and digesting them and transforming them into code, I can get that initial prototype from the LLM. Now, it typically takes quite a bit of further work on my part to make sure it hasn't got any bugs, that it does what I want it to do, that it's updated to the latest interfaces, and to expand it in ways that would have been difficult for me to describe in natural language anyhow; it's just easier for me to code that myself. But it saves me a tremendous amount of time in that initial prototype construction. And then I use it for fun too. I have fun trying to generate images in particular; that's what I've played around with the most. So yeah, things like that.
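As a concrete sketch of that prototype-then-verify loop, again assuming the OpenAI Python SDK; the model name and the requested function are illustrative stand-ins, not Keith's actual setup.

```python
# Ask the LLM for tailored boilerplate, then verify it yourself.
# Assumes: pip install openai, and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
draft = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content":
        "Write one self-contained Python function slugify(title) that "
        "lowercases a title and joins its words with hyphens. Code only."}],
).choices[0].message.content
print(draft)

# The human step Keith describes: read the draft, run it against your own
# test cases, and fix bugs or outdated interfaces before the code goes
# anywhere near real use.
```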

Speaker 2:

Thank you. And how much does this set you back financially? What systems do you invest in? What's the financial investment that's required? Just to provide a bit of an anchor for ourselves, for our audience.

Speaker 4:

Yes, so first of all, I don't pay 200-plus bucks a month for the XYZ awesome LLM. The ones that are free and/or the lower-tier pay-as-you-go type things are fine for me. I do have an OpenAI account, I've had one for some time, and I probably spend maybe 20 bucks a month or less on the activities that I do there. I use Gemini, which in various forms is free. I use ChatGPT; even though I have an OpenAI account, I sometimes use the free version of it. I haven't played around much with DeepSeek, I just didn't really feel the need, because I was getting what I needed for my work, at least, from the other ones. I play around with it when we do puzzles and problems and things like that, to compare, and Grok as well. But primarily, I would say, I use Gemini and OpenAI.

Speaker 3:

And if we look ahead two or three years, how will the capability of AI likely be different from today, so sort of end of 2026 or into 2027?

Speaker 4:

Well, it's really hard to predict the future, but what I think will start to happen is that we'll have more and more, let's say, narrow intelligences trained and fit for purpose, and so people will be using a variety of narrowly intelligent systems for different purposes and then integrating together the results.

Speaker 4:

And I hope that people start providing advanced user experiences to make that easier. Let's suppose you put in a query that contains some natural language and some equations and maybe some images: the system would analyze that into its different modes, send different combinations of those modalities to narrow intelligences, pull the information back and reassemble it. Because we're already seeing, and this is no surprise, I mean, it shouldn't have been a surprise, but maybe it's a surprise to some people, that it's better to have a smaller model narrowly trained on your goal than to use a massive generalist model. So I think that's where we're going to head: more, let's say, constellations and systems of narrow intelligences, rather than ever-bigger general models.
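A toy sketch of the kind of routing layer he envisions; every handler here is a hypothetical stand-in for a narrow model, not any real product's API.

```python
# A "constellation" router: split a mixed query into modalities, send each
# part to a narrow, fit-for-purpose intelligence, then reassemble.
from dataclasses import dataclass

@dataclass
class Part:
    modality: str  # "text", "equation", or "image"
    payload: str

# Hypothetical stand-ins for narrow models.
def solve_equation(src: str) -> str: return f"[math engine result for {src!r}]"
def describe_image(src: str) -> str: return f"[vision model result for {src!r}]"
def answer_text(src: str) -> str:    return f"[language model result for {src!r}]"

ROUTES = {"equation": solve_equation, "image": describe_image, "text": answer_text}

def route(parts: list[Part]) -> str:
    # Dispatch each part to its narrow intelligence and stitch the results.
    return "\n".join(ROUTES[p.modality](p.payload) for p in parts)

print(route([Part("text", "summarize our staffing question"),
             Part("equation", "x**2 - 4 = 0"),
             Part("image", "org_chart.png")]))
```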

Speaker 2:

Thank you. And your thoughts on the capability of robots, and where that's headed?

Speaker 4:

Yeah, so that's one thing that has surprised me, since you asked what surprised me: how much advancement and resources and focus are going into humanoid robots. I always figured we're going to create drones and that's going to be it; people are just going to make more and more drones of all different kinds, things that don't resemble humans at all, whether it's quadcopters or things on wheels, or things on four legs that have wheels, all kinds of crazy stuff.

Speaker 4:

But there's been really significant advancement in humanoid robots. I was a bit confused by that until I had a chat with a friend of mine, and she helped me see kind of the obvious, which is: hey, the world is already designed around the humanoid form. So it makes total sense that you would have humanoid robots, because they can fit into that environment, especially if robots are interacting with people or helping people do the tasks that people would ordinarily do. So I've been really quite surprised at the growing advancement and investment put into robots, and it's super exciting. They're amazing. I've seen just absolutely amazing videos online; maybe other people have seen those too. And this gets back to the agentic question, which is: we need to be a bit cautious here. Even forgetting about AGI doom scenarios, when you start unleashing robots, they can just cause damage, accidentally. I do want this to proceed with caution, but it's super exciting.

Speaker 2:

You've worked in big organizations. With that in mind, do you have advice for organizations that are wanting to respond to the opportunities and address the risks?

Speaker 4:

Well, there are two parts to that question: the opportunities and the risks. For the opportunities, what I would do is encourage organizations to hold hackathons and workshops to really give people hands-on experience with these tools. Of course, the software providers like Microsoft and whatnot are incorporating AI, both narrow AI as well as, let's say, LLM technology, into their products. But just getting it into the hands of people and letting them try it out, in sort of coached sessions, right, where somebody who's good at these things walks them through what they can be used for, tailored to their daily experience, is what I would highly encourage. Because I've run into a number of people who just haven't tried it, either because they weren't quite sure how to use it or they really hadn't imagined the possibilities, and when I walked them through it, in even just a couple of hours, they're like, wow. Which is, I think, the path.

Speaker 4:

I think the way in which we mitigate the catastrophes that people are worried about is that we think now, in very detailed and thorough ways, about reducing the harm that's caused by AI. AI is already causing harm today; we already know that. We've seen the stories of the harm that algorithms and whatnot inflict through social media and many other avenues. So we focus on reducing the harms. For an organization, what's a big harm? Well, data privacy, data leakage. That's one angle. So make sure that you've really got experts in privacy and data isolation, that you're creating data firewalls, and that any AI systems you're using internally are governed correctly and robustly. And then, on the output side as well, make sure that you have in place the necessary human curation, oversight, surveillance, and I don't mean surveillance in a negative way, but just keeping an eye on the content that's being produced, watching out for concerning things like misinformation or hallucinations.

Speaker 2:

Thank you so much. I really appreciate the thoughtful insight and discussion today. This has been a shorter discussion. I think David and I could talk to you for the rest of the day and keep learning. For those who are interested in learning more about your work, how should they follow you?

Speaker 4:

So I am on Twitter; you can find me there, it's Dr Duggar, but I'm not that active on, sorry, X. I'm not that active on X yet; I'm becoming more active, and you're mostly going to find a couple of posts and some poems and whatnot, but I will be more active there in the future. So you could do that if you wanted to. You could also join our Machine Learning Street Talk Discord channel and check out our podcast and YouTube there. I'm pretty active in that Discord community, so if you join the MLST Discord and hang out in there, I'm around; we can chat there. Otherwise, you can check out my company: that's XRAI Glass, G-L-A-S-S. Check it out, see what we're up to, try the software. It's in the Android store and the Apple store. Those would probably be the best options right now.

Speaker 2:

Very good. Are you on LinkedIn?

Speaker 4:

Yeah, I'm on LinkedIn. Let me just check here. I think that's, oh, it's Dr Keith Duggar, D-R-K-E-I-T-H-D-U-G-G-A-R, on LinkedIn.

Speaker 1:

Thanks for listening to the HR Chat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media and visit hrgazette.com.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

HR in Review (HRreview)
A Bit of Optimism (Simon Sinek)
Hacking HR (Hacking HR)
A Better HR Business (getmorehrclients)
The Wire Podcast (Inquiry Works)
Voices of the Learning Network (The Learning Network)
HBR IdeaCast (Harvard Business Review)
FT News Briefing (Financial Times)
The Daily (The New York Times)