HRchat Podcast

AI Beyond the Basics: Scaling Your Department's Capabilities with Dr. Tim Scarfe

The HR Gazette Season 1 Episode 823

The gap between casual ChatGPT users and organizations with massive AI teams seems unbridgeable for most departments. But what about that middle ground where small teams can leverage AI effectively without specialized expertise?

Dr. Tim Scarfe, CEO of Machine Learning Street Talk, discusses practical AI implementation for smaller organizations with hosts Pauline James and David Creelman in this HRchat conversation. 

Running a sophisticated content production operation with just 15 team members and spending $1,500-2,000 monthly on AI tools, Tim offers a realistic roadmap for departments looking to move beyond basic AI usage.

"ChatGPT is a reflection of you," Tim explains. "It makes dumb people dumber and smart people smarter." This insight highlights why some users remain frustrated with AI while others create remarkable value – the difference lies in approaching AI conversations as iterative journeys rather than one-shot interactions.

Most surprisingly, Tim suggests that building internal AI systems doesn't necessarily require specialized AI expertise. Rather, curiosity and experimentation can take departments far, especially when leaders understand that AI itself can help explain how to use AI more effectively. Tim cautions against waiting for "perfect" technology before diving in, warning that we face a potential digital divide similar to what occurred during the 1980s computing revolution.

For HR leaders and department managers, the conversation offers a practical middle path between doing nothing and pursuing enterprise-wide AI transformation. By starting small, experimenting continuously and focusing on specific use cases, even modestly sized teams can create significant value with today's AI tools.


Feature Your Brand on the HRchat Podcast

The HRchat show has had hundreds of thousands of downloads and is frequently listed as one of the most popular global podcasts for HR pros, talent execs and leaders. It is ranked in the top ten in the world based on traffic, social media followers, domain authority and freshness. The podcast is also ranked as the Best Canadian HR Podcast by FeedSpot and one of the top 10% most popular shows by Listen Score.

Want to share the story of how your business is helping to shape the world of work? We offer sponsored episodes, audio adverts, email campaigns, and a host of other options. Check out packages here.

Speaker 1:

Welcome to the HR Chat Show, one of the world's most downloaded and shared podcasts designed for HR pros, talent execs, tech enthusiasts and business leaders. For hundreds more episodes and what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.

Speaker 2:

Hello and welcome to the HR Chat Podcast. I'm Pauline James, founder and CEO of Anchor HR and Associate Editor of the HR Gazette. It's my pleasure to be your host. Along with David Creelman, CEO of Creelman Research, we're partnering with the HR Chat Podcast on a series to help HR professionals and leaders navigate AI's impact on organizations, jobs and people. In this episode, we speak with Dr. Tim Scarfe, CEO of Machine Learning Street Talk, a platform known for deep, technically rigorous conversations with some of the world's top AI researchers. Tim shares how he's using AI day-to-day to scale his work and offers practical insights on how we can move beyond casual use. We explore how to get started without a technical background, what trade-offs to expect and why experimenting now matters.

Speaker 3:

Thanks for listening to this episode of the HR Chat Podcast. If you enjoy the audio content we produce, you'll love our articles on the HR Gazette. Learn more at HRGazette.com. And now back to the show.

Speaker 2:

Tim, so pleased to have you with us today. Can you briefly introduce yourself for our audience?

Speaker 3:

Thank you very much for inviting me on, Pauline. I'm Tim Scarfe and I run the Machine Learning Street Talk podcast, which is probably the most galaxy-brain technical podcast of the very large ones. I get to interview some of the best AI scientists in the world, we have a wonderful community, and I have a background building several startups. I've also worked in big corporations like Microsoft.

Speaker 2:

Thank you. We're really excited about this conversation. To help us educate ourselves and our audience, could you begin by telling us a bit about your own organization and how you use AI?

Speaker 3:

Yes, so I'm a founder of Machine Learning Street Talk and we use AI pretty much for everything. I think that AI helps founders more than large corporations at the moment, and it's because you know what it's like when you have big teams and you build sophisticated software. You get bottlenecks because there's this knowledge-sharing bottleneck, essentially, where you have to explain your work to everyone else and they have to review your check-ins and they have to understand everything.

Speaker 3:

Ironically, it's faster if you're on your own and doing what I do. I have to essentially wear 20 hats as one person. So I'm an expert audio engineer and motion graphics designer and video editor, and I'm reading research and I'm doing interviews. I'm doing all of these things and it's just a lot for one person to do. But it's cheaper for me to use AI in many cases than it is to hire a separate expert, because of the sparsity problem. You know, why would I hire an audio engineer? I mean, don't get me wrong, it'd be great if I could pay them loads of money and get a really good one. But there's always this problem that the best people wouldn't want to work for me, because they would make their own YouTube channel or they would be earning some massive salary somewhere else. So there's a huge gap, and AI fills that gap.

Speaker 2:

Thank you. What is the size of your organization?

Speaker 3:

Well, we're very small, so we have a team of about 15 video editors and that's the bulk of the team, to be honest, and then I'm doing most of the other stuff.

Speaker 2:

Thank you, and could you tell us how much you're spending on AI a month? How much of an investment is this financially?

Speaker 3:

Probably at the moment around $1,500 to $2,000 a month.

Speaker 4:

And that would be US dollars?

Speaker 3:

Yes.

Speaker 4:

The reason that I wanted to hear all that Pauline's been digging into is that many of our listeners have some personal experience with large language models like ChatGPT, so they know the basic uses. They also read about what some giant companies with huge teams are doing. But your experience is probably more in the ballpark of, say, their department or their part of the organization, and so it's really interesting to think: what can we do if we want to be more advanced than just making casual use of ChatGPT, but we don't have some huge team to support us in our applications of AI? Why don't we talk about some of your uses? What's one of the uses you'd like to start with?

Speaker 3:

Well, I think a lot of people when they use ChatGPT, and don't get me wrong, the amazing thing about large language models is their flexibility. You can use them for literally anything. They're multimodal: you can feed videos into them, audio, images, or any combination thereof. You can use them for writing your social posts on Twitter. You could use them for planning your shopping, or even financial trading if you want. It's overwhelming and, of course, it works better for some things than others. And I think a lot of people at the beginning get trapped in the beginner's mindset, where they're using chatgpt.com and they're just doing formulaic things, cookie-cutter posts, and unfortunately, because of the way the technology is, especially if you're not using the foundation models, you get cookie-cutter answers. So to a certain extent, ChatGPT is a reflection of you. I joke that it makes dumb people dumber and smart people smarter. So if you're very creative, you can make it sing, and what I mean by being creative is really becoming acquainted with what it gives you back, because this is called prompt engineering.

Speaker 3:

There's a whole field of prompt engineering where you become a large language model whisperer: you learn when it's going well, you quickly detect when it's not going well, and you adapt and you iterate. Importantly, you iterate. You don't do it in one shot. You create this graph of interactions and you go many, many steps, because if you just do it in one shot it'll give you a banal answer. But coming to your question, David, the real step forward is, rather than just thinking of it as a chatbot, you start to integrate it into your systems and you build software around it, and it's surprisingly reliable. So, rather than getting text back, you ask it to give you a JSON, which is, you know, a schematized software object, and you wire it into your existing software stack. So, rather than it just being a wall of text, it's now actual entities in your system. You can put a user interface against it and you can start to build on top of it. And you can build very far, very quickly.
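To make that concrete, here is a minimal sketch of the "ask for JSON, not prose" pattern using the OpenAI Python SDK. The schema, field names and prompt are illustrative assumptions, not details from the episode:

```python
# A minimal sketch, assuming the OpenAI Python SDK and an API key in the
# environment. The schema below is hypothetical, for illustration only.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys: title (string), topics (array of strings)."},
        {"role": "user",
         "content": "Summarize this podcast episode description: ..."},
    ],
)

# Instead of a wall of text, the reply parses into an entity your
# software stack can build on.
episode = json.loads(response.choices[0].message.content)
print(episode["title"], episode["topics"])
```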

Speaker 4:

And let's talk about what you built to help you edit your video interviews.

Speaker 3:

Yeah, so I've built a fairly sophisticated stack, so I can put in an MP3 recording of an interview and it will be transcribed. The transcription process is quite sophisticated. My previous startup was a transcription startup, so it does many layers of transcription and diarization and post hoc transcription refinement with language models and so on, and there's a research stage before that. So I'll use OpenAI Deep Research and I'll get a vocabulary, which helps the transcription, and I'll get lots of grounding information about all of the papers that the guests are talking about. I mean, this is another thing with language models: you have to ground them on useful information to mitigate hallucinations. So we have a big transcription, and then I have this multi-agent system that will go and read all of the research papers and it'll figure out what questions I asked and it'll create this entire map of the conversation, if you like. And then I have some other features that will rank the podcast, a bit like the Elo algorithm in chess, you know, where one player plays another and, if a player wins and it was a surprise, their Elo goes up. Well, I do that with fragments of the podcast. I have language models as judges and I rank all of the pairs of fragments based on how interesting and engaging they are, and I use that for clip selection.
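As a worked illustration of that ranking idea, here is a toy Elo update with a stubbed-out judge. In Tim's pipeline the judge would be a language model comparing two fragments; everything here is a sketch, not his actual code:

```python
# A toy Elo update over podcast fragments, assuming an LLM judge decides
# each pairwise comparison. All names and numbers are illustrative.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return both ratings after one comparison; surprise wins move ratings more."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

ratings = {"fragment_a": 1500.0, "fragment_b": 1500.0}

# Stand-in for the LLM judge: suppose it found fragment A more engaging.
ratings["fragment_a"], ratings["fragment_b"] = update_elo(
    ratings["fragment_a"], ratings["fragment_b"], a_won=True
)
print(ratings)  # {'fragment_a': 1516.0, 'fragment_b': 1484.0}
```

Ranking every pair this way yields a leaderboard of fragments, and the top-rated ones become the clip candidates.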

Speaker 3:

I can automatically generate timelines that go to my editors for creating clips for the shows. I can automatically generate a professional-looking PDF document with all of the show notes, and I can automatically export all of this to the video editors. The problem with video editors is that they don't understand the content, and this is very highly technical content. Before language models, it would have taken me probably two years to write this software, and I wrote it in about a month. Now, I don't want to be overly zealous about this. I mean, of course, there are problems with it. It sometimes hallucinates and it's problematic, and that's why it's very important to create software that has a human in the loop, so you can robustify and verify as you go.

Speaker 4:

And for the listeners: basically, we're talking about building a very useful tool for a particular use case that Machine Learning Street Talk has, and it took a month of programmer time to do it. And, as you said, it was very iterative. You think, well, I need to do this as the individual responsible for this area; maybe I can get the AI tool to do this part for me. And then you keep adding parts and fixing them as you go.

Speaker 3:

Exactly.

Speaker 2:

I'd also be interested in where you've saved time, as opposed to improved the quality of the output: where in the past you might have said, I would have accepted a limitation based on my resources, but now I can fix this; or, additionally, I've saved this many hours of my time.

Speaker 3:

Well, there's a couple of angles to this. I mean, you could argue in some ways it's worse. There's no substitute for me: I understand the entire life cycle of the podcast, and in the olden days I would go through and painstakingly edit everything. And putting in motivated visuals is very important in a podcast: you've understood the content, so you show a visual that actually makes sense with it. Increasingly, when you start to systematize the process, you're not paying attention to everything, because of course you've just scaled this thing up a hundred times over, so you don't have time to pay attention to everything. That creates a disconnection, and that's not entirely a good thing. But that's just the reality of building businesses. You have to scale them up, so that's the way it goes.

Speaker 4:

If I'm thinking as a manager, maybe I do have access to some technical resources. I guess I think, you know, we've got some good technical people on our IT team, or maybe even within my own department, but they're not really experts in using large language models. And I understand that, in fact, you can use large language models to help you use large language models. Maybe one of your use cases is using AI to help program AI. Perhaps you can talk about that.

Speaker 3:

Yeah, this goes to the flexibility of language models, so they can actually teach you how to use the language models and you can use the language models to reflexively improve the solution that you've created.

Speaker 3:

This thing that you're just pointing to is something I'm very excited about, because software at the moment is very linear you write it, you test it, you put it in production.

Speaker 3:

You probably have some business requirements first. And potentially the new generation of software, it's referred to as agents, although I think a lot of people, when they talk about agents, are not really talking about it with all of the nuance that the subject deserves.

Speaker 3:

But potentially the new generation of software is more like a living thing; it's a living ecosystem. You never put it into production, it's just alive and it's always there. It's much more biologically inspired, if that makes sense, and the agents are just autonomous units of computation that do a particular thing. And so, rather than thinking about the software stack as this big monolithic blob, it's now composed of this panoply of agents, and what that means is that you can have different versions of the agents in play at the same time, and the system can test the agents to see if they're performing correctly. The system can also do metaprogramming to fix bugs and to adapt to failures without explicit involvement from humans. So, essentially, an agential system is a system that has its own goals and has less human supervision in its operation.
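Tim doesn't spell out an implementation, but a very loose sketch of that "panoply of agents" idea might look like this: small units, each with its own job and a self-test, so the system can check versions and route around failures. Everything here is hypothetical:

```python
# A sketch of agents as autonomous units of computation, each carrying a
# self-test so the system can verify versions without human involvement.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    version: str
    run: Callable[[str], str]      # the unit of computation
    self_test: Callable[[], bool]  # is this version still performing correctly?

def healthy(agents: list[Agent]) -> list[Agent]:
    """Keep agents whose self-tests pass; a failure could trigger repair."""
    return [a for a in agents if a.self_test()]

# Different versions of the same agent can be in play at the same time.
summarizer_v1 = Agent(
    name="summarizer", version="1",
    run=lambda text: text[:80],
    self_test=lambda: summarizer_v1.run("hello") == "hello",
)
summarizer_v2 = Agent(
    name="summarizer", version="2",
    run=lambda text: " ".join(text.split()[:15]),
    self_test=lambda: summarizer_v2.run("a b c") == "a b c",
)

for agent in healthy([summarizer_v1, summarizer_v2]):
    print(agent.name, agent.version, agent.run("some transcript text ..."))
```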

Speaker 5:

Hi everybody. This is Bob Goodwin, president at Career Club. Imagine with me for a minute a workplace where leaders and employees are energized, engaged and operating at their very best. At Career Club, we work with both individuals and organizations to help combat stress and burnout that lead to attrition, disengagement and higher health care costs. We can help your organization and your workforce thrive, boosting both productivity and morale across the board. To learn more about how we might help you and your company, visit us at careerclub.

Speaker 4:

So if I'm a manager and I want to get started with my department, where would I look for the kind of technical resource to help me? How do I get started?

Speaker 3:

I think that the best thing is just to develop an obsession with language models, and it's easy to do, because it's very fun just playing around with them and just seeing what you can do just on your own.

Speaker 3:

So I use a program called Cursor, for example, and it's a fork of Visual Studio Code. You just install it on your system, and in Cursor you can just say: I want to build a Tetris game, or I want to build an HR database. And again, having a little bit of technical knowledge goes a long way. So if you understand how to frame the architecture and the technology stacks, you can say: I want it to be a React app, I want it to be an AngularJS app, I want it to be a Python app, I want it to have this architecture. So, you know, at the moment some technical knowledge is required. But the art of using language models is about the unknown unknowns: understanding when the language models don't understand something and, when you don't understand something, prompting the language models to tell you what you don't know, and following that thought train. Because it's always a thought train, it's never one shot; it's always taking a trajectory, taking a path through many steps, to actually get to a useful solution.
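As a purely illustrative example of the framing Tim describes, a first prompt in a tool like Cursor might look something like this; every detail here is hypothetical:

```
Build a small HR database app.
Stack: React front end, Python (FastAPI) back end, SQLite storage.
Features: add an employee, list employees, search by department.
After scaffolding it, tell me what I haven't thought about yet
(security, data privacy, deployment) so I know what to ask next.
```

The last line is the "unknown unknowns" move: asking the model to surface what you don't know, so the next step of the thought train is already queued up.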

Speaker 4:

Yeah, and just to play that back: the kind of support I need as a manager is some programming background if I'm going to build my own system, but that person's curiosity and interest in LLMs is going to be critical to their being a useful aid in building this internal software.

Speaker 3:

Absolutely. I mean, it really is a technology which is going to change a lot of things, and I fear that it might trigger a digital divide much like what we saw in the 1980s, where people who gained their skills in the 70s were basically removed from the workforce and became very disconnected from technology. And I feel that if folks don't really embrace this technology, they will find themselves on the wrong side of the next digital divide. The best way to mitigate that is just to play around with the technology, because it kind of teaches you as you go. And I see some folks are very skeptical, and they're right to be skeptical. The technology is very problematic for lots of different reasons. But there are some folks who just focus on the negatives and they say: don't use this, it's unreliable, it's never going to work. And unfortunately, that strategy is a failing strategy. It's becoming clearer by the day that this technology is the future, and I recommend folks get accustomed to it.

Speaker 2:

Thank you. I'd like to lean in on your comment that the technology will teach you if you lean in and begin by playing with it, which I think is apt: experiment, see how it works, see what it can do. What are your suggestions on how you don't just stop there? I'd welcome your perspective on how you go from being a casual user to being able to integrate more extensively, and how you get over that hurdle, or that fear potentially. Because I don't think you're saying you need to become a technical expert, but you do need to understand the foundations of the technology to leverage it effectively.

Speaker 3:

Yes, it's very interesting. I mean, that's part of the reason for just the disparity in opinion.

Speaker 3:

Very technical people in Silicon Valley, who have software engineering and computer science backgrounds, are hyping this technology up and making it do amazing things. And then there are a lot of other folks who use it and, to be honest, this is a problem with language models, it's just creating this slopification of the internet.

Speaker 3:

So you look on LinkedIn and everyone's generating their posts and they all look the same, and I can understand why people would have the perspective that this is just a bad technology that's creating slop everywhere. And part of the problem is the technology is so deceptive: it will just give you very confidently wrong answers, and if you ask the wrong questions, you get the wrong answers. That's why you always have to see this as a journey and an exchange. You have to recognize when it steps into the world of banality, and it happens very often. The thing is, I spot it instantly, and many other people in Silicon Valley probably spot it easily; because it only takes them about 0.9 seconds to spot it, they don't even cognitively register it. A lot of people get stuck on that and they don't get any further, which is why they have that perspective. So I think a big part of this is just curiosity, and just spending the time with the technology and understanding how to make it work well.

Speaker 2:

Thank you. On the flip side, I'd also welcome your perspective on where it makes sense to lean in and build your own, and where it makes sense to wait, because the systems improve every day. I was experimenting with Deep Research the other day and I was so impressed with how much further ahead it was than other models I had used previously. A platform we use for learning has now automated AI voiceovers, and they sound great, and that just showed up in my system. So I welcome your perspective on that, and also on the risk that we just sit back and wait for everything to be solved and easy to use, as opposed to leaning in more.

Speaker 3:

I definitely wouldn't recommend waiting. I think there was a landmark moment in June last year, and that's when Anthropic released their Claude 3.5 Sonnet model. That was the first model which was incredibly reliable. Because, you know, people said: oh, this technology, it doesn't work, it's not reliable, it hallucinates and so on. And Sonnet still has problems, it does hallucinate and so on. But you can now build software that uses tools, which means you can actually program it to integrate with your existing systems, and it actually works, and it doesn't hallucinate and it's very reliable. And when it's not reliable, you can fix it very easily.
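Here is a minimal sketch of what that tool use looks like through the Anthropic Python SDK; the weather tool is a stock illustrative example, not something from the episode:

```python
# A minimal tool-use sketch, assuming the Anthropic Python SDK and an API
# key in the environment. The tool definition is hypothetical.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    tools=[{
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "What's the weather in Toronto?"}],
)

# Rather than a wall of text, the model can return a structured tool call
# that your own code executes and feeds back in a follow-up message.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)  # e.g. get_weather {'city': 'Toronto'}
```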

Speaker 3:

All of these models have trade-offs. So O1 Pro, for example, is very, very clever. It's actually an order of magnitude smarter than any of the other models, but you have to wait 10 minutes to get an answer. And Google's Flash Thinking is incredibly fast, 200 tokens a second, but it's very unreliable with tools. And the o3-mini model is incredibly good at mathematics, but it's incredibly bad at following instructions and you can't really build it into applications yet. So in a year, or maybe two years, we'll have one model which does all of these things together. But I would definitely recommend playing around with all of these different models, even though they do different things well, just to give you a step up in the future.

Speaker 4:

Right now, all the big AI companies are talking about AI agents, and sometimes it seems like, oh, we're just going to get this AI agent and it's going to replace a person. Other times it just sounds like a feature added to the software. So where are we on that spectrum, from "the agent is an interesting feature that lets us schedule a task" to "no, I'm going to replace a whole human being with this new AI tool"?

Speaker 3:

So there are many ways of designing agent-based systems. Deep Research is a great example of a multi-agent system, but a true agential system is something that has its own goals and does what it wants to do. With Deep Research, for example, you tell it what to do; it breaks the research agenda down into different subtopics, then it parallelizes that across different agents, and they go and find all of the stuff and aggregate it together to give you the answer. Now, the true agential system is what I discussed earlier, where it's actually a living, breathing system that has its own goals and it can heal itself and repair itself, and it's divergent and it's unsteerable. We're nowhere near that. I mean, that's a long way away.
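That fan-out/fan-in shape is easy to see in miniature. In this sketch the planner, researcher and aggregator are plain stubs standing in for LLM and search calls; none of it is the actual Deep Research implementation:

```python
# A skeletal plan -> parallel research -> aggregate pipeline, with stubs
# standing in for the LLM planning, searching and synthesis steps.
from concurrent.futures import ThreadPoolExecutor

def plan_subtopics(question: str) -> list[str]:
    """Stand-in for the planning step that breaks the question down."""
    return [f"{question}: history", f"{question}: current state", f"{question}: open problems"]

def research(subtopic: str) -> str:
    """Stand-in for one agent going off to search and summarize."""
    return f"findings on {subtopic}"

def aggregate(findings: list[str]) -> str:
    """Stand-in for the final synthesis into one answer."""
    return "\n".join(findings)

question = "test-time inference"
with ThreadPoolExecutor() as pool:  # parallelize across agents
    findings = list(pool.map(research, plan_subtopics(question)))
print(aggregate(findings))
```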

Speaker 2:

What are your thoughts on where we will be with this technology, say, in two or three years, Tim?

Speaker 3:

I think we're going to see improvements in robustness and autonomy. Over the last three years, I mean, I was a huge skeptic of large language models. I've had Gary Marcus on the show many times; he's the most famous skeptic. And I must admit that every year, all of the things that I thought the technology could never do, it now does, whether it's creativity or reasoning. We're now starting to see improvements in autonomy. There are still loads and loads of problems with it.

Speaker 3:

So I expect to see just an improvement in capabilities, and one of the main things that's driving that is just the amount of computation that we have at our disposal.

Speaker 3:

So you need very, very powerful GPUs and data centers to run this technology, and right now we have a centralized model, which means OpenAI and Anthropic, et cetera.

Speaker 3:

They do a huge pre-training run and they build this massive monolithic model, and then they just copy it around onto all of these servers everywhere and people use it. What we're going to see is more of a distributed version of the AI, and part of this is called test-time inference, which means either they, or you on your machine, use the foundation model but also do some extra computation when you do the prompt, and that dramatically improves the answer. And we're going to see these kinds of actively updating systems where you're essentially generating data: when your model is doing reasoning in respect of your prompt and creating additional data, that will be fed back into the base model, and the whole system will develop a kind of living property, perhaps, that it doesn't have now, and the system as a whole will become far more adaptable. I think that's what we're going to see over the next couple of years.
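One simple form of that extra computation at prompt time is best-of-n sampling: draw several candidate answers and keep the best one under some scoring function. The sampler and scorer below are stand-ins for an LLM call and a judge or verifier model; the whole thing is a toy illustration:

```python
# A toy best-of-n loop: more samples means more test-time compute,
# which usually means a better final answer. Stubs replace real models.
import random

def sample_answer(prompt: str) -> str:
    """Stand-in for one stochastic LLM completion."""
    return f"candidate {random.randint(0, 9)} for: {prompt}"

def score(answer: str) -> float:
    """Stand-in for a verifier or judge model scoring a candidate."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [sample_answer(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("Summarize the episode in one line."))
```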

Speaker 2:

Do you have thoughts on how that will impact the workforce and leaders' roles? I think about the skills we endeavor to teach leaders: the importance of supporting their team's development, and how that relates to this technology; also the importance of delegation, and how leaders will need skills both for delegating to their teams effectively and upskilling them, and for delegating to technology effectively; process-mapping skills that may not have been a core requirement previously.

Speaker 3:

That's the million dollar question. Nobody really knows. So, as exciting as the technology is, the only thing I know for sure is that it's going to initially help founders like me build software very quickly and do things very quickly. Unfortunately, in the real world, in large corporations, you know how it is. We have to deliberately remove agency from processes so one person can't launch a nuclear bomb. We deliberately have five people approve it and we have the codes over here. And it's the same thing in software.

Speaker 3:

We don't just let people put software into production. We have release gating, we have ethics boards and we have advisory boards, and maybe some of this stuff is just bureaucracy and shouldn't be there. But most of the time, when you have these processes in corporations, they've been designed for good reason, and as smart and as good as the AI technology is, these things will form a bottleneck. So the million dollar question is: what impact, if any, will this have in the corporate world in the next five years? I honestly don't know. I think that's what everyone's thinking about.

Speaker 4:

So if you're an HR manager... I mean, you've worked in big corporations, so you know what the human resources function is all about. Any advice for an HR leader today?

Speaker 3:

Well, I mean, we spoke before we hit the record button about some of the problems with automated decisions and bias in hiring and stuff like that.

Speaker 3:

There was a news piece in the UK today about retail brands: they do personality tests before they hire people, and it excludes neurodiverse folks with autism. And, just as I've done, the temptation is to build software to systematize your entire process, to remove that human, subjective intuition, and the risk is that you create a monolithic, biased system that sometimes makes the wrong decisions. And don't get me wrong, sometimes we need bias in hiring; maybe there's a good reason for it. But I think what we should use this technology for is to maintain a degree of diversity and subjectivity in our corporate life, and the best way to do that is to get folks using AI themselves to empower what they do, to maintain that diversity. Because if we build centralized automated decision systems everywhere, then everything just gets a little bit boring and quite cookie-cutter. So, yeah, I would be thinking a lot about the risks of using this technology en masse.

Speaker 2:

Thank you. And it's interesting, because it's very different from how we've typically thought about technology, as very centralized and enterprise-level; here we're thinking about a much more enabling technology, one that allows employees across the organization to leverage it in small ways.

Speaker 3:

Yeah, well, I think, when it comes to building effective organizations, this is the perennial problem of whether you should empower people and give them more agency, or take it away. And, of course, the corporation itself has an agenda, right? It's trying to make money and it's got a strategy, and you want people to stay in their swim lanes to a certain extent. But by the same token, you also want to maintain a degree of innovation and you want folks trying new and interesting things. And coming up with that perfect topology to manage an organization is very mysterious.

Speaker 2:

I come back to just how goal-oriented we are within organizations: we're always looking to identify the ROI before we've begun the journey, which can really limit experimentation and the kinds of enablement we're trying to encourage.

Speaker 3:

That's exactly correct. I did a great interview with Kenneth Stanley; one of my favorite books ever is his book called Why Greatness Cannot Be Planned. I definitely recommend reading that book. And it's exactly that, you know: if you apply for a research grant, you have to say, well, what's the objective? Children, when they go out and play, don't have any objectives, and serendipity plays an outsized role in our lives. But we live in a kind of virtual reality where we like to think that everything we do is in service of an objective. And from a computer science point of view, that's the most naive thing possible to do in search, because your objective actually biases you away from discovering all the interesting stepping stones. But by the same token, you can argue it both ways: you're an organization and you need people rowing in the same direction. So the perennial problem is how do you balance that exploration and exploitation.

Speaker 2:

Thank you. That's certainly something organizations need to wrestle with when they think about enabling and upskilling their teams to leverage this technology effectively.

Speaker 3:

Absolutely.

Speaker 2:

Thank you. I'm really grateful for the conversation. You've given me some inspiration as well, and some new things to search today, and I'll keep playing with Deep Research, my new best friend this week. It's been really, really interesting to connect. Thank you.

Speaker 3:

Thank you very much, and I love Deep Research. I've been using it a lot. I'm so impressed with it. It's amazing.

Speaker 4:

Thank you very much, Tim. If people want to follow your work or get in touch, how should they do that?

Speaker 3:

Well, they can subscribe to Machine Learning Street Talk. We're on YouTube, and they can join our Discord server; we have lots of great chats on there. Thank you, it's been my pleasure. Thanks for inviting me on.

Speaker 2:

Thank you.

Speaker 1:

Thanks for listening to the HR Chat Show. If you enjoyed this episode, why not subscribe and listen to some of the hundreds of episodes published by HR Gazette? And remember, for what's new in the world of work, subscribe to the show, follow us on social media and visit HRGazette.com.
