
Navigating AI Adoption for Businesses

E31 | With Heather Harris
Updated Feb 6, 2024

AI For All | E31 | February 1, 2024
In this episode of the AI For All Podcast, Heather Harris, Principal Data Scientist at Herkimer Consulting, joins Ryan Chacon and Neil Sahota to discuss how businesses can better navigate AI adoption. Harris emphasizes the importance of starting with a clear understanding of the organization's mission and desired outcomes, and advocates adopting AI in a support role rather than as a central focus. Challenges discussed include data integrity, distinguishing genuine AI solutions from fakes, and obtaining the right data. Harris also foresees AI integration becoming seamless and almost covert over the next five to ten years, and she urges businesses to be proactive rather than reactive to the rapidly changing pace of AI technology.
About Heather Harris
Heather Harris has a PhD in statistics related to cognitive processes, and she has worked as a consulting data scientist and statistician for 6 years. Her approach to consulting is to guide clients through setting up complex processes and AI solutions while keeping data governance, data integrity, and diversity/data representativeness front-of-mind.
As a consultant, she’s led a multitude of diverse projects including predicting audiences and micro-target marketing for the first movie with a theatrical release following the pandemic, predicting and segmenting viewers for several television shows and networks, developing scoring algorithms for EdTech companies, and working with clients from several substantive areas to architect data ecosystems for AI solutions and advanced analytics.
Interested in connecting with Heather? Reach out on LinkedIn!
About Herkimer Consulting
Herkimer Consulting is a small AI and data science consulting firm focused on holistic data strategy, AI/technology adoption, complex data integration, and advanced predictive analytics.

Transcript:
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon. With me is my co-host Neil Sahota, AI Advisor to the UN and one of the founders of AI for Good. Neil, how's it going?
- [Neil] Yeah, I'm doing all right. Just trying to stay dry because we have a massive thunderstorm going on over here.
- [Ryan] So today's episode, we have a lot of really interesting topics to talk about. We're going to talk about data integrity issues with AI. We're going to talk about how people can better navigate solution adoption and how to differentiate between what's real AI and what's not. We'll talk about expectations for the year and lots of other exciting topics. And to do that, we have Heather Harris, the Principal Data Scientist at Herkimer Consulting, a small AI and data science consulting firm. Heather, welcome to the podcast.
- [Heather] Thank you so much. Happy to be here.
- [Ryan] Yeah. It's great to have you. Let's kick this off by having you do a quick introduction. Just tell us a little bit more about your background experience, what you do in the consulting space as it relates to AI and all that kind of good stuff.
- [Heather] So I'm a psychometrician by trade, which is basically stats related to mental models in the testing world. A couple of years into my career, I decided to pivot into data science and ended up working for a movie studio predicting audiences for the first film released since the pandemic had happened. That was an interesting task because there was no precedent for it or a way to really predict it. And since then I've grown my toolkit to be a tech and stats slash data science generalist, and I realized that there's a need for more technical folks to take a Sherpa approach of helping clients holistically through solving some of these tech needs, which now obviously includes AI and being able to do AI adoption in a good way.
- [Ryan] Yeah, that's great. I think it's very much needed. There's a lot of conversations we've had, even outside of this setting, where people are just trying to understand not just what's happening in the space, but also how it's applicable to their business: how do they adopt, how do they set themselves up for success, what's real, what's not. A lot of interesting things, and I know we're gonna talk about a bunch of those topics today. So if we start off with the current state of the AI industry right now, one of the things that comes up often is training AI. There potentially are data integrity issues, or how do you maintain data integrity? What makes data more valuable? All those kinds of things. How do you approach that? How do you answer that kind of typical question when somebody asks about data integrity issues in the training of AI?
- [Heather] That's a great question. So there's a big gap right now between your typical organization, especially in the nonprofit and association space, and the layers of complexity that build up to using and training AI models. There tends to be a lot of missing data, things that have grown organically over time, even in structured data sets. So before someone can actually use their data, there are a few layers of cleanup needed, and that includes basic concepts like missing data and how you handle it. We don't have those layers built in in a way where folks can easily use tools to fix and supplement their data, so they can get to the level of being able to train and house an AI model that works better for them. So the typical things that have always mattered, first for stats, then for data science, and now for AI adoption, still really matter. The issue is getting good resources and tools into the hands of users so they can start bridging that gap and adopting AI a bit more readily.
- [Neil] A lot of people have experienced the lack of data robustness, especially around cleaning and structure. But Heather, you referred to the missing data aspect; I think that's a challenge for some people. They don't have all the information and data they need, or they don't have enough of it to use AI. What's the solution there? Is it synthetic data?
- [Heather] So synthetic data, you've talked about synthetic data in the past. Synthetic data, meaning the ability to supplement your data by taking a best guess at what it would be if you had it, or imputing values. That's a great approach if you can do it in a way where you're basing it off of all the possible information you have. There are even fairly advanced shops out there that, I will say surprisingly, are still using methods like mean imputation. Mean imputation is where you give everybody the average value: if someone is missing a value, you just put in the average. If you're missing, let's say, income, because those who are making the most money don't want to report their income, then putting in the average isn't just going to be a bad guess, it's going to bias everything you train your models on to be wrong and inaccurate. Learning how to best approach some of those challenges gives you a solid base, and if data is missing demographically, for specific populations of individuals, it has bigger implications for diversity and inclusion.
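To make that bias concrete, here is a minimal sketch using simulated numbers (the income figures and the 20% non-response rate are hypothetical, chosen only to illustrate the point Harris makes about high earners not reporting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated incomes in $1000s (log-normal, roughly income-shaped).
incomes = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)

# Non-random missingness: the top 20% of earners decline to report.
cutoff = np.quantile(incomes, 0.8)
observed = incomes.copy()
observed[incomes > cutoff] = np.nan

# Mean imputation: fill every gap with the average of what WAS reported.
fill = np.nanmean(observed)
imputed = np.where(np.isnan(observed), fill, observed)

# Because the fill value comes only from lower earners, the imputed data
# systematically understates income -- and any model trained on it
# inherits that bias.
print(f"true mean:    {incomes.mean():.1f}")
print(f"imputed mean: {imputed.mean():.1f}")
```

The imputed mean lands well below the true mean, which is exactly the kind of systematic distortion that propagates into anything trained downstream.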
- [Neil] It's a huge challenge. With synthetic data, or some people like to call it fake data, how do we actually, how do we actually create good synthetic data then? How do we make these educated guesses?
- [Heather] That's a great question. So you can leverage AI approaches, and I'm going to say machine learning here, more in the sense of simple random forest type models, to impute the likely values, and then use all those values to train your model. It might seem a little bit like you're leveraging it twice, but when we've done studies to see whether the math checks out and it actually works better, it does work better if you use that approach.
- [Neil] So it's almost like a Monte Carlo simulation of data.
- [Heather] Yes, an informed Monte Carlo simulation of data, I would say. There are better approaches for how you can do it and check that the accuracy is there. So one of the things in my wheelhouse now, the realm I'm working in, is helping organizations build tools and approaches so they can work through some of these things programmatically and end up in a better place.
The biggest lift, from my experience so far: nonprofits and associations are the spaces that would see the biggest efficiency impact from being able to use some of these AI tools, but they've got the biggest gulf to cross to get there.
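The random-forest-based imputation Harris describes could be sketched with scikit-learn's IterativeImputer, which lets a model predict each missing value from the other columns. This is an illustrative toy example on simulated data, not her actual pipeline; the feature relationships and missingness rate are invented for the demo:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy data: two correlated features, with ~20% of the second missing.
n = 500
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(scale=0.3, size=n)
X_true = np.column_stack([x1, x2])
X = X_true.copy()
mask = rng.random(n) < 0.2
X[mask, 1] = np.nan

# A random forest predicts each missing value from the observed columns,
# instead of filling in one global average.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    random_state=0,
)
X_filled = imputer.fit_transform(X)

# Compare against plain mean imputation on the same gaps.
rf_err = np.abs(X_filled[mask, 1] - X_true[mask, 1]).mean()
mean_err = np.abs(np.nanmean(X[:, 1]) - X_true[mask, 1]).mean()
print(f"random-forest imputation error: {rf_err:.2f}")
print(f"mean imputation error:          {mean_err:.2f}")
```

Because the second feature is predictable from the first, the model-based fill tracks the true values far more closely than a global mean, which is the "informed Monte Carlo" idea from the exchange above.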
- [Ryan] How is the evolution of quality assurance standards in AI data coming along? Obviously, depending on what the data is being used for and how important the use case is, the higher the quality assurance usually needs to be, like in the legal space, education, all these kinds of things. How are you seeing those best practices unfold, so that people can ensure that how they're handling and manipulating the data is as accurate and as close to best practice as possible when it comes to quality assurance?
- [Heather] I would say so far I'm seeing what I like to call a greedy approach, which is basically: just use the data you have in whatever shape, because we want to use new tools. And the thing I'd like to encourage is a pause to say, let's actually evaluate what we have and where the gaps are, so we can take a better approach and have a North Star of accuracy. What is the outcome we're hoping to shoot for? How does our data serve that or not serve that? And what do we have to fix, whether that's collecting more data, doing imputations, things like that, to actually get an organization to a place where it's AI ready?
- [Neil] Is it possible, because I'm starting to hear this question now from businesses, that we could actually collect too much data, could actually have too much data available?
- [Heather] The data that's important should be prioritized. When it comes to something like health data, or a continuous stream of data, you could capture a lot of erroneous data that you don't actually need for training a model. Being able to really focus in on what's important, and a little bit of soul searching on the front end to know your business and what you're actually attempting to do, I think that helps. And that's why I'm actually happy for this podcast and others that dig in from that conceptual standpoint, let's talk about the concepts more than the nuts and bolts of AI. I'm really grateful, because I think it helps put the emphasis in the right place. You could end up collecting a lot of erroneous data, building out a whole infrastructure to support it, and then at the end of the day not be able to train a model, or still not know what you're doing.
- [Neil] I won't name names, but there are lots of companies out there collecting an absurd amount of data, and I think 90 percent of it, they don't know what to do with or whether it's even usable. And I think that's the key thing you're trying to share, Heather: what's the usable data and, more importantly, what's the right data? So how do organizations figure that out? How do they know what's actually the right data?
- [Heather] I think AI strategy is really important. And I think starting with the business in mind, not looking to adopt AI for the sake of adopting AI and then trying to retrofit it to the business, it's really important to have a known use case. There are so many new powerful tools coming out that we can leverage, but businesses really need to go through the identity crisis: these new tools are coming out, do we implement them just because they're sexy right now? Go back to the North Star. How do we actually know what we're doing? And then how do we use this in the best way? It could be that the best tool for the use case is the simplest one, just stats that were created in the 1960s or 1970s. And if that's the case, that's awesome, and that's what should be done. It'll be a lot less expensive than trying to train an AI model. But I think going through that thought process is really important.
- [Neil] I think that's actually really important that just because we have a bright shiny tool like AI, doesn't mean that's actually the best solution. It's like healthcare, right? You don't need the latest, greatest medical device if there is a treatment that's tried and true for the last 80 years, right?
- [Ryan] Let me ask you something I was alluding to when we first kicked off the conversation, which is about navigating the adoption of solutions. AI has obviously grown in popularity over the last 12 to 18 months, all the way from the consumer level to businesses now trying to understand how to bring AI into the business. What does it even mean to bring AI into the business? Which solutions actually are AI, and which are not? Which ones are just the marketing team throwing the word AI into their solution and promotional efforts to make it seem like an AI solution? How do you advise companies to make better decisions when navigating the solutions on the market they're considering bringing into their business? Is this a company that is really providing an AI solution, or is this someone being very creative with marketing? I've seen this in all the industries, even outside of the enterprise world, even in the consumer space: companies saying it's an AI tool, an AI solution, or this product was made with AI, but most of it is to get headlines when in reality it's not really AI. So how do you think about that? I think it's an important topic to talk about.
- [Heather] I think that's a great question, and I think it comes down to AI literacy from leadership in organizations and being discerning about what's under the hood, so to speak, when it comes to what you're buying.
There's a difference when it comes to staff tools or efficiency tools. A lot of the rubber hits the road there for me: what's the privacy story, how are they going to use your data, are you going to give the company or the tool access to some repository of code or documents that you're not meaning to share?
So that's more of a use case based look at privacy and how data is being used, things like that. When it comes to building something, building a tool, let's say a nonprofit is interested in a tool to help intakes go more quickly and streamline the review process. If that is the use case, then you don't want a company to come in and sell you: oh, I can add this classification tool to your intake process, and it's AI, and it's only going to cost you 600,000 dollars. If at the end of the day it's some really basic stats or algorithm that doesn't do any more than what a data scientist could throw up in a day, then they're selling you snake oil, and you're being taken in. So when it comes to bigger scale adoption that's not the efficiency staff tool side of things, having someone with a level of expertise come in and look under the hood is really important: someone who knows the difference between the time and resources it takes for basic stats or algorithmic approaches, machine learning, and large language model tuning and training paradigms, where the work takes months and maybe does warrant a bigger budget. Having someone who can help navigate that is really important.
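One practical way to act on this advice: before committing a six-figure budget to a vendor's "AI" classification tool, have someone fit the day-one baseline Harris alludes to and see how it performs on your own data. A hedged sketch on made-up intake records (all fields and the "manual review" label here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical intake records: four numeric fields and a binary
# "route to manual review" label driven mostly by two of them.
n = 2000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Plain logistic regression: decades-old statistics, minutes to fit.
baseline = LogisticRegression().fit(X_tr, y_tr)
print(f"baseline holdout accuracy: {baseline.score(X_te, y_te):.2f}")
```

If an expensive vendor tool can't clearly beat a baseline like this on held-out data, that's a strong signal the "AI" label is doing more work in the sales pitch than in the product.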
- [Neil] A lot of small, medium businesses struggle on this front. They either don't know who the right people are to contact for this help, or they worry about the cost. So, is this an effective approach for them, those SMBs?
- [Heather] I may regret saying this later, but that's why I'm actually hoping more consultants start working in the space I'm working in, because there is a need for it. Taking that Sherpa approach of guiding folks through some of these decisions, there's a need for it, because we're running through this marathon of AI adoption quicker than we're ready for on the AI literacy side of things.
And I am very cautious of the snake oil salesmen, the people who understand that if you put AI on a tool you've been selling for 15 years, suddenly it's shiny again. We need to be cautious of that.
- [Ryan] No, yeah, I was going to say the same thing. I agree. That's what we're seeing: they're just repackaging it with a couple of letters at the top to say, oh, this is actually an AI solution. I do think your point about needing more people in this space, in that quote unquote Sherpa role, is right. The more people you can have helping navigate these companies toward AI adoption success, the better it's going to be, not only for the companies selling these solutions, but for the industry as a whole, because you'll start to see more success stories.
You'll hopefully start to weed out the companies that are more snake oil salesmen. Once that happens, I think the market will start to correct itself: those that are really AI will rise to the top, and the others will fall off and fail. That hopefully removes the concern a company might have about getting in business with the wrong vendor, where you then have to tell them, oh, sorry, you made a mistake, and you've been spending all this money on something that's really not what it seems to be. How can companies assess what they really need before they go into that process, so they can better align themselves with a solution and the company providing it? I feel like that's a necessary first step before they even start looking for solutions: understanding what is our actual internal need? What does that assessment look like? What are the risks, what are the benefits? That might help them not only navigate but see success when it comes to adoption, so that they're picking the right organizations to work with.
- [Heather] The soul searching I recommend up front is: consider who your audience is, who your members, users, consumers are for your product, and be tech agnostic in your evaluation of what it is you do for them, because how you serve them might change, and probably will change drastically, in the next five to ten years. And what their engagement with your product looks like might also completely change in the next five to ten years.
But if you're in the business of, let's say from a nonprofit or ed tech standpoint, improving literacy in the US, then that's the endgame, and that's the thing you want to build around and attempt to future proof if you can. How AI supplements that then gets built in to support that endgame, rather than figuring out a tool you can just pop in, because the whole space is going to be changing in the next few years anyway.
- [Neil] I think we're not saying anything different from a business perspective, right? Focus on the problem. What's the opportunity, the outcomes? Understand your customer and their needs. But why do so many organizations put the cart before the horse? They're so worried about, I don't know the capabilities of AI, I don't know how to use it. Why are they so focused there first, rather than saying, okay, this is ideally what I'd like to do, now what tools are out there to make that happen?
- [Heather] I think there's anxiety around it. And if I'm being a little vulnerable, I even have some anxiety: am I an imposter getting into this space? Because there's so much to learn and so much to stay on top of. There's such an anxiety of, we need to adopt this quickly or we're going to be left behind, or maybe our competitors will adopt it quicker than us and then we'll be left behind. So there's so much anxiety around just getting something in place, rather than that stitch in time saves nine of doing the front end work to make sure everything's in alignment. Because with the changing paradigm over the next five to ten years, the tech is going to change. Everybody, all of us, are going to have to roll with the punches as it continuously changes, quickly, and in most cases a lot quicker than we're going to be able to stay on top of it, track changes, and even anticipate changes.
- [Neil] I a hundred percent agree, and it sounds like it's more of a people problem, or maybe a mindset problem, because we're living in an age of hyper change. We're probably going to experience 100 years' worth of change in the next 10 years, and the pace is so quick that people don't know how to react to that. That's the problem: we're so used to reacting. Something happens, we react, we figure it out. And we're in an age now where you have to be very proactive. How do people make this shift, and how do they leverage the talent under them to make that shift?
- [Heather] I'm hopeful that the fact that we're on a bit of an AI runaway train right now is going to give us some good perspective, in the sense that we really have to prioritize, figure out what it is we're meaning to do, and be more intentional, rather than chasing the cutting edge just to make sure we have as much of a competitive lean as possible.
And if we start running our own races, so to speak, as businesses, organizations, nonprofits, I think that's going to help with having a more central core identity and mission. And I'm hopeful that AI, and how it builds around that, is going to be purely supportive. It will have to be adopted as tools, because even the metrics for how well you're doing versus other companies will change with AI. They'll become more granular. It's going to be a more interesting space, being able to compare different businesses to one another. Even the consumer feedback loops: instead of someone giving five star ratings, you'll have a plethora of data, including use cases where people's opinions get weighted differently based on how much they actually use products and engage with businesses. It's all going to be a shifting landscape.
- [Ryan] We're all learning as we go to some degree too, right? But a lot of these business owners and companies, we learned this in the IoT space, as interested as they are in the solutions, they don't care as much about the technology at times as they do about the results and what's actually happening.
So they need someone to help take their mind off of those pieces of the process. That's where I think finding the right organizations and the right individuals to work with, to help through the adoption process, is critical for success, especially early on in the industry.
I know AI has been around forever, but thinking about how it's becoming more mainstream for companies to adopt, and helping navigate who's really selling AI and who's not, that's where I think there's going to be a lot of need early on until we start to see those successes actually happen.
- [Neil] I think one of the key things here is we always hear data, data, data, and I love that there's talk about having the right data, the value add data. Once we know the outcomes we want to get using AI capabilities, what are the other big challenges businesses are struggling with when they enter this whole space around AI, or even just data analytics?
- [Heather] I think it's understanding what's possible, a fidelity of implementation kind of thing. Understanding what's possible given where your organization is at, who you have working in house or who you can get as consultants, and the data and use cases, being able to make a pragmatic decision about all of that is really difficult, even for folks who know what they're talking about.
So I think the holistic view of where organizations are at, that's going to be a challenge. And again, that's why I'm hoping a lot more folks come into this space to help facilitate some of that.
- [Neil] Is it a challenge because businesses aren't good at self reflection, or do they not even know where the mirror is?
- [Heather] I think everybody's got some blinders when it comes to self reflection. And the ability to self reflect on the technical aspects of the business, while keeping technical staff, is hard: it's a hard job to keep them on staff, pay them, keep them interested. So that confluence is either an opportunity, or it could be really detrimental if a business is in a spot where none of those are working in its favor.
- [Ryan] So as we get towards the end here, let me ask one final line of questions and summarize some of the thoughts we've had. If I'm listening to this and I'm a company looking to adopt AI, what advice do you have for that company looking to bring an AI solution into their business? What's the main piece of advice on how to approach that process, what they should be thinking about, how they should prepare themselves? What would you say to a company out there that wants to get started on their AI journey?
- [Heather] First and foremost, I would say from a leadership and strategy perspective, do some soul searching. Really hone in on what your mission is and how it remains a continuous mission through the next five to ten years, as the tech world shifts around all of us.
And once you can lock that in a little bit, then take stock and be self reflective about your capacity from a staffing, resource, and technical expertise standpoint, and from a data integrity and availability standpoint. You don't necessarily need expertise in house. You don't necessarily need the highest fidelity data if there are other data sets out there that can train for basically what you need it to do; you can borrow data or supplement in some ways. But being able to holistically take stock of those pieces and how they fit into an overall mission lets you plan for flexibility going forward in how you adopt AI, so that it supplements and supports your business rather than requiring a complete retooling of the business. You want AI to be supportive, not the central tenet of your business. And I think if an organization can go through and organize its thoughts around that, that's a good starting point.
- [Ryan] Where do you see things going as we get further into 2024? 2023 was a big year for AI, with ChatGPT and everything; it's what brought us to this conversation. So where do you see things going in 2024? Not just what are you most excited about, but what are your predictions for how this evolves in the business and enterprise adoption space?
- [Heather] I think for a lot of folks in the US and around the world, the development of AI is going to seem almost covert, where we don't see a lot of the big milestones. We just see it get integrated passively in the background of the apps and tools we're using. From a business and strategy standpoint, I think we'll see smarter, more informed decision making, where, say, a project management tool intuitively gives you information about the flow of projects and resources.
But we also need to promote AI literacy at the same time, so we understand that not all the information we're being fed by AI is necessarily accurate. We need to make sure we kick the tires, and we need to know ourselves well enough that the AI doesn't know us better, so we don't start trusting it more than we trust ourselves.
- [Ryan] We talked about a lot of really good topics today. If a company is still unsure about the idea of bringing AI into their business, what would you tell them about the value and the benefits of really exploring this space and utilizing AI to grow their business?
- [Heather] Learn about the different use cases out there and how it can accelerate some of the processes your business is already doing, especially when it comes to staff tools and the efficiency tools that are out there.
When it comes to changing business decisions or building it into bigger picture strategy, doing some brainstorming around that is excellent, because if you can think of a use case and AI can't do it right now, chances are in five to ten years it can. So having some ideas about how that might be integrated is great, because we're going to see such flexibility in generative AI going forward that it's going to be able to do so much more than the use cases we can even think of right now.
- [Ryan] I think we covered all the main topics I wanted to cover today. I really appreciate you taking the time. For those out there who maybe want to learn more about what you're doing, reach out, follow up with questions, thoughts, even explore the opportunity to work together, what's the best way they can do that?
- [Heather] Reaching out on LinkedIn would be the best way.
- [Ryan] Heather, thank you so much. I really appreciate you taking the time to do this. I'm glad we finally got this on the calendar. The journey you're on and the role you're playing in the industry is something people really need to understand and not overlook, especially in the early stages of an AI adoption journey and of AI becoming a more widely adopted industry. Too many companies risk coming across solutions sold by companies that really are not doing AI. And if they don't know how to navigate those waters, how to prepare themselves for success in bringing an AI solution in, or what it is they even really need to be adopting, it's only going to lead to failure and disappointment with AI. We've talked in the past about the IoT space, and we see that a lot: the percentage of pilots that make it successfully through the pilot stage is pretty low for a lot of those reasons, like choosing the wrong company to work with and not understanding how to navigate bringing a whole IoT solution into your business. It's the same with AI. The more advice and support you can have through that, the better you're going to be able to reap the benefits of what AI has to offer, and that's something I think people really need to focus on.
- [Heather] Absolutely. I agree with you.
- [Ryan] Well thank you so much for taking the time. Really appreciate it, and we'd love to have you back sometime in the future.
- [Heather] Thank you so much. I really appreciate it.
Special Guest
Heather Harris
- Principal Data Scientist, Herkimer Consulting
Hosted By
AI For All