Building an AI Business

AI For All | E27 | With Riley's Andrew Reiner
December 21, 2023 (updated January 29, 2024)
In this episode of the AI For All Podcast, Andrew Reiner, co-founder and CEO of Riley, joins Ryan Chacon and Neil Sahota to discuss the landscape, challenges, and strategies for AI startups and businesses. They dive into the transition from predictive modeling to impactful AI solutions in startups, the significance of focusing on major operational problems to build successful AI products, and the importance of domain experts in the development process. Additionally, they delve into the ethical dimensions of AI, data security, and how to handle misinformation in the AI landscape. Lastly, they advise AI startups to start small by focusing on solving real problems using existing AI tools before developing their own solutions for scalability and independence.
About Andrew Reiner
Andrew Reiner is the co-founder and CEO of Riley. Previously, Andrew was the COO at SeatServe. Andrew is an active mentor and advisor for a handful of startups, and he also runs a 750,000+ person community revolving around the startup ecosystem. Earlier in his career, Andrew was a Quantitative Researcher at Zweig-DiMenna, a $4.5+ billion hedge fund in New York City; before that, he was a Desk-Based Analyst at Lehman Brothers, where his team ran a $600 million proprietary trading book, and an Algorithmic Electronic Trader at Barclays Capital.
Interested in connecting with Andrew? Reach out on LinkedIn!
About Riley
Riley is a voice-activated relationship insights platform designed to help you build authentic professional relationships.
Transcript:
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon with me is my co-host, Neil Sahota, AI Advisor to the UN and one of the founders of AI for Good. Neil, how's it going?
- [Neil] Doing all right. It's exciting time in AI land with a lot of focus on solutions and regulations, Ryan. How are you doing?
- [Ryan] It is very exciting times. I think today's conversation is going to align very well with stuff that's going on in the market, which I'm very excited about. We also have with us Nikolai, our producer.
- [Nikolai] Hello everyone. You can't tell, but I am actually in the studio this time.
- [Ryan] All right, today's episode, very good conversation focused on AI startups, building AI businesses, ethical AI, lots of things related to those topics. And we have Andrew Reiner, the co-founder and CEO of Riley. Riley is a voice activated relationship insights platform designed to help you build authentic professional relationships. Andrew, thanks for being on the podcast.
- [Andrew] Thanks for having me, everyone.
- [Ryan] Let's start this off around kind of the AI startup landscape. I think a lot of us out in the world are paying attention to a lot of things happening in the AI space, but not really sure kind of the full scope of the landscape. And if you were to talk to somebody about just what's going on in startup world for AI companies, how would you frame it? How would you scope it for somebody to really understand and grasp what is going on with early stage companies?
- [Andrew] Sure, so I would start by saying that about six months ago, we made a pretty big leap from everyone calling predictive modeling AI to actual AI being impactful and useful in startups, right? The terms predictive modeling and AI were pretty conflated back then, but now it actually seems like a lot of people are using real tools, including open source ones, with ChatGPT obviously being the biggest one right now.
I would say that everybody is using AI and throwing things at the wall to see what sticks. You're seeing some really cool and impactful tools from, obviously, the big players trying to tackle things like medical diagnosis. Watson, which Neil can talk about, has been trying to do this for a very long time. There's a lot of really exciting technology coming out, and there's also a lot of really useless technology coming out, with people trying to find applications for it. And I think that's normal when brand new technology starts, right? I think we're really just at the very beginning of how AI will be impactful for the next 1, 2, 5, 20, 30 years.
- [Ryan] I believe it was Andreessen Horowitz that put out a report recently talking about how there's a lot of cool tech out there in AI, but until companies are actually providing solutions, the value is gonna be hard to realize for a lot of organizations. So what do you think companies, especially startup companies in the AI space, need to do in order to survive? Who is left over once consolidation happens, once companies fail? What's gonna make for a successful path to survival and growth for a lot of these companies?
- [Andrew] A couple months ago, I probably would've said the startups that own the patent are the ones that are gonna survive or at least have some sort of defensibility. But right now, it seems like the companies answering the big problems that enterprises are really looking to solve are the ones that are going to survive. I think we're at the next level of job replacement, which always breeds new creativity and new industry, but the companies that I know are utilizing AI really well seem to be simplifying and reducing the need for large amounts of staff to streamline a process in a more efficient way. So I'll give you an example. I know a startup that's in stealth right now that's helping hospitals reduce conversation friction between patients and doctors, work currently done by full time staff, right? There's a lot of training that goes into teaching doctors how to have good bedside manner and teaching administrators the same, and hospital systems like Mount Sinai here in the city are spending a lot of money on a tremendous amount of staff and not having massive success. There are a handful of startups trying to identify this very specific need. These are the companies, to me, that will thrive, because they're early adopters, they're really fast, really nimble, and there's a ton of data they can use to train their models appropriately and actually add value relatively quickly.
- [Ryan] Neil, what do you think from your perspective? What have you seen in the startup landscape as far as companies that are positioning themselves in a spot where you feel like they're going to have the best chance of success, versus the ones that maybe are getting a lot of attention and hype now but aren't positioned to really drive growth?
- [Neil] It's a lot of hype to wade through, unfortunately. Startups that are really primed for this are the ones that are solving an actual problem. They're fulfilling that unmet customer need. And I know a lot of people look at this and say, if I can automate something, that's going to create value for businesses, but that is actually not enough of a solution. A lot of companies that I've seen and worked with implement some of this gen AI technology and don't actually reduce headcount. What they realized is, great, I've automated a few of my tasks and processes, but it's still not perfect, and I have all this other work over here that needs to get done that's a little more complex and value added, and they just repurpose their people. So I think the startups, even established companies, that are going to have success with this are the ones that are actually bringing something new to the market that offers benefits that weren't there before.
- [Ryan] Andrew, maybe you can shed some light on this: if you were talking to a company, how would you advise them to go about building an actual solution, as opposed to just cool tech that can do one or two things? To Neil's point, it might be able to optimize or automate something, but it's got to go beyond that. How does a company really evaluate whether what they're trying to do is a real solution versus just something neat to play with that might benefit a few people or a few businesses?
- [Andrew] Building a wrapper around something like ChatGPT is probably a pretty quick and easy way to start, but it's not a solution. If you build a wrapper around something OpenAI or any of these other platforms have created, you can find out pretty quickly whether there's value being generated by what you've built. And if there is, then I think there's a huge burden on the startup itself to lean in, but eventually I think you need to sever the wrapper, right? It's really good, it's impactful, but it's not trained on up to date data. I think the last time it was trained was back in 2022. I know that with new models, they're training with more up to date materials, but really what startups are gonna have to do is dedicate a decent amount of resources, once they have their POC, their proof of concept, or their MVP out there, to lean into the data that their customers actually deeply care about and train their models that way.
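To make the wrapper idea concrete, here is a minimal sketch of the kind of thin ChatGPT wrapper Andrew describes, assuming the official OpenAI Python SDK; the model name, system prompt, and summarization framing are illustrative placeholders, not anything from the episode.

```python
# A thin "wrapper product": all the intelligence lives in the hosted model;
# the startup owns only the prompt and the packaging around it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You summarize business documents for busy founders."

def summarize(text: str) -> str:
    """Send the user's document to the hosted model and return its summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The fragility Andrew points to is visible in the sketch: every capability sits behind someone else's API, so "severing the wrapper" later means replacing the client with a model you own and have trained on your customers' data.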
- [Nikolai] We also just saw that OpenAI added the ability for ChatGPT to actually read and understand PDFs, and there were a bunch of headlines saying all these startups are now gone because they built a wrapper providing that extra feature themselves. Is there any room for that anymore if the company you're building off of can just take your feature and implement it natively?
- [Andrew] I think we've seen that for a long time, right? You always have these Goliaths in the room that build technology that smaller, more nimble companies already have. The ability to use a large language model or NLP model to interpret a PDF has been around for five years, right? The companies this will obviously impact are the ones that had this technology, were selling it, and were using a very basic wrapper around ChatGPT to generate some sort of call to action. Those are the ones who are probably going to suffer the most from this very specific use case, but there have been companies out there taking on the Salesforces and LinkedIns and Microsofts forever, and it doesn't necessarily mean it's a death sentence for these startups.
- [Ryan] Some of them may actually use it as a path to an acquisition. There's a tool I found called AI PRM, an AI prompt marketplace, and I'm sure many people listening to this have used it before. It integrated directly into ChatGPT's interface and gave you prompts that were created by the community and voted up if they were useful. But with what OpenAI is doing now, giving people the ability to create prompts in its environment, push them out there, and even make money from them, I wonder what that's really going to do for the companies that tried to build that experience themselves. They basically built a prompt marketplace, so how does this prompt marketplace, which is its own software company, survive if OpenAI decides to say, yeah, we're not actually going to let you in anymore? Being platform dependent can be very scary because access can easily be shut down. At the same time, there's the other side, as mentioned: you could be acquired, which is probably the plan for some of them, but the platform could just as easily say, yeah, we're going to build it ourselves and lock you out.
- [Andrew] It's always dangerous to be handcuffed, and that's what a lot of these companies are, right? They're completely handcuffed. If OpenAI decides to revoke their access to whatever they're doing, they're in a really bad spot.
- [Ryan] How do you view the decision for a company to build on top of a platform versus build something standalone? Is there a decision making process you would go through to figure that out, or how should people be thinking about it? Because there is a risk associated with it if the platform decides to do it on their own or shut you out completely.
- [Andrew] I think it depends on the type of business you're trying to run. If you're trying to run a business that's generating a hundred thousand dollars a year, which is a good side business, I think you can build it completely on top of something. If you're trying to build something where you own the data, the tech, the model, and the tweaks, where you can manipulate it your way, there comes a point where you really need to ask yourself, do I need to de-tether? And not at first. It takes hundreds of thousands to millions of dollars and a lot of time for companies to build something useful. But with the rise of OpenAI and the dramatic rise of LLMs, there are a lot of open source platforms out there that now completely rival things like the Stanford Core that was around forever, giving people alternative ways of building.
- [Neil] I agree with that. And I think it's an interesting time, because a lot of people talk about ChatGPT since it's the big name on the scene, but we've seen the rise of a lot of other gen AI tools like Perplexity AI, which actually gives you the source it's pulling information from. I'm sure OpenAI is trying to work on something similar, so it's an interesting arms race, but how do you even make some of these decisions in the first place?
And Andrew, you actually came from a very different background as a tech entrepreneur. What was your journey like?
- [Andrew] Oh yeah, so I was a finance guy before I became a tech entrepreneur. I started a tech company when I was 15 and sold it when I was 18. And then I went into investment banking at Lehman Brothers, which lasted like a day.
I started out utilizing Google's NLP, and then I realized it was really hard to manipulate in any way that made cognitive sense. Then we looked at the Stanford Core and hired our first AI engineer, and we realized early on that building our own was too burdensome from a cost and a time perspective, so we leaned heavily on the existing LLMs that were out there. It's a tough question, Neil. To me, it's just about ROI, right? If you're going to build something on your own, can you protect it? Can you make it defensible? Because if you're a startup and you're building something that's yours, you can patent it, and it's an asset you could sell, white label, whatever you want to do at the end of the day. I think it's just a matter of time and money.
- [Neil] What about the companies that are purely API driven? Less of a platform, more of a we-do-this-one-thing-really-well, and they just license the API to whoever wants to use it. Is that a different thought process?
- [Andrew] API services for AI are tough, right? For me, I don't think I would use one unless it was very specific. We use APIs and third party vendors for the data that we use, but not for the AI itself. I barely trust the AI that we've created ourselves, and we've patented it, right? We know it to its core, but every now and then it spits out a very strange answer, and I have to ask, do I rerun it? Was it just a blip or not? With a third party AI service, you have no idea. It's hard to put limitations on a lot of these services, and it's pretty easy to manipulate them. If you're getting charged by the character an AI service is giving you, and you put a handcuff on it of, say, a hundred characters, and somebody with a pretty simple prompt knows how to get around that, you're on the hook for a potential massive charge that you really didn't plan for. At scale, that could be disastrous for your business, right, with very little control.
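Andrew's runaway-billing worry can be blunted with limits the provider enforces rather than limits buried in your own prompt text. A minimal sketch, again assuming the OpenAI Python SDK; the model name and the specific caps are illustrative numbers, not anything from the episode.

```python
from openai import OpenAI

client = OpenAI()

MAX_INPUT_CHARS = 4_000    # refuse oversized prompts instead of paying for them
MAX_OUTPUT_TOKENS = 200    # hard cap the provider enforces on the response

def guarded_completion(prompt: str) -> str:
    """Call the metered API only within pre-set size limits."""
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError(f"prompt too long ({len(prompt)} chars); refusing to send")
    response = client.chat.completions.create(
        model="gpt-4o-mini",           # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=MAX_OUTPUT_TOKENS,  # generation stops here regardless of the prompt
    )
    return response.choices[0].message.content
```

The distinction matters for the manipulation Andrew describes: a character limit stated inside the prompt can be talked around by a clever user, while a max_tokens cap in the request itself cannot, because the provider stops generating no matter what the prompt asks for.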
- [Nikolai] What options have you seen for startups to access the necessary infrastructure for AI, GPUs and whatnot? I know certain startups, if they have the right backing, their investors will get them privileged access to GPUs. But for startups that don't have those investors, what are the options out there for powering these kinds of AI products?
- [Andrew] Most of our system is on AWS. So we pay up a bit, for sure, but we know it's stable. I know there are a handful of AI infrastructure startups, though I can't recall them. But once again, I really think it depends on what you're looking for. What a lot of startups have to be aware of is how frequently you're training, and not just how frequently but the amount of data you're training with. Speaking from experience, we once left a model training machine on for three days straight, and we weren't really doing anything with it. I'm like, what is this $750 bill we got from AWS this month? We just left the machine on, right? We didn't toggle it off, and we didn't have the automation to toggle it off. You've got this massive computer that we were using for 20 minute stints, and one of our engineers just forgot to shut it down, so it kept running. Even though there was no use on it, we still got billed for it. So just be careful: either use a service that automatically shuts down when you're not using it or make sure you turn it off. It's a very simple but very easy to forget type of issue.
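The forgot-to-turn-it-off failure mode Andrew describes is easy to automate away. Here's a minimal sketch of an idle-instance reaper, assuming boto3 and EC2 training boxes tagged role=training; the tag name, CPU threshold, and lookback window are illustrative choices, not anything from the episode.

```python
# Stop tagged training instances whose recent CPU usage suggests nobody is
# using them. Run on a schedule (cron, EventBridge) so a forgotten box
# eventually stops itself instead of billing for three idle days.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_CPU_PERCENT = 5.0         # below this average CPU we call the box idle
LOOKBACK = timedelta(hours=1)  # how far back to check

def find_training_instances() -> list[str]:
    """Return the IDs of running instances tagged role=training."""
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:role", "Values": ["training"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    return [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]

def is_idle(instance_id: str) -> bool:
    """True if average CPU stayed under the threshold for the whole window."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - LOOKBACK,
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return bool(points) and all(p["Average"] < IDLE_CPU_PERCENT for p in points)

if __name__ == "__main__":
    for instance_id in find_training_instances():
        if is_idle(instance_id):
            print(f"stopping idle training instance {instance_id}")
            ec2.stop_instances(InstanceIds=[instance_id])
```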
- [Ryan] Going forward, what kind of AI businesses are really going to drive the industry forward? Do you feel like there's a group or a type or a focus of certain businesses that will really start to pull AI out of the hype cycle we're in and show real utility? To Neil's earlier point about really solving a problem, do you think certain types of companies or certain industries are going to lead that charge?
- [Andrew] I'm going to break this answer into two sections, right? One, the thing OpenAI does really well is get consumers involved. It's very easy for anybody to create an account and just try it, and I think the more exposure individual people get, the more they're willing to try things and see what makes their lives easier. So the more people we get in front of AI and its different aspects, the better; I think that springboards us toward whatever value creation and generation we'll see. As for the businesses themselves that will drive things, honestly, I think it's anything where you can take a hundred thousand dollar job and automate it, whatever that may be: concierges at hotels, maybe lower end hotels because people still like the face to face interaction, call centers, help desks. A lot of the technology out there costs 40, 50, 60 thousand bucks, a hundred thousand dollars a seat, and there are many seats, and you can reduce that to a part time worker who's overseeing it or a couple of lower level people. Whatever those industries are, and there are a lot of them, that's really where AI will thrive: where it can cut costs for businesses.
- [Neil] I agree. We know the goal of any company is to make money, and operational benefits and cost savings are the more tangible ones. Most people don't believe this is somehow going to magically create whole new markets and revolutionize product development quite yet. But I think one of the big challenges a lot of companies are facing is that they know the low hanging fruit, right? They know this can help with some research, with some customer service, but they're struggling to find the other opportunities around process improvement and operational efficiency. So I think the big question, Andrew, is what's your advice to all those companies out there trying to answer that question? How do they figure out what can be done?
- [Andrew] Have someone on your team with tremendous domain experience and knowledge. That's probably the most important role here. If you're going to go into AI for logistics and you have a finance background, you may know how to buy a logistics business, but you don't know how to run one. Get people on the ground who know all about logistics and figure out, okay, these are the inefficiencies, these are the bottlenecks, right? Why are you hiring three people for a job that you only need someone three hours a day for, or a computer two hours a day for? That's what I would seriously recommend: whatever you're focusing on, the way AI will generally help is by understanding the bottlenecks of the problem you're trying to solve. Then attack that one problem and only that one problem, and don't get distracted by the noise. There are a lot of rabbit holes you can go down that may be really cool, but if it's not answering that core question of how you reduce costs or increase revenue, and for this conversation we're talking about reducing costs, drop it. That's the only thing that should matter.
- [Neil] I like that you call out domain experts, because I find too many entrepreneurs and businesses feel their smart engineers and smart technologists will tell them what the opportunities are. The truth is, while you may have some great people working for you, a technologist doesn't know the pain points of a marketer or a doctor or some of these other people. A lot of these solutions, ideas, and opportunities get generated by domain experts. I think that's a big mind shift from traditional software development or IT projects.
- [Ryan] We see that a ton in the IoT space. On the IoT side, the people who are most successful with building targeted solutions are those who really understand the domain and the vertical they're building for. Building something very horizontal and saying it can do everything isn't really the approach that makes these companies successful; it's being able to build targeted solutions that solve a very clear pain point understood by end users, and I feel like that carries over exactly to what you're saying in the AI space. It needs to be something that companies that want to build an AI business understand and focus on. So let me transition a little and talk about the ethical side of this for organizations that are building AI businesses. Obviously, the tech is moving forward very quickly, but there are lots of ethical considerations to think about. If you're a company building AI, what should be at the forefront of your mind on the ethical side, given how quickly the technology is moving, and how do you protect the customer you're building for?
- [Andrew] It's a really challenging question. I think the first thing, which we all inherently know, is: is this the right thing or the wrong thing to build? That should be the number one consideration. Is this technology that really should be out there or not? You can use AI to build something that just spews anti-Semitic slurs across Twitter. Is that a good thing to build or not? It's obviously not, and I think we can all agree on that. Second, if you're using technology that's not your own, be aware that the company behind it probably has your data now. That's a big thing. If you're using these API services, your data is probably now theirs to do whatever they want with; that's just part of the terms of service of a lot of these platforms. So if you're transacting on medical data, for example, you have to be very careful about who your third party vendors are. You also have to be aware that if you train on historical data that's biased, your AI will generate biased responses.
What I tell people in the AI space is this: I would rather give you less information that I have high confidence is correct than more information where my confidence in its accuracy really starts to drop. I wish a lot of other companies would take that approach, because it seems like everyone's just giving everything at all times. It's a deep rooted issue, right? You can go all the way to the extreme of deepfakes, and I think eventually we need some sort of regulation to rein a lot of this back. But the problem with tech has always been that it's a global issue, not a national issue.
- [Neil] It's gotten to the point where even in the United Nations, there's discussion about whether fake information or disinformation should be classified as a lethal autonomous weapon. That's the extent of it. We've seen deepfakes being used in elections; people are calling the 2024 U.S. presidential election the deepfake election. How are people supposed to handle that? Everyone says, can't we just authenticate? People talk about watermarks, but those watermarks have already been faked. I don't want to be all gloom and doom here, but when it comes to technology in general, it's not just a technology thing. We always talk about people, process, technology. You introduce a new technology, your business processes are going to change to some degree, small or large, but there's also a people impact. Beyond jobs, it's that how we actually think and do things will alter. You see someone in a photo, or you hear them, or you see them in a video, and you instinctively trust that. That's not the case anymore.
- [Andrew] The problem is, do people have the patience, do they care enough, or, as a much larger meta question, do people care about the truth, right? It's very easy right now to manipulate the presidential election at scale. All I need to do is create a deepfake of anybody out there saying something sexist or racist or aggressive or whatever it may be and post it on Twitter, then have 45 different accounts retweet it to trick the algorithm into promoting it a little more. Before you know it, it's got 5 million views, and even if it comes out that it was fake, the impressions are already done.
- [Nikolai] Well, it's because people already believe what they believe. When they see the AI thing, it's just reinforcement. It didn't need to be true; it just had to complement what they already believed. That's why even when it's revealed to be false, it doesn't change anything.
- [Ryan] Given that this is something many people are paying attention to and have heard about, move it into the corporate, commercial side of things: how do companies that are building tools assure the customers they're working with or selling to that the experience you're going to have with our AI tool is not what you're hearing in the media, that it's safer, it's more secure? How do companies approach building that kind of reputation into their tool?
- [Andrew] Don't sell the data, and make that very clear. I think it also has to do with limiting the scope of what you're selling. If you say, okay, look, I'm building this AI tool, it's going to improve this process by 5 or 10 percent and let you get rid of 2 percent of your overhead, I don't think a lot of companies are going to worry about deepfakes and AI generated issues in other scopes. The companies out there trying to do too much, trying to do things that are consumer facing, have a much harder time than very specific, non horizontal AI tech companies. But that is one of the things we have to deal with: what do we do with the data? We essentially have user generated training data, right? Our model is relatively simple: what you put in helps us train it further, but we don't actually look at the data itself; it's automatically piped. But if I'm pitching a sales team and they ask, what's up with my data, what are you going to do with it? I say it's completely secure, don't worry about it. Well, do you see it? No. Then how is it getting better? The way I explain it is: think of it as a black box. I put the data in, I run a model, that model spits out some statistics, and if it's better after a spot check, I use it. That's the challenging part, reassuring people that their data is secure, and explaining that to people who don't necessarily understand how the technology works is challenging.
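Andrew's black box description maps onto a common promote-if-better pattern: retrain on automatically piped user data, compare against the current model on a held-out spot check set, and ship the new model only if it wins. A minimal sketch under assumed names; the train and evaluate callables are hypothetical stand-ins, since Riley's actual pipeline isn't described in detail.

```python
from typing import Any, Callable

def retrain_and_maybe_promote(
    current_model: Any,
    new_batch: list,                          # user-generated data, piped in automatically
    holdout: list,                            # fixed spot-check set nobody trains on
    train: Callable[[Any, list], Any],        # hypothetical: returns a retrained model
    evaluate: Callable[[Any, list], float],   # hypothetical: higher score is better
) -> Any:
    """Promote the retrained model only if the spot check improves."""
    candidate = train(current_model, new_batch)
    if evaluate(candidate, holdout) > evaluate(current_model, holdout):
        return candidate    # spot check passed: serve the new model
    return current_model    # otherwise keep what's already in production
```

The point of the pattern for Andrew's pitch is that no human reads the raw customer data; only aggregate statistics from the evaluation step are ever inspected.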
- [Ryan] Let me ask you, as we wrap up here, to summarize everything we've been talking about. If I'm listening to this and I'm working on an AI tool or AI solution of some kind, what do you think are the most important things for turning it into a business, for allowing a startup or smaller AI business to grow and have a better chance of success than others? What would be your general advice there?
- [Andrew] First step, listen to more AI For All. Get some real insight here. I would say start small, start with a real problem. Don't spend any money developing it yourself right now.
- [Ryan] Use what's out there to prove it out?
- [Andrew] Use what's out there to prove it out, and then, when you're ready to really sell it, make a business decision about whether it's time to build it yourself and whether you parallelize that or not. Make sure the data is secure, make sure you understand what model you're using and what tweaks you're making, and get advisors who can really help you from a high level and also from an actual execution perspective.
- [Ryan] We made a good point earlier in our conversation too: once you prove it out, then you can figure out how you're going to protect it and how to build something that can actually be turned into a business, as opposed to something that's platform dependent and gets shut down or is at the mercy of those larger platforms. So I think that's great advice. Neil, any last words from you in that same vein, for companies looking to grow out of that startup phase and have a chance of success?
- [Neil] It's no secret: you've got to do something that creates a competitive advantage for you. So at some point, as Andrew was talking about, you have to detach yourself as much as you can and be a standalone. What is it they say? It's a marathon, not a sprint. Hopefully we're ending this a bit on an up note, because I know part of the conversation was a downer.
- [Ryan] No, I agree. I think it's been a really good conversation. We've been able to focus on a more general view of what's going on in the business side of AI right now. Lots of tools, lots of cool tech being showcased, driving the hype, but at the end of the day, the ones that survive are going to be the ones that provide real utility through a real solution, and I think you've both painted a really good picture of how to get there. There's been a lot of value in this conversation. So Andrew, thank you so much for being here. For our audience who wants to learn more about what you have going on, follow up, and engage, what's the best way to do that?
- [Andrew] Yeah. Check out riley.ai. You can email me [email protected], or you could download us on the app store.
- [Ryan] Thank you so much for being here, and we look forward to getting this out to our audience.
- [Andrew] Great. Thanks everyone.
Special Guest
Andrew Reiner - Co-Founder & CEO, Riley

Hosted By
AI For All