
Building Responsible AI

E32 | With AMP Solutions' Lisa Thee
Updated Feb 8, 2024

In this episode of the AI For All Podcast, Lisa Thee, CEO of AMP Solutions, joins Ryan Chacon and Neil Sahota to discuss the ethics and responsible use of AI. Lisa talks about the evolution of AI, its applications in healthcare and online safety, and her career journey in the AI space as an intrapreneur. Lisa delves into the importance of online trust and safety in today's interconnected world and how to develop ethical, responsible AI. Lisa anticipates the future of AI, calling for more transparency and regulation as these powerful tools continue to evolve.
About Lisa Thee
Lisa Thee is a renowned thought leader and expert in the fields of artificial intelligence, career transformation, and ethical technology. With over two decades of experience, Lisa has dedicated her career to driving innovation and empowering individuals and organizations to thrive in an ever-changing world. Her passion for harnessing the potential of AI to create positive change led her to become a sought-after international keynote speaker. She has been honored with numerous awards, including the 2023 International Impact Book Award for her groundbreaking book "Go! Reboot Your Career in 90 Days", the 2023 Gold Viddy Award for the long-form series "Navigating Abroad", and the Gold award for Executive of the Year at the 2023 Stevie Awards for Women in Business.
Interested in connecting with Lisa? Reach out on LinkedIn!
About AMP Solutions
AMP Solutions is a consulting firm focused on AI ethics, trust and safety, and women in STEM.
Transcript:
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon. With me is my co-host, Neil Sahota, the AI Advisor to the UN and one of the founders of AI for Good. Neil, how's it going?
- [Neil] Hey, I'm doing great, Ryan. I'm looking forward to a really timely topic today on our show.
- [Ryan] Yeah, a lot of good topics to get into. A fantastic guest with us as well. Before I introduce her, let me go ahead and give you guys a quick overview of what our focus is today. So, we're going to be talking about AI in ethics, how companies can build responsible technology, how all that fits together. To discuss this today, we have Lisa Thee, the CEO of AMP Solutions, a consulting firm focused on AI ethics, trust and safety, and women in STEM. Lisa, thanks for being on the podcast.
- [Lisa] Thank you so much for having me, Ryan.
- [Ryan] So, I wanted to kick this off and dive a little bit into your background because it is a very impressive background. You have a lot of interesting stuff that you have done in your career, and having been in the AI space for over two decades, how has AI changed in that time? Can you frame it for the people who may be new to the AI space, just to give a sense of where we've come from and where we are now?
- [Lisa] Absolutely. To be fair, I do have a technical background. I did study industrial engineering in college, but at that time I was much more focused on the automotive industry than the applications of AI.
So my interest really sparked in that space in 2015 when I was working for Intel Corporation. I was doing business development for their data storage group, and so we were looking at different workloads that needed a lot of compute, and we were thinking through different applications that could start to drive consumption.
And one of the places that really sparked my interest was around the applications of AI for opportunities in healthcare. And they were looking at things like taking the genomic code and personalized medicine and integrating that to make sure that the treatments that cancer patients were getting were the most effective for their body.
And I actually had a co-worker who went from a six-months-to-live diagnosis from his doctor to still being there eight years later, raising his child, and that really inspired me to think, wow, if we can change the course of cancer and its impact on families, what could we do in areas that affect marginalized groups like women and children, who maybe don't get as many resources?
And so that inspired my journey to start as an intrapreneur a little bit later in my career. And I stepped out in my forties to do my own company called Minor Guard, which was an AI software startup that spun out of some of the work that we did with the National Center for Missing and Exploited Children and THORN.
- [Ryan] Very cool. So tell us a little bit about what you kind of have going on now. What's the main focus of kind of the work you're doing?
- [Lisa] Sure. I'm wrapping up my fourth year of management consulting in AI for Good and online safety. I've gotten to work with some of the larger tech companies as well as some really innovative research hospitals in areas that benefit society: again, disrupting the abuse of children online and trafficking, as well as innovation in healthcare and getting better diagnostics for people. As I wrap that work up, my specific focus is spreading the message of the benefits of intrapreneurship and entrepreneurship to more people in the STEM field.
I never thought I would be working in AI, so I want to help people realize that it's definitely something you can learn. I remember my first year in the field being surrounded by data scientists using different language than I was used to hearing, talking about contextual inquiries and other jargon I didn't know how to connect with. But the more I sat around and listened, the more I was able to connect it back to something I already knew. And I think there are a lot of people in the workforce who are really good at what they do for the business but are a little intimidated by the technology that's crowding into their space. I want to do more to demystify that, so people recognize that AI can be the best assistant you've ever had, not a competitor for your job.
- [Neil] I think that's fantastic, Lisa. There's a lot to unpack there, but it's very timely, especially with congressional hearings going on right now with some of the top tech leaders, and I think some people are really surprised by how bad actors have weaponized the technology. I don't know if you could share a little bit about that, but also balance it out with how we're actually using the technology to protect kids.
- [Lisa] So, my entree was to look at what we could do to help the best tech companies in the world get reports of the online abuse of children into law enforcement's hands as fast as possible.
The way the problem was framed for us was: we have these millions of reports that come in each year, and we have 25 analysts who need to validate that the reports are accurate and get them designated to the right law enforcement agency where we think the crime is occurring. And the stakes are high. It's a felony, and it's not acceptable. And obviously it is very risky data to be working with. Having a background in dealing with risky data from my day job, we were able to bring in some of those insights and help accelerate that from a 30 day turnaround down to 24 hours, using a mix of automation and AI to identify when a criminal made a mistake and gave away their location while committing their crime online.
And working on that, I saw the effect a team can have coming together, and that was a team of non-technical and technical people who approached that public-private partnership together. I led the Intel side of that team alongside Google and Microsoft, and we helped implement it in 2017. And once I saw what could be done using automation and AI, I was hooked.
Using more modern techniques, like nearest neighbor searches and deep learning, we were able to put tools in the hands of law enforcement that recovered 130 children in the first month of use. That went into production in 2018, and from there, we created software to help block kids from making this mistake in the first place. It's been an extension of learning the technology to solve a problem I'm passionate about, much less driven by a passion for the technology itself. That's the message I want to bring to more people: if you have a problem that's an itch you've just got to scratch, maybe it's the environment, maybe it's the elderly, maybe it's mental health, this is one of those innovation breakthroughs that can help solve problems that have been around for generations. It's a really unique and exciting time to develop some data literacy and some up-leveled skills so you can start to make an impact in your career beyond just the paycheck and get that double bottom-line impact. It's a very exciting time for that.
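To make the nearest neighbor idea concrete, here is a minimal sketch in Python of the search step only. It assumes images have already been converted into embedding vectors by some model; the array sizes and names below are illustrative and do not reflect the actual law enforcement tooling Lisa describes.

```python
import numpy as np

def nearest_neighbors(query_vec, reference_vecs, k=5):
    """Return the indices of the k reference embeddings most similar to the
    query, using cosine similarity. Embeddings are assumed to be precomputed
    by an image model; this function only shows the search step."""
    q = query_vec / np.linalg.norm(query_vec)
    refs = reference_vecs / np.linalg.norm(reference_vecs, axis=1, keepdims=True)
    sims = refs @ q                # cosine similarity to every reference
    return np.argsort(-sims)[:k]   # indices of the k closest matches

# Usage with random placeholder data: 10,000 reference images, 512-dim embeddings.
rng = np.random.default_rng(0)
reference = rng.normal(size=(10_000, 512))
query = rng.normal(size=512)
print(nearest_neighbors(query, reference, k=3))
```

At production scale, an approximate nearest neighbor index would typically replace the brute-force similarity scan shown here, but the matching logic is the same.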
- [Neil] That's awesome. That's some phenomenal work. I really want to touch on something you said that I think was very profound and that's a common theme on this show: it's not the technology itself, it's really about trying to address a problem.
- [Lisa] And then I think the other good lesson for the business community is that sometimes you can drive innovation in places where you may not recognize revenue from the problem you're solving, but the learning your organization gains, and the ability to tell a story about why your technology matters, can very much come through those engagements.
So, let me give you an example. At the end of the day, while we were building all this technology pro bono, we were also learning a lot. They had a lot of data, and it was sensitive data. Some of it needed to stay on-premise because you couldn't move it. Some of it could be moved to the cloud, but there were cybersecurity issues, so they needed cybersecurity audits to make sure you could do that. And then we needed to work with the cloud providers to optimize workloads and decide what was better to run in the cloud versus what needed to run on-prem. Some of the workloads were images, some were videos. We learned a lot about that technology as well.
And so by doing a pilot in a place of passion, we actually built solutions that scaled into all sorts of three letter agency solutions well beyond this problem statement. People who traffic humans often run drugs or guns too, so you can also think about it from a financial analyst's point of view: anti-money laundering technology, using AI to detect the criminal movement of money, can be built on this foundation. Or maybe you're building facial recognition algorithms. What we learned was that algorithms trained on adult male faces didn't work so well on our population, which tends to be diverse teenage female faces. So we were able to optimize our software, going from a 70% recognition level to 99% accuracy by applying more modern techniques and deploying them at scale. When you have better facial recognition algorithms, those have great uses for retail, for asset management, for making sure you don't have theft and all the other criminal activity that piggybacks on here.
The same way criminals can be early adopters who use technology to do bad things, we can apply that technology as a defense. I think in the future it's often going to be AI versus AI: the people trying to protect the system and the people trying to find weaknesses in it, each side evolving alongside the other.
And what I can tell you is this. In 2015, at least when I got involved in this working hands-on with law enforcement and those agencies, boy the bad guys had better technology. They had better money, they had more access to resources, they had more access to tools, and we just wanted to help level the playing field a little bit.
- [Ryan] So, I think a lot of this ties back to a few different topics worth expanding on. One is the ethical side of all this technology and AI, and the other is trust and safety online, which I think over the last number of years has really been called into question, whether people are on social media or on other sites.
So, through the work that you've done, and given where we are now as a society and where the technology is from a maturity standpoint, how is AI really contributing to making the internet a safer and more trustworthy place for people? We've obviously all heard about deepfakes and the big issues there. But how will we be able to combat that going forward? How can companies combat that and do everything they can to protect themselves?
- [Lisa] Yeah, so I think one of the better applications I've seen for AI is as a detection and sorting tool. When you're looking for a certain terms of service violation that you want to make sure you flag, it's really hard with the volume of content uploaded to all the major platforms around the globe. It would take lifetimes to sift through everything before it's posted. So one of the better uses of AI I've seen is lightweight scanning that allows us to find problematic content, much like the tools that prevent hackers from getting into your device in the first place. Using hashing technology and doing AI-assisted matching against known illegal content is one of the best ways I've seen this applied.
One of the things I'm more excited about for the future is predictive analytics: being able to scan things with homomorphic encryption and other technology stacks that allow us to retain privacy while also making sure that criminals are not benefiting from privacy to disrupt the privacy of their victims.
So I think the technology has come a long way, with federated learning and confidential computing and some of these more modern techniques for looking at content while maintaining privacy. But what concerns me greatly is that we have had so much conversation around the right to privacy that I think people don't realize the trade-off and the cost when you have end-to-end encryption. Today, the majority of reports that come to law enforcement come through social media sites because they actually scan for this material, they're looking to find it, and they report it as fast as they can. But in an environment where a company like Meta, for instance, moves to a strategy of end-to-end encryption, like they have with WhatsApp, all of a sudden you've turned off the visibility into how much of this content is actually circulating. You didn't change the crime, you just closed your eyes. When I did my TED Talk on this topic in 2021, there were 21.7 million reports to law enforcement that year, and 90% of those came from Meta. So it's easy to blame these companies and say you're creating this playground, but it's also important to really support them in understanding when people are abusing and misapplying their platforms. That's where I think AI can be really helpful: identifying problem areas and putting in solutions that disrupt that activity, whether in the real world or online, while reducing the amount of trauma for the people who have to look at it.
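As a concrete illustration of the hash-and-match screening Lisa describes, here is a minimal sketch assuming a hypothetical set of previously verified hashes. Production systems typically rely on perceptual hashes (for example PhotoDNA) supplied by organizations such as NCMEC so that re-encoded or lightly edited copies still match; the exact SHA-256 comparison below is purely illustrative.

```python
import hashlib

# Hypothetical hash list of previously verified illegal content.
# Real systems use perceptual hashes so near-duplicates still match.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str) -> str:
    """Hash an uploaded file in chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def should_escalate(path: str) -> bool:
    """True if the upload matches known content and should go to human review
    and the platform's law enforcement reporting pipeline."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```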
- [Neil] You've touched on a couple of critical points here, but one I really want to emphasize is that a lot of people just don't even know what data is being collected about them. It's hard to respond or come up with safeguards if you don't know what's actually being collected and how it could be used or misused. What does the average business or the average person do in that regard?
- [Lisa] That's a great question, and I do think that some of the regulatory bodies have made movements since my early days in this, when it was the wild west. Luckily, with things like GDPR, we have some rules of the road in that space. I think we're about to have a renaissance in the AI area as well, with the AI Act passing and some of the digital safety acts coming online. It's going to take time for us to find the right balance, moving away from the wild west of do whatever you want and hope for the best, which really set in during the 90s with Section 230 of the Communications Decency Act, when platforms had very extensive immunity for many crimes conducted on them. They didn't have a large incentive to do that work; it was more voluntary. I think with regulations changing, there's going to be more accountability. I love some of the mandates coming out of Australia's eSafety Commissioner. You're starting to see people with tech experience moving into government roles who can ask questions and hold companies accountable at the level that's necessary. But I also think that if we stopped innovating today, we would probably still be five years out from getting our regulation caught up to where the technology is, and nobody's stopping. So this is going to be an important time where people vote with their feet and learn about the privacy and safety of their platforms. Or, when they're building AI tools, for example, they look for public ethics statements on their partners' websites about what they will and won't do with data and what they will and won't do with AI.
As a company founder, I would have to ask myself: if I build this technology to rescue a child, but it's sold to an adversary through an acquisition by another company I have no control over, did I just build something that can be completely misapplied? And if the answer was yes, it always started with not "can we build it?" but "should we build it?", and from there moved to how do we build it safely. It's really that safety by design that I think is so critical.
- [Ryan] One of the topics we've talked about before is where responsibility falls and how a company can approach building responsible technology. When we hear companies saying they're building responsible technology, what does that actually mean, and how can companies do it? If a company is looking to build some kind of tool or solution and promote it as responsibly built, how do you check the boxes to make sure it really is being built responsibly for the customer and the potential end users, really thinking through all the different edge cases that could affect them?
- [Lisa] Yeah, so I think a lot of this is recognizing that ethics and responsibility are going to be a team sport. They're not a solo sport. So you're not going to find one person that's going to know which corners to look around for every problem that you might have. So I think that oftentimes this is a great place to engage consultants. The reason for that is they probably are able to see trends across industries and bring in players that might have great expertise to address the problems that you have.
So I think a lot of times it starts with the business problem and being really clear on what you're solving for, making sure the technology is a good match for that problem, not just picking a technology and then trying to apply it to the problems you have. Then, when you find the right problem that AI is uniquely good at accelerating, it's a matter of making sure the hardware and software vendors you select are building their systems to be cybersecure and are responsible in how they develop them. By choosing vendors with public statements and piecing that together, you can make sure you're not creating loopholes for bad actors to navigate through.
And none of this is perfect. It's going to be iterative. I think we will always be in a cycle of some technical debt and gotchas, but you should at least start by surrounding yourself with an ecosystem of people who can play devil's advocate for you, consultants who help you think through risk analysis, and software-as-a-service vendors who help you understand your threat landscape today. It's hard to fix something if you don't even know where you are, so getting those kinds of assessments is important. And then, last but not least, partner with people who have already solved some of these things, so their solutions are embedded in the products you're purchasing versus you having to build it all from scratch. One of the trends I think is really interesting in AI, especially as people are building, is synthetic data generation. I think that's going to be a place where people can be more responsible in how they build and curate models, especially custom LLMs for certain industries and applications.
So I am a board advisor to Nurdle AI, and the reason I chose to align with them is their strong background in safety. Learning on some of the worst things on the internet makes you really good at creating data, because you have to train models and you can't use personally identifiable information. So they've got some really cool IP around how to protect privacy and do that responsibly.
I'd also look at where there are industry consortiums where you can beg, borrow, and steal technology versus building it from scratch, both for sustainability reasons and for difficult topics like the one I've discussed today.
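For readers who want a feel for what synthetic data means in practice, here is a minimal sketch that fabricates structured, PII-free records for training or testing a content classifier. Everything in it, including the field names and labels, is an illustrative assumption and has nothing to do with Nurdle AI's actual products.

```python
import random
import string

# Hypothetical names and labels; nothing here comes from any real schema.
FIRST_NAMES = ["Alex", "Sam", "Jordan", "Taylor", "Riley"]
DOMAINS = ["example.com", "example.org"]

def synthetic_support_ticket(rng: random.Random) -> dict:
    """Generate one fake support record with realistic structure but no real
    person's data, suitable for training or testing a classifier."""
    name = rng.choice(FIRST_NAMES)
    user_id = "".join(rng.choices(string.ascii_lowercase + string.digits, k=8))
    return {
        "user": f"{name.lower()}.{user_id}@{rng.choice(DOMAINS)}",
        "message": f"Hi, I'm {name} and I can't reset my password.",
        "label": rng.choice(["benign", "policy_violation"]),
    }

rng = random.Random(42)
dataset = [synthetic_support_ticket(rng) for _ in range(1000)]
print(dataset[0])
```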
- [Neil] Sounds like you're really advising people that you don't have to learn all these different things. Don't reinvent the wheel, learn some of the tools, APIs, things that are actually already out there and how you can leverage them, which I think is fantastic advice. But I also see the, I'll call it the other extreme end where a lot of organizations and people think I'll just wait until someone creates something, and I'll buy like off the shelf software. AI doesn't quite work that way. I'm sure that, unfortunately, no one was working on a whole bunch of AI tools to help children in need, right?
- [Lisa] Yeah. One example of how I've seen that rolling out: I got the chance on my podcast, Navigating Forward, to interview the president of Check Point Software. What she talked about was that they had 20 years of cybersecurity threat data that they were able to train their models on and actually get AI integrated into their software suite, so you have an AI guarding the gate for you based on all the historical learnings, plus everything you experience going forward, as part of your technology stack.
So I think that there are a lot of ways where people can integrate the benefits of AI without having to build from scratch. And in those cases where you do need to build from scratch, I think bringing in experts to help you build that responsibly and using more synthetic data sources is the way to go.
- [Neil] Synthetic data has been a boon in AI because obviously you need data to train the system. Sometimes we don't have enough of it or enough good quality of it. You're essentially creating fake data that looks like real data. Is there any concern that the synthetic data we're creating may not be accurate enough?
- [Lisa] So that's one instance; it wouldn't be used for every use case. An example going the other way: there are use cases we've worked on where you really can't fake the data. It's not going to be effective enough. Or even if you de-identify the data, it won't be effective enough for your use case.
We worked with a large research hospital and their Center for Digital Transformation to look at how to get enough data to train models for medical applications in clinical settings. The FDA requires that you reduce bias in your models and prove that gender or age or ethnicity are not going to dramatically reduce the caliber of the model's predictions. They had delivered one of the first clinical applications in partnership with General Electric for its critical care suite, had learned a lot through that project, and had some IP they wanted to spin out into a company. What we helped them work through was how to use techniques from the intelligence community and other very restricted-access data communities to bring the models to the data rather than the data to the models, because hospitals are not going to release patient records, and everything needs to be done in a HIPAA-compliant, cybersecure way. And then, how do we make sure the whole solution stack is a black box with confidential computing capabilities, where the model trainer can't see the data, the data steward can't see the model, and the third party bringing it together to do the validation reporting inside that black box can't see either party's assets?
That company is now a standalone company called BeeKeeperAI. They have an escrow product out, and you can check it out. It's built on Microsoft's confidential computing stack with Fortanix. Those are the kinds of things that get me excited, because, as somebody with a rare disease myself, I am statistically more likely to get struck by lightning twice in my lifetime than to have the disease I have. So you can imagine how long it took to get doctors to recognize it when there's no blood test for it. In my case, 11 years. 11 years of not knowing something's wrong when something's really wrong. Luckily, now that I have a diagnosis, I have better medication and my quality of life has improved a lot. But in order to get things like that diagnosed, you need a doctor who can recognize symptoms they maybe don't see every day. I think that's the promise of AI: a doctor is going to see what's in front of them, but having data that says, hey, did you check for this, at that early stage might be the thing that gets them open-minded about something different. Or, in the case of federated learning and why we were bringing data sets together: if you have a rare disease, even if you're the Mayo Clinic, you don't have enough cases to statistically show that you're not being biased in finding the treatments that match that disease. We found that you needed three different hospitals' worth of data to get the numbers. I don't need to know the name of the patient who had the disease. I don't need to know anything personally identifiable about them. I just need to know the number: how many people had this type of cancer, got this treatment, and had a positive outcome. Those are the ways I think AI is really good at producing that number without violating anybody's privacy.
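Here is a minimal sketch of the "bring the question to the data, only share the number" pattern Lisa describes: each hospital computes its counts locally, and only the aggregates cross institutional boundaries. The record fields and class names are assumptions for illustration; real systems such as the BeeKeeperAI escrow she mentions add confidential computing, attestation, and auditing on top, and nothing below reflects their actual API.

```python
from dataclasses import dataclass

@dataclass
class Hospital:
    """One site's private records. Raw records never leave this object."""
    name: str
    records: list  # each record: {"diagnosis": str, "treatment": str, "outcome": str}

    def local_counts(self, diagnosis: str, treatment: str) -> tuple:
        """Return (positive outcomes, total cases) for one diagnosis/treatment pair."""
        cases = [r for r in self.records
                 if r["diagnosis"] == diagnosis and r["treatment"] == treatment]
        positives = sum(1 for r in cases if r["outcome"] == "positive")
        return positives, len(cases)

def pooled_response_rate(hospitals, diagnosis, treatment):
    """Aggregate only the counts from each site: no names, no records shared."""
    pos = total = 0
    for hospital in hospitals:
        p, t = hospital.local_counts(diagnosis, treatment)
        pos, total = pos + p, total + t
    return pos / total if total else None

# Usage with made-up data across three sites.
sites = [
    Hospital("A", [{"diagnosis": "rare_cancer", "treatment": "drug_x", "outcome": "positive"}]),
    Hospital("B", [{"diagnosis": "rare_cancer", "treatment": "drug_x", "outcome": "negative"}]),
    Hospital("C", [{"diagnosis": "rare_cancer", "treatment": "drug_x", "outcome": "positive"}]),
]
print(pooled_response_rate(sites, "rare_cancer", "drug_x"))
```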
- [Ryan] As we wrap up here, I wanted to ask you something that brings all of this together. Just talk about what's happening in society now and where we're going. From your perspective, where do you think we're headed as a society when it comes to AI?
- [Lisa] So, I had the honor of hosting the main stage for the World AI Festival in Cannes last year, and I'm doing it again this year. I'm in my preparations right now, talking to a lot of big thought leaders who have much more information than I do about these things, so I would rather reflect the consensus of what I've heard in those preparations than my own personal opinion, if that's all right. From talking in the last week or so with professors at the Sorbonne and Oxford and Stanford and all those fancy schools I didn't get into, and with some of the leaders of research at Meta and IBM and other places, I'm hearing some themes. One is that these tools are probably too powerful for any one company to hold privately, so I hear a lot of momentum supporting the open source community as a way to make sure we have some transparency and accountability in what's being built.
Secondly, what I'm hearing a lot is that the experts are not super worried about today's AI scaling up to what you see in science fiction, but there are some research directions being explored that could lead to something less like artificial intelligence and more like artificial consciousness, something that understands it's alive and wants to defend itself. That's a whole different ball of wax.
And so I think that, in the short term, we probably need to regulate around things we just wouldn't want whether they happened online or offline. Those objectively need to be regulated, like consumer protection. We can use some of those existing laws instead of creating big AI bills that nobody really understands or knows how to implement.
On the longer term, the extinction-level concerns that people have, I do think we really need a coordinated movement to make sure we're doing this responsibly. I hear a lot of conversation about whether, if we build responsibly and other nation states don't, we'll be at a huge deficiency. I like to look more at the places where we can overlap and agree than at what polarizes us. So with places like the United Nations convening AI for Good, where people are sharing information about what's possible, I hope those frameworks and approaches are being more broadly utilized over time to make sure we don't have unintended consequences. But with AI, the genie's out of the bottle. Humans today design it and build it, and that hasn't changed yet. So we just need to be really conscious about what we build going forward and how we design around it.
- [Neil] Lisa, if our audience is interested in learning more about you and your work, what's the best way for them to stay in touch?
- [Lisa] Sure. You can reach me at my website, lisathee.com, just my first name and last name. On there you can see my signature keynote about embracing trust and ethics for AI. You can also see my keynote around career agility. I'm really passionate about helping people in STEM get more mission and meaning into their careers, but also have sustainable jobs, because if you don't make money, that's called a hobby. And hobbies are cool, but I don't know about you guys, I don't have a trust fund, so I still need to stay above water, and I think a lot of people are in that position. So you can check out my website, you can buy my book on Amazon, it's called Go! Reboot Your Career in 90 Days, and you can check out my new podcast, Go Reboot Your Life, which is launching on February 14th. It's my Valentine's Day present to myself.
- [Ryan] Congrats on the upcoming launch. We'll make sure everybody knows where to check it out, along with the book, the website, and all that good stuff. But as Neil said, fantastic conversation. Really appreciate you taking the time to come on and talk to us about your experiences and views on what's going on in the AI industry and offering advice to our audience. So really appreciate it, and thank you for being here.
- [Lisa] Thank you so much for the opportunity. I had a great time.
Special Guest
Lisa Thee

Hosted By
AI For All
Subscribe to Our Podcast
YouTube
Apple Podcasts
Google Podcasts
Spotify
Amazon Music
Overcast