On this episode of the AI For All Podcast, Lia DiBello, Chief Science Officer at ACSI Labs, joins Ryan Chacon and Neil Sahota to discuss accelerated learning with AI. They talk about the business applications of accelerated learning, cloning expertise, how accelerated learning saved a biotech company from ruin, data-driven decision-making, human cognition, creative thinking, and how fear is learned.
About Lia DiBello
Dr. Lia DiBello is the innovator behind the FutureView™ Platform and the cognitive science behind it. Her research has been funded by numerous awards from the National Science Foundation, NASA, and the National Academies of Sciences. DiBello is best known for developing a particular kind of activity-based “strategic rehearsal” approach that has been shown to greatly accelerate learning through cognitive reorganization.
Studies of over 7,000 people at all levels exposed to DiBello’s methods indicate that learning was accelerated by several months in all cases. The method is now delivered via the FutureView™ Platform to companies across four continents in industries as diverse as mining, transportation, financial services, IT implementation, manufacturing, and pharmaceuticals.
Interested in connecting with Lia? Reach out on LinkedIn!
About ACSI Labs
ACSI Labs develops two kinds of applications: smart virtual worlds for wargaming difficult business problems, and accelerated learning and performance solutions for businesses, the military, and law enforcement.
- [Ryan] Welcome everyone to another episode of the AI For All Podcast. I'm Ryan Chacon and with me is my co-host Neil Sahota, the AI Advisor to the UN and the founder of AI for Good. Neil, how's it going?
- [Neil] I'm doing all right. How about yourself Ryan?
- [Ryan] Not too bad. I know you can't tell, but we actually moved to a new, bigger studio. Same background but more space. Today's episode, we're going to be talking a lot about accelerated learning. We're going to talk about how you can clone expertise, how you can enhance creative thinking, a lot of really interesting topics. With us today is Lia DiBello, the Chief Science Officer at ACSI Labs. Lia, thanks for being here today.
- [Lia] Thank you.
- [Ryan] Really looking forward to this conversation ever since we got it scheduled. And what I'd like to do is kick this off and talk a little bit about accelerated learning just to give context to our audience and explain to them what is accelerated learning, what is the science behind that, how is AI helping us accelerate learning, just high level it for our audience and kick things off.
- [Lia] Accelerated learning is not original with me. It's an area of cognitive science, and in fact I'm a co-author of a book with other people who've explored this, but there are different branches of it. The basic assumption flies in the face of the 10,000 hour rule, the idea that becoming an expert takes a lot of chronological time. In fact, chronological time has very little to do with it, because the brain doesn't really have a sense of time. It has a sense of experience, or iterations of experience.
So you can accelerate learning, and I think this has been done with elite troops in the military for a long time, by giving people a lot of rehearsal practice with difficult problems and showing them the consequences of their decisions in rapid time. Accelerated learning is basically a cognitive remodeling approach where our default theories of a particular domain are reorganized through iterative experience in compressed time.
And yes, it seems like it would be very stressful, but the brain doesn't mind that. Again, it doesn't experience it that way if it's your area of expertise or your area of interest.
- [Ryan] Interesting. So if the person going through this feels stressed, you're saying the brain isn't necessarily feeling that emotion; it's more quickly digesting, learning, and growing.
- [Lia] There's a little bit of a trick to it. You don't want to stress people so much that you activate the fight-or-flight response. That's why games are fun. Games are a little bit threatening to the part of the brain that is responsible for accelerated learning. You like to win because winning is seen as adaptive and survival oriented. Losing is a little bit threatening to the more primitive parts of the brain. So we gamify things, and we compress time, and people enter a state of flow when they're using our technologies.
- [Ryan] When we talk about technology being incorporated into this, and specifically AI, how are you and others using AI to make accelerated learning possible outside of the military application you mentioned? Generally speaking, what is being done to adapt this across different industries, to different types of people in different kinds of roles?
- [Lia] AI is just another tool. As a species, we're inherently tool users and tool appropriators. The first tool we used to change our cognition was language. We know the brain is deeply affected by learning a language as a baby. We end up with a completely different structure, a way of sorting ideas and arranging things in our heads, because we have this tool.
In the 70s and 80s, there was an influx of complex technologies, literally making the available information about a million times more complicated than it had ever been in human history. I was a professor at the time, and there was a lot of debate about whether or not people would adapt, but they did.
People inherently adapt to new tools, whether they're information technologies, AI, or a hammer. It's what we do. And I think that AI, instead of being very scary, is actually very promising, because it can really extend your cognition and help you focus on higher order expertise. Not how to do something, but whether it should be done at all, and then getting AI to do it for you.
- [Neil] I know there's a lot of AI training tools out there, and some VR training tools. Lia, it sounds like, at least from what I've seen, a lot of what they do is automate existing training and education, basically copying the learning models over. You're talking about actually different learning models that more efficiently wire the brain. Is that the case?
- [Lia] Yes. And we don't learn unless we have agency. So when we enter into an environment, including FutureView, we have to be able to make almost infinite choices. And agency is what triggers accelerated learning. We use AI in a couple of ways. For our business simulations, we use it to give people feedback, and we use it to mix things up a bit, to change the context and make it more challenging.
If they start mastering what they're doing, we can up it, make it more difficult, and pull them even higher. The AI can do that and create an ever changing context that's sensitive to the learner. It can learn about the learner and change the environment. But it's very human centered.
In other words, we're still focused entirely on pulling the individual forward to a higher level of expertise. AI is another tool. If they realize that AI would be better at something and would do it faster, then they, as an agent, make the decision about how to deploy it.
- [Ryan] Based on what we've been chatting about, can you talk through some real world examples of these ideas being applied?
- [Lia] Take mining, for example. Mining is dangerous, and even though it's quite advanced at this point, block cave mines are almost like skyscrapers in reverse: they go down, and all the operations are underground, but they're very sophisticated. It's increasingly automated by AI. And yet somebody has to be somewhere, sometimes 3,000 miles away, managing what all of the AI does without being there. They have to understand the worldview, the assumptions, of the deployed AI systems like remote haulers. These are multi-ton haulers.
If one runs over your foot, it's actually not survivable, because it'll pull you under. So being able to understand how the equipment thinks and manage a large fleet in a several square mile operation from 3,000 miles away is a good example. What we've noticed is that these operators use AI to extend their own cognition as if they are there. They point it in ways that give the best advantage, and they also learn to interpret its biases. Sometimes a multi-ton truck will stop and say, there's a school bus in the bottom of the pit. We know there's not a school bus in the bottom of the pit in Western Australia, but we know there's something triggering that tag. So it's up to the operator to decide whether or not it's an actionable item.
- [Ryan] You mentioned something I want to ask you more about: being able to replicate knowledge and expertise, like cloning expertise. How is that being done, and what value does it provide for an organization? It's a really interesting idea to elaborate on.
- [Lia] We don't clone expertise with AI. We clone expertise in the person. And expertise is a moving target. In the example I just gave you, being able to run a whole fleet from a laptop is a new skill, right? Particularly a multi-billion dollar fleet.
The idea is that in any domain where expertise is known, where we know what an expert looks like, we can use iterative cycles of experience and give people feedback to pull them up to that level. An example I like to use is chess. Even though a lot of chess masters don't have a high opinion of each other, they do recognize another chess master when they see one. There is a codifiable, homogeneous way of looking at the game. It's the same in any domain. Once we codify that, we can pull people towards it by iteratively telling them, what you just did is not like an expert, try again; that's a little closer. And their cognition reorganizes much more quickly.
- [Ryan] It seems natural that this would enhance people's ability to do their jobs. But what about the discussion that always comes up with AI around replacing jobs? Do you believe this will improve and enhance certain positions and individuals, or could it also lead to not needing as many people, since you'll have more experts on the team as they go through this process?
- [Lia] People's jobs will change, and they always have. In the 90s, with enterprise technology, expediters on the factory floor were replaced. It's just been happening. Take the printing press: all those scribes at the monastery got put out of business when we had a printing press, right? It's just the evolution of where the skill has to reside in a human being.
- [Neil] You mentioned uplifting people, using chess as an example. Are you saying you could help people become great chess players like Garry Kasparov? Is that the cloning of expertise?
- [Lia] We haven't tried to do that, but theoretically, it should work. We have been able to do it in business domains, and the financial impact on the companies they work for is pretty dramatic. Our biggest problem is getting people to believe we really did it. It's really just helping people make the pivot to the skill that's appropriate for how the world has changed and pulling them up to that expertise.
- [Neil] You've worked with over 50 of these types of projects and companies. You talked about the military and mining; what other areas have you seen, and what results are they getting? Is it cost efficiencies, is it profitability?
- [Lia] When we worked in financial services after the subprime mortgage crisis, we helped some large financial institutions recover. It turned out they had an inaccurate theory of how the market would behave, and the subprime mortgage crisis revealed that there were new opportunities for variation.
They had to practice navigating around those eventualities, and it was a very successful approach. We also implemented cycle-based maintenance technology for New York City Transit. Our project in the 90s was the first ever deployed at the front line with the blue collar workers, and it saved New York City Transit about $400 million before we were even done.
Transit is a good case because none of these guys had ever used computers before. We didn't send them to a computer training school. We had them use what they already knew about complex transportation equipment, trains and buses and routes, which they're not even aware that they know: what causes failures and what increases the mean distance between failures of equipment. Most equipment has weaknesses, so brakes, things like that, are always likely to bring the whole party to an end, and they know that instinctively. We were able to get them to redeploy what they knew instinctively in a technology that recorded their decisions and gave them the data to make decisions.
By the time we were done, we looked at the audit logs of the workers, and we couldn't even tell what they were doing, because they had gone so far beyond anything we could understand or teach them. So we had to measure expertise using a Chaffee test, which looks for increased homogeneity in the keystroke patterns, which you would see with increased expertise. But we were not able to follow them and understand what they were doing.
- [Neil] You mentioned data-driven decision-making. We all talk about that a lot, and I think we understand the value from a mathematics standpoint. But you still see a lot of people relying on the gut decision: I just know by experience, or I know what the data says, but I know it's really this thing, or this is a better decision. Is that a barrier for what you're trying to do with FutureView?
- [Lia] What we do is manipulate the gut feeling and empower it to be more accurate. The gut feeling is always your competition. It overrides everything else. People can't not see the world the way they do. So you have to change how they see the problem by manipulating the adaptive unconscious and changing that gut feeling. People walk out of our games and say, I didn't learn anything; everybody now sees what I see. But the first time they went through it, it was a disaster. They were not doing that. The most dramatic example was a biotech company that had $800,000 worth of product on back order at any given moment. They said that's normal: it's a biological product, it grows at its own pace. They were about a $300 million company that was probably going to implode, because what they made was not that special. They had competition. We realized through some analysis that the way for the company to succeed was to be on time. That's the only advantage they had. They were like Amazon for drug development: overnight delivery of the stuff you need to test things, right? But if somebody delivered it faster, they were going to be out of business, because what they made was not that special.
We did FutureView with them, and we turned the whole thing around. After three months, their average back order was $1,100, and they later sold for $16 billion. When we went to do follow-up interviews, we asked, what do you think of the fact that you had $800,000 on back order at any given moment three months ago? They said, oh, that was never true. We never did that. It was never that bad. You couldn't stay in business that way. They said, oh no, you always have to pay attention to the schedule. You have to work back from the due date. All this stuff that they thought was impossible was now their idea.
- [Ryan] Let me take a step back and ask about your experience with the many projects Neil alluded to. What are the overall struggles that organizations, and individuals within organizations, face when it comes to learning in general or thinking creatively? What are the main things that are apparent in those conversations before you engage and work with them, and what are they trying to achieve?
- [Lia] My colleague, Dave Lehmann, says it's fear and ego, and I think that's probably true. We live in a culture where we're trying to get A's in school, we're trying to be right, and school is an instructional system. People do learn in school, but they learn despite the fact that they're instructed. I don't really believe people learn through instruction, but we think we do. Because we're aware of that experience, we can reflect on it and remember it, so we think that's how we've learned.
A lot of our learning happens without our realizing it. Right now you're learning, and you're not necessarily aware of it. Every time we do something, we change as a result. I think organizations may not understand that, or they want to have control over the instruction process. They think if they open the top of somebody's head and pour something in, that person will understand it. But there's an expression: I could explain it to you, but I can't understand it for you. What we are trying to do is get people to understand something.

As far as barriers with companies: 20 years ago, people thought we were insane. The only reason we got the work at New York City Transit is because they were desperate. They had tried to implement cycle-based scheduled maintenance, and all the computers ended up in the Hudson River, and they couldn't do anything about it because the workers were unionized. I had done a small project in the air brake shop for my dissertation, changing how a small group of people, fewer than 40, thought about enterprise systems. After I left, which I didn't know about because I moved to California to be a professor, that shop became a profit center for the whole Northeast. They were doing all the air brake remanufacturing for the old trains, including Amtrak trains, because they had learned how to use supply chain management technology. So Transit called me and said, can you do that again with 3,000 people? I said, theoretically, it should work.
- [Ryan] Any time you take a second to think about how we as individuals learn anything, I think it's hard for a lot of people to comprehend and break down how that happens, even in themselves. So to see your focus on analyzing that and building a system that helps people learn more quickly, get smarter, and become experts, benefiting them individually and benefiting companies, it's such a good showcase of how these new technologies, how AI, can really make a difference in people's lives and in organizations. It's fascinating to learn about.
- [Lia] Yeah, I think AI has a lot of promise for making people's lives better and for extending their cognition. Think of us all using Google, right? We've been doing that for years. We also have a mental health application we're working on now called Journey. The idea there is that anxiety is learned.
We learn to be afraid. We have a theory of what's going to happen to us that's not necessarily true. And if we practice alternative theories, alternative approaches to handling a situation, we can learn to have process confidence in what's emotionally difficult for us.
- [Ryan] This conversation has been super fascinating. Neil, I wanted to pass it over to you to see if you had any last questions, comments, things you wanted to make sure we touched on before we jumped off here.
- [Neil] I remember reading that human beings are born with only two instinctive fears: loud noises and falling. So the whole idea that we learn fear, and can maybe unlearn it, is a fascinating topic. Absolutely fascinating discussion. I don't think a lot of people realize how hard it is to teach creative and critical thinking, but AI might be a tool to help us. One thing I really appreciate: I often use the quote from Plato that we need pain to learn, and I think this is a validation of that. The fact that we can actually control, manage, and even trigger some of that pain, that little bit of cognitive dissonance, if you will, is just absolutely powerful. The ability to clone expertise, the ability to make us better thinkers and develop our skills, that's a huge boon across the board.
- [Ryan] Lia, for our audience out there who wants to learn more about what you're doing or follow up with questions and thoughts, what's the best way to do that and dive in further?
- [Lia] We can send them to our website, futureviewplatform.com. We have an information button there. They can write to people, or they can contact either Neil or me.
- [Ryan] Lia, thank you so much for taking the time. Fascinating conversation, on a topic we haven't had a chance to dive into much before. Really cool work you have going on over there. We're excited to see it keep spreading to more people and organizations. Thank you for your time; we really appreciate having you on.
- [Lia] Thank you for inviting me. It was a lot of fun for me too.